The origins of Java

In 1990, Patrick Naughton, a disgruntled software engineer working for Sun Microsystems (the company was acquired and merged into the Oracle Corporation in 2010), was considering leaving for a rival company, and in a letter to Sun's management he detailed the shortcomings of Sun's software division. Shaken by Naughton's perceptive assessment of the problems their software division faced, Sun's management persuaded Naughton, Bill Joy, James Gosling and three others to form a research group to create something new and exciting that would allow the company to catch the next wave of computing.

In early 1991 the group met and decided to look at the application of computers to consumer electronics. At this early stage the team were considering such household items as video cassette recorders, fridges, microwave ovens and washing machines, and thinking about the possibilities of developing, say, a central control unit (possibly handheld) for all the units in the system. In this way the group evolved the concept of a network of different types of device (kitchen equipment, entertainment units, etc.) that could all pass information between each other as necessary. A crucial part of the project was to decide on the best programming language to achieve the team’s objectives. This task fell to James Gosling who initially looked at C++. However, further investigation led to the conclusion that the difficulties encountered with C++ were best addressed by creating an entirely new language.

Given the application, the language needed to have the following characteristics.

  • Familiarity – the C and C++ languages were widely used in consumer electronics, so basing the syntax of the new language on these existing ones would aid acceptance and hence use.
  • Platform independence – the concept was to have a range of devices from different manufacturers communicating with each other, and thus the language would need to be able to perform on a variety of processors. This characteristic meant the language would have to be an interpreted language that could be run on virtual machines located on each device. Under this scheme, bytecode containing the appropriate instructions could be produced on one device (for example, the central control unit), then sent around the home network for execution on the virtual machine residing on the device requiring control (see Subsection 2.3 if you need to remind yourself how an interpreted language is used).
  • Robustness – for consumer acceptance the new technology would need to run without failure. Thus the underlying language technology should omit various error-prone features of C and C++, and incorporate strong syntax checks.
  • Security – as the various devices would be exchanging information within their network, the language would need to prevent intrusion by unauthorised code getting behind the scenes and introducing viruses or invading the file system.
  • Object-orientation – as the architecture of object-oriented languages fits so well with the architecture of client–server systems running over a network, the language should be designed to be object oriented from the ground up. In a client–server system, software is split between server tasks and client tasks. A client sends requests to a server asking for information or action, and the server responds. This is similar to the way that objects send messages to each other and get replies (message answers) back.
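The first and last of these characteristics still describe Java today: source code compiles once to portable bytecode that any JVM can execute, and objects interact by sending messages (method calls) and receiving replies (return values). A minimal sketch in modern Java (the class and method names here are illustrative, not drawn from the original Oak project):

```java
// Greeter.java
// Compiled once with `javac Greeter.java`, the resulting Greeter.class
// bytecode runs unchanged on any platform's Java virtual machine.
public class Greeter {

    // An object responds to a "message" (a method call) with a reply
    // (a return value) - much like a client sending a request to a server.
    public String greet(String name) {
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        Greeter greeter = new Greeter();
        System.out.println(greeter.greet("world"));  // prints "Hello, world!"
    }
}
```

The same compiled class file could, in principle, be sent over a network and executed on any device with a JVM, which is exactly the scheme the Green team envisaged for its home-network devices.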

The design and architecture decisions were drawn from a variety of languages including Eiffel, Smalltalk, Objective-C and Cedar/Mesa, and Gosling completed an interpreter for the language by August 1991. He named the language 'Oak', apparently after the tree that grew outside the window of his office (although other stories abound). Despite great expectations, no commercial applications were found for Oak and in 1993 the project was cancelled by Sun.

In the meantime, since 1991, Tim Berners-Lee's brainchild, the world wide web (WWW), had gone from strength to strength because Berners-Lee had made the source code public. However, at this point the web was still mainly the preserve of scientists and academics. There were only about 50–100 websites, they were complex to use, and interaction was limited by slow internet connections. Early browsers could not display images, video and text on the same web page; websites mainly consisted of hypertext (text containing links to other, mainly text, documents), and users (concentrated in academia and government) still relied heavily on command-line tools (such as telnet and ftp). Finding resources was also not straightforward, as there were no user-friendly search engines, and few people had email addresses. In all, it was a hugely more limited experience than the situation today – thus the web was not of much interest to the general public.

A major step forward occurred in early 1993 after a graduate student at the University of Illinois’ NCSA (National Center for Supercomputing Applications), called Marc Andreessen, together with a team of colleagues, released the first version of a new Unix-based browser. The browser – NCSA Mosaic for the X Window system – was especially interesting as it offered the user a straightforward graphical user interface. Andreessen and co-workers continued their programming and, later that year, a real landmark in the history of the world wide web was reached when they released free versions of the Mosaic browser for the Macintosh and Windows operating systems. Not only could the browser display text, graphics and video clips on the same page (and play audio), but, crucially, the browser was relatively easy to use and available for three popular operating systems. The world wide web had become multimedia and the online community liked it. No longer was the web an environment for dry scientific documents; it now became a virtual world full of colour and moving images. Subsequently Andreessen and most of his team left NCSA to form the Mosaic Communications Corp., which later developed the Netscape browser, catching the wave of public interest at just the right moment to make them all millionaires.

Until that point, the web had been totally overlooked by large corporations such as Sun Microsystems and Microsoft. However, public reaction to Mosaic convinced a few key members of the original Sun Microsystems team that Oak could play a part in the web explosion. After all, Oak was platform independent, ran on a virtual machine and was designed to run over a network – albeit a network of toasters, fridges and televisions. If a browser were written that incorporated an Oak virtual machine, any Oak program residing on a web server could be downloaded and executed within that browser. Such applications could make the web experience far more interactive and, more importantly, lead to commercial exploitation of the web. An Oak program running within a web browser would be able to query a database, take customer details, and take online payments. A eureka moment had been reached.

In September 1994, Patrick Naughton wrote a prototype browser called WebRunner that incorporated an Oak virtual machine. The idea that a browser could support Oak applications (called applets) excited many, and WebRunner was the perfect platform from which to demonstrate the power of the language. Unfortunately a trademark search revealed that the name Oak was already taken and, so the story goes, the team came up with the replacement name – Java – during a trip to a coffee shop. Thus Oak was renamed Java and WebRunner renamed HotJava.

The first public release of Java and the HotJava web browser came on 23 May 1995, at the SunWorld conference. The announcement was made by John Gage, the Director of Science for Sun Microsystems. His announcement was accompanied by a surprise announcement by Marc Andreessen, Executive Vice President of Netscape, that Netscape would be including Java support in its browsers. As Netscape was, at the time, the world’s most popular browser, such support gave the Java language a major boost and significant credibility – so much so that Microsoft soon followed suit and implemented Java support in Internet Explorer. From then on Java’s popularity as a programming language grew meteorically and it has now grown into a full-scale development system, capable of being used for developing large applications that exist outside the web environment. More recently, although the importance of Java for web browsing has been reduced by the advent of web development technologies such as Adobe Flash and AJAX, Java continues to be widely used for producing desktop, mobile (handheld) and commercial-scale applications.

Reference:
The Open University (2012) M250 Object-oriented Java programming: Unit 1, Milton Keynes, The Open University.
