Tuesday, April 13, 2010

What is Graphic Design?

It seems self-explanatory - Graphic Design is work done on the design of graphics. It's not that simple, though. Graphic designers do much more than work with photos and illustrations. They also work with typography - the placement of words and the size, color, and style of fonts. And they work with the placement of imagery and words on pages to convey not only information but also moods and emotions.

Designers are the link between the client and the audience. Clients may be too close to their business and their own needs to get a clear picture of their audience's needs, and they also may not understand there is more than one way to convey the same message. Graphic designers work with their client to understand the purpose and the content of a message.

Designers may work with many other people, such as photographers and printers. Some of the visual media they work to produce are business logos, magazine ads, album covers, posters, flyers, graphics, and web designs. Before I read the description of what a graphic designer does at http://www.aiga.org/content.cfm/guide-whatisgraphicdesign I knew the basics, but I didn't realize graphic design was so involved and complex.

How Twitter Works: It May Not Be How You Think

When you think about Twitter you may think of it as an online text messaging site where people talk about mundane daily activities or actors tweet to their fans. These are just a couple of the myths about Twitter listed in an article on HowStuffWorks.com. People are not just talking about the coffee they had for breakfast. Twitter has gained an enormous number of users in its short existence, and people are using it for everything from micro-blogging to business marketing to political organizing.

In the past couple of years twitterers have kept the world updated on attacks in other countries, and Iranians used it to organize protests. There is no right or wrong way to use Twitter. It can be a powerful tool for businesses when implemented correctly. Businesses can embed Twitter's API into their websites and use Twitter to make quick and easy updates to their sites. Or Twitter can be used simply for fun and chatting.
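To make that API idea concrete, here is a minimal Python sketch of what posting an update programmatically might look like. The endpoint, the placeholder credentials, and the requests_oauthlib library are assumptions for illustration, not Twitter's official sample code.

```python
# A minimal sketch of posting a status update through Twitter's REST API.
# The endpoint, credentials, and library choice are illustrative assumptions.
from requests_oauthlib import OAuth1Session

# Placeholder credentials you would get by registering an application with Twitter.
CONSUMER_KEY = "your-consumer-key"
CONSUMER_SECRET = "your-consumer-secret"
ACCESS_TOKEN = "your-access-token"
ACCESS_TOKEN_SECRET = "your-access-token-secret"

def post_update(text):
    """Send a single tweet; the same text could also be written to your site."""
    session = OAuth1Session(
        CONSUMER_KEY,
        client_secret=CONSUMER_SECRET,
        resource_owner_key=ACCESS_TOKEN,
        resource_owner_secret=ACCESS_TOKEN_SECRET,
    )
    # Hypothetical status-update endpoint; check Twitter's current docs.
    response = session.post(
        "https://api.twitter.com/1.1/statuses/update.json",
        data={"status": text},
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    post_update("Store hours extended until 8pm this Friday.")
```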

Some of the features I like best about Twitter are that it's quick, easy, and portable for use on mobile phones. You can log in from your cell phone and read or post updates instantly. If it's linked to your website, then your website has just been updated too. You don't have to worry about going into your website and messing up the code. All you have to do is hit send, and you've killed two birds with one stone: you've kept up your Twitter connections or marketing and updated the news on your website.

Michele Simon

Monday, April 5, 2010

Web Page Optimization for Faster Download Time

It just makes sense. The more you have on a web page and the more you load it down with cool extras like Flash, the longer it will take to download. But is this such a big problem with all the super speedy computers we have nowadays? According to statistics, around 30% of users are still on dial-up. So if you want to casually disregard that 30%, then go ahead and pile it on.
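As a rough illustration of why that 30% matters, here is a quick back-of-the-envelope Python calculation of download times. The page size and connection speeds are illustrative assumptions, not measurements.

```python
# A rough estimate of download time by connection speed.
PAGE_SIZE_KB = 500          # total weight of HTML, images, CSS, scripts, Flash

connections = {
    "56k dial-up": 56 / 8,   # ~7 KB/s in ideal conditions
    "1.5 Mbps DSL": 1500 / 8,
    "10 Mbps cable": 10000 / 8,
}

for name, kb_per_second in connections.items():
    seconds = PAGE_SIZE_KB / kb_per_second
    print(f"{name}: about {seconds:.0f} seconds for a {PAGE_SIZE_KB} KB page")
```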

For those who prefer to target 100% of users, Webmonkey has several tutorials that help you optimize your site for quick and easy loading. The first tutorial makes several suggestions about trimming excess from your site and using the correct image formats for your graphics. It also covers when to compress images and how to use interlaced graphics and progressive loading to speed things up. One good suggestion from the first tutorial: if you reuse the same images, like logos and headers, across pages, they will be loaded from the user's cache, which speeds up download times.
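As a rough illustration of the image advice, here is a small Python sketch that resizes and recompresses a photo as a progressive JPEG. It assumes the Pillow imaging library and hypothetical file names; the Webmonkey tutorials describe the same ideas using desktop tools rather than code.

```python
# A minimal sketch of re-saving a large photo as a compressed, progressive JPEG.
from PIL import Image

def optimize_photo(src_path, dest_path, max_width=800, quality=75):
    """Resize and recompress a photo so it downloads faster."""
    img = Image.open(src_path)
    if img.width > max_width:
        ratio = max_width / img.width
        img = img.resize((max_width, int(img.height * ratio)))
    # 'progressive' renders a rough preview first, much like interlaced GIF/PNG.
    img.save(dest_path, "JPEG", quality=quality, optimize=True, progressive=True)

optimize_photo("header-original.jpg", "header-optimized.jpg")
```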

The second Webmonkey tutorial covers web page layout. Tables can be a nightmare for browsers and can really slow down download times if they are not implemented correctly. Simple tables, when formatted correctly, are best, but when you nest tables things start slowing down. The more tables you nest, the longer it takes the browser to read and render them. CSS is a newer formatting language for browsers that has become very popular and is fast becoming the standard for formatting web pages. With CSS you don't need tables, and you can remove all excess formatting code from the HTML pages. This speeds things up.

I don't think people should jump right in and start cutting down their sites before they take a look at their target market. For example, how likely is it that a business selling upscale diamond jewelry will be selling to someone on dial-up? I did a little research on dial-up users today and found some articles that said most people using dial-up have the ability to upgrade but continue to use it by choice. With competition on the web becoming fiercer every year, it doesn't make a lot of sense to dumb down your site, making it less visually stimulating and appealing, to cater to these diehards. Still, for businesses and informational sites that want to reach the largest possible portion of their target audience, the challenge is in making a sweet compromise.

Michele Simon

Monday, March 29, 2010

Universal Access and The Wireless Revolution

Back in 2003 Michael L. Best from the Program in Internet & Telecom Convergence at the Massachusetts Institute of Technology wrote about the then-current wireless revolution and how it could affect the world if done right. This excerpt from his book, while outdated from a technological point of view, contains interesting theories on where wireless was and is headed.

Best said universal access to the wireless infrastructure can make poor and rural markets profitable, but three critical innovations are necessary. First, we would need new, low-cost technologies. Second, micro and small enterprises would need to provide services that create value for the community while keeping revenues up. Third, wireless access would need to be supported by public policy makers as a source of development and not a source of government revenue.

Many countries, including the US, have licensed certain frequencies for business and other uses. For costs to remain low, allowing universal access to the wireless infrastructure, the signals need to be transmitted over unlicensed frequencies. That way small local networks will be able to provide access to the wireless infrastructure, and these small networks, when connected together, could provide global communication and access to the internet at an affordable price. This is most important in poor areas and rural places that don't currently have wired communications. Placing a wired structure in many of these areas would be too costly.

A good quote explaining this from Best’s book excerpt is: “Increasingly, policy experts agree that the concept of universal access should not end with basic voice services, but must also embrace value-added services, including the Internet. This is not simply because of the social and economic value of the Internet--although that would be reason enough—it is because the Internet is critical to the financial sustainability of rural access.”

I think Best is correct. In many rural places and countries it would be almost impossible to build the wired communications they need to thrive. The way to make this happen quickly and at affordable rates is to let communities connect together and, for now, keep licensing free.

Are podcasts significant and profitable?

A podcast is a digital audio file made available for downloading from the internet through a feed to your computer. It can be synced with your mp3 player. Podcasting is great because people can choose the media they want and listen to it when they want to. It has also made it cheap for anyone to become a radio producer and has opened up the field of radio to the average person. For businesses it can be a powerful way to connect with customers and promote products to them on a regular basis, and it has opened the door for many people to become recognized experts or celebrities in their niche.
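For anyone curious what "downloading through a feed" looks like under the hood, here is a minimal Python sketch that reads a podcast feed and grabs the latest episode. The feed URL is hypothetical, and the feedparser library is an assumption for illustration.

```python
# A minimal sketch of reading a podcast feed and downloading its latest episode.
import feedparser
import urllib.request

FEED_URL = "https://example.com/podcast/feed.xml"  # hypothetical feed

feed = feedparser.parse(FEED_URL)
latest = feed.entries[0]
print("Latest episode:", latest.title)

# Podcast feeds attach the audio file as an RSS "enclosure".
audio_url = latest.enclosures[0].href
urllib.request.urlretrieve(audio_url, "latest-episode.mp3")
```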

In 2005 National Public Radio launched podcasts, and within a week they had the most-downloaded podcast on iTunes. While NPR has offered repurposed material in their podcasts, they have also offered some with new material. They try to present their audience with fresh material that isn't offered on the air or on the web, and they have been experimenting with different types of podcasts. They found that most people preferred shorter podcasts, like their "Story of the Day," which highlights their editorial picks, probably because listeners are multitasking and don't have a lot of time to listen.

NPR is well known as free public radio with quality content, but they have to bring in money somehow to keep going. Sponsors and underwriters have so far provided much of their funds. NPR has found that podcasting is a new source of income that can help keep them alive. To make money on podcasting they run ads, but to avoid irritating their listeners they are very careful about how often ads are played and which ads are offered. For a 30-minute podcast they usually have two ads, placed at the beginning and end. They also look carefully for ads that would interest their listeners.

Has NPR set a new model for businesses using podcasting? I think that having new material available in podcasts is likely to attract many people. Their brief updates are also a good idea: not everyone can listen all day, or at certain times, and short podcasts are a good way for them to keep up with and stay interested in the programming. Carefully selected ads, such as a political program airing an ad for a site where you can get more information, seem like a great idea. Not all ads are bad. Many offer people helpful information and ideas. If more producers were as careful about the ads they chose to air with their material, people might not find them as irritating.

Wednesday, March 10, 2010

The open source GNU Project: free software, free contributions - Free

Have you ever gotten "blue screens" or system errors while working on your computer and had to wait an intolerable amount of time to get help from the manufacturer? Have you tried to build your own computer from scratch to cut down on cost, only to find the price of the operating system is twice what the parts cost you? In the early 80s Richard Stallman announced a project to develop the GNU operating system, a free Unix-like operating system. GNU, which stands for 'GNU's Not Unix', is open source software that anyone can contribute to. Anyone can also get it for free and copy and distribute it. Because of Richard Stallman and many other talented programmers willing to work without pay, we now have free operating systems like the combined GNU/Linux.

The GNU Manifesto, found at http://www.gnu.org/gnu/manifesto.html , was written by Richard Stallman when he first began the GNU project to ask for participation and support. Stallman personally did not like having to sign licenses and agree to 'intellectual property' restrictions. He believed that software should be free, just like air, and used the example of a space station in which everyone had to pay for air per breather and per liter: it would be better to support the production of air with a tax than to charge the breathers.

In the GNU Manifesto Stallman tried to address many of the concerns and questions people had about free software, such as support, distribution, and the effect on programmers. Today millions of people use GNU/Linux. It has a reputation as one of the safest operating systems available and, if you know how to get it, it's free. Many talented programmers have willingly contributed to the project.

Personally I don’t like how Stallman casually dismissed what he thought would be the decline of high paid programming jobs. I have an inkling that Stallman may be one of those talented individuals that programming comes easy to and he doesn’t realize how difficult it can be to countless people. In music, art, literature, science and even programming, which at the highest level requires a great deal of talent, creativity and innovation, there are individuals that excel and make significant contributions and advancements to their vocation. Such a decline as he suggested would be a travesty. In the footnotes though, he later added - “The custom software business would continue to exist, more or less unchanged, in a free software world. Therefore, I no longer expect that most paid programmers would earn less in a free software world.”

Tuesday, March 9, 2010

Clay Shirky on Web 2.0

When the Industrial Revolution came about, people suddenly found they had a lot of free time available to them. According to Clay Shirky, they first spent that time consuming alcohol; then, when TV was invented, they soaked up much of that time in front of the set. The invention of the Internet has changed that. Now people are choosing to spend a lot of their free time in front of a computer participating in social sites such as YouTube, Facebook, and Wikipedia.

Shirky has published a video on Blip.tv about this subject. While the topic is interesting, I found Shirky's anti-TV, pro-social-networking stance, with sites such as Wikipedia held up as examples, incredibly biased and a little silly. Wikipedia, while interesting to study from a cultural point of view, is incredibly useless and a phenomenal waste of human cognitive time that could be better put to use elsewhere. You can't trust anything you read on Wikipedia.

Social networking may not be just a fad. It may be just the beginning of something that is here to stay. But arguing that sitting in front of a computer wasting time on useless social networking sites is better than sitting in front of the TV is laughable. Television, while many have the potential to abuse it, also has many educational channels and shows produced for people to easily understand and enjoy. Many brilliant writers and producers have created stories, such as "The Day After" (1983) and "The Burning Bed" (1984), that shocked the world and made people really question what was going on in our society. TV and Web 2.0 both have the potential to be misused. But they both also have the potential to captivate and affect our society.

Web 2.0 does give people the power to create and produce and participate. Right now, it does have enormous potential, but whether it will be the savior of our society or the downfall is still unknown.


Michele Simon

Social software design and group politics

Before the Internet existed we communicated by talking over the telephone and through the media, such as newspapers, and before that by telegraph. Having a group meeting usually meant getting together in one place. With the invention of the Internet and the Web we are able to communicate easily and instantly with people over vast areas. We do this using social software such as forums, email, and MUDs. Designing the social software that allows people to come together in group 'meetings' over the web, though, poses challenges that other types of software do not.

Clay Shirky first published an article titled "Social Software and the Politics of Groups" on March 3rd, 2003, that explained the complications that arise from having a social network that is often anonymous. "The thing that makes social software behave differently than other communications tools is that groups are entities in their own right," he said. Designing software for groups cannot be done in the same way as typical software, because the behavior of people interacting with one another cannot be predicted from that of an individual user. Trust and reputation are concerns that need to be addressed, along with trolling, flaming, and off-topic conversations.

In order for groups to be successful they need to be able to focus on their topic. Because the Internet has allowed people to express themselves individually, and they covet that freedom, the needs of groups are often disrupted by the needs and desires of individuals. Groups that are completely free and open often fail miserably. Successful groups have used some type of moderation and/or registration.
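As a toy illustration of what registration plus moderation might look like in code, here is a minimal Python sketch. The rules and the flagged-word list are made-up assumptions, not anything from Shirky's article.

```python
# A toy registration-and-moderation gate for group posts.
REGISTERED_MEMBERS = {"alice", "bob"}
FLAGGED_WORDS = {"spam", "flame"}          # stand-ins for a real filter

moderation_queue = []
published_posts = []

def submit_post(author, text):
    """Reject anonymous posts and hold suspicious ones for a moderator."""
    if author not in REGISTERED_MEMBERS:
        return "rejected: registration required"
    if any(word in text.lower() for word in FLAGGED_WORDS):
        moderation_queue.append((author, text))
        return "held for moderation"
    published_posts.append((author, text))
    return "published"

print(submit_post("anonymous", "hello"))       # rejected
print(submit_post("alice", "On-topic reply"))  # published
```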

I've participated in a large, active, focused forum before, and I found that the interaction can be fun and at times exhilarating. But I've also found it extremely frustrating, with people hijacking threads and talking off topic, and trolls causing all sorts of trouble. I remember someone once saying that a forum with paid membership has fewer of those problems. I tried one of those and found that to be somewhat true, but even those were not immune. While social software has been getting better and better, we still have a lot to work on in the way we use it.

Wednesday, March 3, 2010

Making it easier for people to find information and web pages on the Internet

Not being able to find information you know exists somewhere is a very old problem. Information architects have applied well-tried tools from library science to solve this problem on the web, but it is still increasingly difficult to search for and find the information you want. There is hope on the horizon: topic maps are a new tool available to help solve this problem.

Information on the web is organized using metadata. Metadata is currently the foundation of all information retrieval on the Internet. Metadata can be defined as information about a document, an image, etc. It contains the title of a web page or document, a description, and keywords that define the subject. Often this data is not enough to clearly define the subject, so when searches are made the web page may either not show up, or may show up under a closely related subject. One of the main reasons is that the title and description can be ambiguous or not contain enough information, and the keywords, which can be entered by the author, can cover too many subjects or not clearly define the topic.

According to the abstract of Topic Maps vs. Thesauri, written by Lars Marius Garshol, "topic maps are organized around topics, and each topic is used to represent some real-world thing." In the article Garshol explains how metadata works, along with other library-science data retrieval systems such as taxonomies and thesauri. So far these 'library science' ways of organizing and categorizing information for retrieval are not working very well. Garshol goes on to explain how topic maps can solve this problem. "By using topic maps to represent metadata and subject-based classification it is possible to reuse existing classifications and classification techniques, while at the same time describing the world more precisely where desired," he says.
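To make the contrast concrete, here is a toy Python sketch comparing flat keyword metadata with a tiny topic-map-like structure. It is only an illustration of the idea, not the actual topic map standard or Garshol's examples.

```python
# Flat metadata: keywords are just strings, so "jaguar" is ambiguous.
page_metadata = {
    "title": "Jaguar facts",
    "description": "All about jaguars",
    "keywords": ["jaguar", "cats", "cars"],
}

# Topic-map style: each topic stands for one real-world thing, and associations
# and occurrences say how topics relate and where they are documented.
topics = {
    "jaguar-animal": {"name": "Jaguar (big cat)"},
    "jaguar-car": {"name": "Jaguar (car maker)"},
    "big-cats": {"name": "Big cats"},
}
associations = [
    ("jaguar-animal", "is-a", "big-cats"),
]
occurrences = {
    "jaguar-animal": ["https://example.com/jaguar-facts"],  # hypothetical URL
}

# A search can now land on the intended topic instead of a keyword collision.
def pages_about(topic_id):
    return occurrences.get(topic_id, [])

print(pages_about("jaguar-animal"))
```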

I personally would love to be able to easily search for a subject on the web and get quick, accurate hits! As the information overload on the internet increases, our search engines are becoming bogged down in irrelevance. A new, accurate way of conducting searches is needed. Hopefully topic maps are the answer. I believe it is just as important for site owners to accurately define their sites in the metadata, using the tools available. If web pages aren't defined well, they may miss their target audience.

Designing websites with user experience design

When you visit a website the images, text, and navigation are the first things you see, but designing a successful website involves a lot more than those visual elements. Websites must be designed with the user experience in mind.

There are several stages of website design and they begin with a strategy. The strategy consists of the goals for the site that come specifically from the people who will use the website. The site owner’s own objectives are then balanced against user needs for the site. The strategy is always the first thing that is developed, before any visuals are created or other stages are planned. 

Each stage of the development of a website is dependent on the one before it. According to Jesse James Garrett's book 'The Elements of User Experience', the stages in the development of a website are Strategy, Scope, Structure, Skeleton, and Surface. If you try to create a website out of sequence, for instance creating the surface visual design before the strategy, you can end up with some awkward problems or poor design.

Garrett says there are two basic types of websites: the informational Hypertext System, which contains information and hyperlinks, and the Software Interface, which is mainly concerned with tasks (the site is considered a tool or set of tools that the user utilizes to accomplish tasks). The stages of design for each of these types, depending on which you're working on, are basically the same, but the Scope, Structure, and Skeleton address slightly different needs.

When designing a Software Interface, the Scope addresses the functional specifications of the site (a detailed description of the "feature set" of the product), while the Scope of the informational Hypertext System addresses the content requirements. The Structure of the Software Interface looks at interaction design, while the Hypertext System focuses on information architecture - the arrangement of content elements on the page. The Skeleton of both types includes information design, which is how the information is presented so users can understand it; the Software Interface, though, concentrates on interface design and user interaction, while the Hypertext System focuses on the site's navigation design.
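As a quick reference, here is a small Python sketch that restates the comparison above as data. The wording is paraphrased from this post, not taken from Garrett's book.

```python
# Garrett's five planes, with the middle three split by site type.
planes = {
    "Strategy": {"both": "site objectives balanced against user needs"},
    "Scope": {
        "software interface": "functional specifications (the feature set)",
        "hypertext system": "content requirements",
    },
    "Structure": {
        "software interface": "interaction design",
        "hypertext system": "information architecture",
    },
    "Skeleton": {
        "software interface": "information design + interface design",
        "hypertext system": "information design + navigation design",
    },
    "Surface": {"both": "the visual design users finally see"},
}

for plane, concerns in planes.items():
    print(plane, "->", concerns)
```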

I consider designing a website around the user's experience extremely important, and it should never be overlooked. You run a high risk of losing touch with your audience or prospective customers or clients when you broadcast your own needs and desires on your site without considering how users will experience the website when they visit it.

Sunday, February 14, 2010

Should 'analog' human beings conform themselves to a digital society?

We are analog beings – flexible, tolerant and compliant - living in a digital – rigid, fixed and intolerant - world that we created. This is the main argument from an excerpt from Don Norman’s book The Invisible Computer that can be found at http://www.jnd.org/dn.mss/being_analog.html. Norman explains in depth how adaptable and flexible humans are, and how that does not work well in an inflexible computerized society that values efficiency and accuracy.

While Norman comes to some great conclusions and thought-provoking points, I found the core of his reasoning flawed, and that was very distracting when trying to glean the points he was making.

Norman goes into great depth to illustrate how humans are flexible, tolerant, and intuitive, and have evolved that way. Perhaps his greatest flaw is in trying to sum up human beings in a nutshell to serve his point of view. He either has forgotten or doesn't realize that when you break a human down to its smallest parts, all the way down to DNA, you have something that is almost mathematical and so precise that the smallest error could cause a person to have a beak or a tail. This doesn't happen very often. We are basically incredibly complex machines that have the ability to function beyond the mechanical and to adapt and evolve. The ability to adapt and evolve is necessary in our survival-of-the-fittest world.

Norman's main summary and points are still very interesting, though. It was painstaking to stay open to them given his flawed reasoning, but in the end he does point out the intriguing dilemma our society faces in either adapting computers to think more like humans or trying to adapt people to behave more like computers. I don't think we should push human beings toward behaving more like computers: we should use computers and tools to enhance ourselves and our greatest natural abilities.

Saturday, February 13, 2010

The Rudimentary Basics of the Internet

It's easy to sit in front of a computer and point and click, sending and receiving information in less time than the blink of an eye. Most of us take for granted all that happens to transfer that information. Ethan Zuckerman & Andrew McLaughlin have published an easy to read and understand explanation called Introduction to Internet Architecture and Institutions, which can be found here - http://cyber.law.harvard.edu/digitaldemocracy/internetarchitecture.html. They use easy-to-understand examples while still providing just enough technical information without overwhelming the reader. This is a great site for someone who just wants to know the basics. I know from experience that there are books and books of technical information on this subject, and a person without a degree could easily get lost in it all.

The most important thing to know is that the Internet around the world runs on IP, which stands for Internet Protocol. IP was invented in 1974 by Vint Cerf and Robert Kahn and has not changed substantially since then. A simplified explanation of IP is that it breaks information down into packets and sends them across media to a predetermined destination, where the information is rebuilt. IP has been adopted around the world and can be used across any medium, including fiber optic, cable, radio waves and, as Zuckerman and McLaughlin relate, even carrier pigeons. In order for the Internet to work, all the networks need to be able to talk and communicate with each other. This is done using IP, and because the Internet is a global network of voluntary and interconnected networks, no one can force a new standard to be used.
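Here is a toy Python sketch of the packet idea: the message is broken into numbered chunks and rebuilt at the other end. Real IP adds headers, routing, and error handling that this illustration leaves out.

```python
# Break a message into fixed-size chunks, label each with its position,
# and reassemble it at the destination.
def to_packets(message, size=8):
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"seq": n, "data": chunk} for n, chunk in enumerate(chunks)]

def reassemble(packets):
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("Hello from one network to another!")
print(reassemble(packets) == "Hello from one network to another!")  # True
```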

IP uses a number called an IP address to get information to its destination. We would never be able to remember all those numbers for all the places we visit on the internet, and numbers are hard for humans to work with, so the Internet uses an addressing system that consists of two types of identifiers: IP addresses and DNS, which stands for the Domain Name System. DNS associates a recognizable name, such as google.com, with the underlying address number. Each of these identifiers is unique.
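You can watch the name-to-number lookup happen with a couple of lines of Python using the standard socket module; the host name below is just an example.

```python
# Resolve a domain name to its IP address, the way DNS does behind the scenes.
import socket

name = "www.google.com"
print(name, "resolves to", socket.gethostbyname(name))
```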

ICANN, short for the Internet Corporation for Assigned Names and Numbers, is the overall coordinator of the Internet's systems of unique identifiers, including domain names, IP addresses, and protocol port and parameter numbers, along with the DNS root name server system. ICANN is a not-for-profit public-benefit corporation with participants from all over the world dedicated to keeping the Internet secure, stable, and interoperable. It promotes competition and develops policy on the Internet's unique identifiers. There are also other organizations, such as the IETF and W3C, that help define standards and make the Internet open, easy, and accessible to everyone.

The Internet is not so free and easy everywhere in the world, though. Developing countries face more challenges getting on the internet than countries such as the United States. Because they don't have numerous lines, connections, and service providers, they frequently have to re-route their traffic through satellites and other countries. This makes the service slow and very costly. Also, because their current service providers may be businesses that could be hurt by competition, that can stand in the way of growth.

According to Zuckerman and McLaughlin, "Achieving cooperation among competitors (in developing countries) is a profound challenge. In the United States, ISPs (Internet Service Providers) have their roots in the cooperative academic networks that came together to form the Internet; in other words, the cooperative technical operations and the techies that ran them were later joined by business managers who fought for advantage in the competitive marketplace. In the US, then, it has proven relatively easy for rival ISPs to remain cooperative at the level of network operations. In countries that are new to the Internet, however, the business-side competitive imperatives have come first, giving little support to the necessary culture of technical cooperation among peers."

Friday, February 12, 2010

Smarter people, smarter markets, and the importance for businesses to speak in a human voice

Back in the early days of the Internet, when the dotcoms were booming, a book called The Cluetrain Manifesto was published. With people connecting on the Internet around the world, employees of companies were gaining a voice, sometimes more prominent and influential than the companies they worked for. People who shared common interests, often referred to as markets in the business world, were also linking together, and those markets were becoming smarter than the businesses that hoped to sell to them. In The Cluetrain Manifesto, authors Christopher Locke, Doc Searls, David Weinberger, and Rick Levine offered some amazing revelations about how businesses were being run and how they needed to adjust to keep up with a society changed by the Internet.

One of the most important points, which can be found on their website http://www.cluetrain.com/, is that most corporations “only know how to talk in the soothing, humorless monotone of the mission statement, marketing brochure, and your-call-is-important-to-us busy signal.” This can become insulting to the people of the new linked-together market groups because they are now smarter than that. “To their intended online audiences, companies sound hollow, flat, literally inhuman” the Cluetrain states. It is necessary for corporations to speak in a human voice and sound “real.” The book goes on to explain this phenomenon in depth.

The Cluetrain website lists their 95 Theses, which alone - without reading the book - are very insightful. It also contains a link to read the complete manuscript online.

One of the more laughable things about this website is that it's a terrible example of a website, and people wouldn't stand for this format today. You have to read a good portion of the site just to figure out what it's about. In this day and age most people wouldn't take the time to do that when they first come to a new site.

The Cluetrain website isn't really trying to sell the book, though (which is still offered for sale, but is also offered free). This website is the original that came out ten years ago, and at the time it probably was not their intention to create an atrocious site. Left the way it was when it originally came out, the site is proof of some of the points they are trying to make: today people have become smarter and would not stand for this disorganized and disorderly format in a website.

Thursday, February 4, 2010

Information Overload

As a society we are suffering from information overload. With the development of media and data storage we have been able to store an immense amount of information. Just one newspaper today carries more information than a person a few centuries ago would have been exposed to in their lifetime.

It is important to be able to search through all this information and get what we need. In order to do that we need tools which are easy to use and enable us to make productive searches.

The focus has been on generating information, but we need to shift that focus to receiving it - to how information is controlled and filtered so it reaches the people who need it.

The Advent of New Media has Changed Our Horizons

"We are in the middle of a New Media revolution. All of our culture is being transferred to computer mediated forms of production, distribution and communication. " - (Manovich, pg 43)

What is New Media compared to "old" media? The original intent of computers, invented by Babbage and Turing, was to perform mathematical calculations. Around the time computers were being invented, photography and cameras had also been invented, and cinematography and moving pictures soon followed. When Babbage was designing his computing machine, called the Analytical Engine, a man named J.M. Jacquard invented a loom controlled by punched cards. The loom wove detailed figures and portraits. Images were already being synthesized by a programmable machine even before computers were put to work on mathematical calculations.

Ironically, the first digital computer, created by Konrad Zuse, was run by a program punched into discarded movie film. Half a century later, all existing media, such as images, became convertible into numerical data, making it possible to create digital images, movies, and more - and New Media was born.

New Media is modular, meaning it is made up of many different pieces that can be put together to build something. This modularity, combined with numerical representation, can take humans out of parts of the computing process, making automation possible. Web sites are automatically generated "on the fly" using templates and scripts to assemble and format information from databases. High-level automation, or artificial intelligence, has made it possible for users to interact with programs such as computer games.
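Here is a minimal Python sketch of that "on the fly" assembly: a template plus a small list standing in for database rows. The template strings and data are made up for illustration.

```python
# Generate an HTML page by filling templates with data, as a site script might.
articles = [
    {"title": "New Media and Automation", "author": "M. Simon"},
    {"title": "Transcoding Explained", "author": "M. Simon"},
]

PAGE_TEMPLATE = "<html><body><h1>Articles</h1>{items}</body></html>"
ITEM_TEMPLATE = "<p><b>{title}</b> by {author}</p>"

items = "".join(ITEM_TEMPLATE.format(**a) for a in articles)
print(PAGE_TEMPLATE.format(items=items))
```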

In Lev Manovich's book The Language of New Media he said, "The Internet, which can be thought of as one huge distributed media database, also crystallized the basic condition of the new information society: over-abundance of information of all kind." So much media was created that it became hard to sift through it all to find what you wanted. Organizing, categorizing, and tagging are an important part of New Media for searchability.

Other important parts of New Media are variability, the ability to reproduce media in different versions, and hypermedia, the linking of media together. Transcoding, translating something into another format, is another part of New Media.

New media has revolutionized our culture. Old media such as still photography and printing presses were run by humans. Now we can have computers do many tasks for us without even thinking about them. Our culture is changing and becoming computerized by all the New Media available to us.

Wednesday, February 3, 2010

Beginnings

Everything has a beginning and so for the beginning of this blog I am touching upon the history of the beginning of the electronic age and the World Wide Web.

The desperation of war has brought about many advances in our society. With hundreds of men literally dying before their eyes, doctors improvised and came up with new ways of saving lives. With the constant pressure to trump the other side, scientists have worked beyond their current knowledge to do the impossible. Weapons were created, and not just guns and weapons of destruction. Communication, along with its speed and secrecy, was advanced. By the end of World War II we had the first calculating machines, photography, and typewriters, and the computer age was on the verge of its beginning.

At the end of World War II Dr. Vannevar Bush published an article titled As We May Think. It can be found on the site of the W3C, the World Wide Web Consortium, an international community where member organizations, a full-time staff, and the public work together to develop web standards and pursue their mission of leading the web to its full potential. When the article was published, Bush was "Director of the Office of Scientific Research and Development, coordinating the activities of some six thousand leading American scientists in the application of science to warfare."

Bush touched on many of the amazing advancements of his time in his article, but his overall message to the scientists of his day was to set aside the scientific work on war that they had been concentrating on and come together to work toward the future filled with possibilities that their new advancements suggested.

He proposed a massive record that people could use not only to make their lives easier but also to advance the abilities of professionals, doctors, scientists, and more. He suggested 'dry' photography that could be stored and used, much like the digital photography we now have. He pictured a world where a person could sit down at their desk and access the great record, using several items at a time (what we would now call multi-tasking). He called for a "new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record." Bush was eerily accurate in many of his predictions.

It's ironic that the electronic age, whose advancements have saved countless lives, got its start in war. The progress hasn't found its peak yet, though. The World Wide Web is still digging for its roots and being defined as we speak. Groups such as the W3C are determined to see it reach its full potential. People now have the ability to network and come together across the world to work on common goals, or even just to chat and swap news.

It's just the beginning.