Wednesday, August 29, 2007

Web 2.0 Tools – Google Docs

This week I decided to try Google Docs for a collaboration project to write a Computer Security Policy for our church. There are three of us on the committee, and it seemed like a good idea to work on the documents jointly.

The first step was to upload the files to my Google Docs folder. This seemed like a simple enough task. It turns out that there were 11 files for this project, and each file had to be uploaded separately. Not believing that Google would make you do this, I kept trying to select multiple files, but the Google Docs interface would not allow it. So after uploading 12 files (I lost track and ended up uploading one file twice), I was able to copy them into a separate folder. Again, only one file at a time. At this point I was starting to get a little impatient, but I continued. The next task was to share these files. Logically, I thought I could just share the entire folder. Again I was wrong; each file must be shared individually. A pattern was beginning to form. This required that I email an invitation to each member of the group for each file. I think you are starting to get the picture: 11 files meant 11 email invites. After about the fourth invite I decided I had better see how well the collaboration tools actually worked before I spammed everyone’s mailbox.

The next task was to actually edit one of the documents. Since these were boilerplate security policy documents from the SANS website, I needed to do a global find and replace. I quickly learned that this is not a supported feature. So after manually cutting and pasting, I noticed the font type and size were different. After spending 10 minutes trying to get the font right, I decided just to cut and paste into MS Word. Even this operation was less than perfect, resulting in some of the formatting being lost.
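In hindsight, the replace step itself would have been easy to script locally before uploading. Here is a rough Python sketch, assuming the SANS templates were saved as plain .txt files in one folder; the folder name and the "<Company Name>" placeholder are just made-up examples.

import os

# Made-up placeholder used in the downloaded templates and its replacement
OLD_TEXT = "<Company Name>"
NEW_TEXT = "Our Church"
FOLDER = "sans_policies"  # assumed local folder holding the .txt templates

for name in os.listdir(FOLDER):
    if not name.endswith(".txt"):
        continue
    path = os.path.join(FOLDER, name)
    with open(path, "r") as f:
        text = f.read()
    with open(path, "w") as f:
        # Replace every occurrence of the placeholder in this file
        f.write(text.replace(OLD_TEXT, NEW_TEXT))
    print("Updated", name)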

So after spending the better part of an hour, I have to conclude the experiment was less than a success. The concept is great; it is just that the implementation is less than perfect. Google Docs may work well as a central repository, but the features of the word processor are inadequate for productive work. I would have to give Google Docs a C-. One could say, what do you expect for free?

Sunday, August 26, 2007

More on Web 2.0...

There has been much discussion about what Web 2.0 is. After nearly a year and a half there is still substantial disagreement about what Web 2.0 really means. Nearly everyone will agree that Web 2.0 includes social networking sites, Wikipedia, blogs, and RSS feeds. Some will even go so far as to say that many of these services were included in Web 1.0 and that Web 2.0 is nothing but a marketing play.

In an attempt to clarify the meaning of Web 2.0, Tim O’Reilly listed the following comparisons on his website.

Web 1.0 --> Web 2.0
DoubleClick --> Google AdSense
Ofoto --> Flickr
Akamai --> BitTorrent
mp3.com --> Napster
Britannica Online --> Wikipedia
personal websites --> blogging
evite --> upcoming.org and EVDB
domain name speculation --> search engine optimization
page views --> cost per click
screen scraping --> web services
publishing --> participation
content management systems --> wikis
directories (taxonomy) --> tagging ("folksonomy")
stickiness --> syndication

When thinking of the progression from Web 1.0 to 2.0, one of the first examples is that of Netscape and Google. Netscape is thought to be the standard bearer of Web 1.0, inasmuch as its IPO was one of the defining events of that era. The same can be said for Google and Web 2.0. Netscape framed Web 1.0 as part of the old software paradigm, relying heavily on the web browser, which was packaged as a desktop application. Google, on the other hand, has always been a native web application that delivers services rather than packaged or sold software. Google and other Web 2.0 applications often took a radical approach in their design: BitTorrent made every client a server, and Wikipedia was based on the concept that any user could make an entry and that any entry could be edited by any user.

The power of Web 2.0 is the ability to surface useful information for a user from the terabytes of available data. Google does this through its extensive search language, which allows a user to navigate all the data on the Internet and target a specific piece of information residing on some unknown server. This power has both a good side and a dark side that will have to be brought under control, because the same technology can be used to find credit card numbers, personal SSNs, and system administration passwords. A term has even been coined for this activity: Google hacking.
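To illustrate that search language with a harmless example, here is a small Python sketch that builds search URLs from the site:, filetype:, and intitle: operators. These are the same operators that enable Google hacking when pointed at careless targets; the site and search terms below are just made-up examples.

import urllib.parse

# Made-up queries showing the site:, filetype:, and intitle: operators
queries = [
    'site:example.org filetype:pdf "security policy"',  # one file type on one site
    'intitle:"index of" backup',                         # exposed directory listings, a classic pattern
]

for q in queries:
    # Build a search URL that can be pasted into a browser
    print("https://www.google.com/search?" + urllib.parse.urlencode({"q": q}))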

To summarize, some of the core competencies of Web 2.0 companies can be thought of as:
Services with ease of scalability
Services that get richer as more people use them
Services based on collective intelligence

http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html

Mark

Wednesday, August 8, 2007

DEFCON 15 - DAY ONE

This past week I was in Las Vegas attending DEFCON 15. The next few blog posts will cover areas of interest that I saw while I was there. The conference consisted of 5 tracks that ran from 10:00 AM to 9:00 PM. The amount of information provided is tremendous. For those who are not familiar with DEFCON, it is billed as the “Largest Underground Hacking Convention” and is held at the Riviera Hotel in Las Vegas. This year there were over 7000 hackers and security professionals attending the show. It is unique in that there is no pre-registration; in fact, there is no registration at all. You show up in the hotel lobby with $100 cash and exchange the money for a badge. No questions, no lists, and no record of who attended. Due to the time I have, I will only be able to discuss a few of the presentations that I attended.

A new contest has been added to the venue... spot the undercover press official (a takeoff on Spot the Fed). The story goes that a female reporter was trying to interview unsuspecting young hackers for a pending story on hackers. She was confronted and asked to wear press credentials. After she refused, the question was posed to the attendees: what should we do with the undercover press official? After discounting the cries for lynching, it was decided to ask her to leave the conference. The last anyone saw of her, she was running for the door (see the YouTube videos below).

http://www.youtube.com/watch?v=2kbar4ahK7M

http://www.youtube.com/watch?v=3n2cBSBIAP0

For a more serious comment on day one, I will talk about Bruce Schneier’s Q&A session. No speech from Bruce would be complete without his ragging on the TSA (not that they don’t deserve it). The point of his TSA remarks was that it is possible to fly on commercial airlines without showing a government ID, regardless of what you have been told. To prove the point, he traveled to DEFCON in Las Vegas without any identification.

He was asked about cryptographic algorithms and whether he considered SHA-1 still secure enough to use. What he said was that even weak algorithms are hard, if not impossible, to break; the problem is with the passwords, or the use of a limited key space. He went on to say that the FBI doesn’t try to break encryption; they just install a keylogger, or send the hard drive to AccessData Corp and have them search for the password in some file or slack area of the drive. He believes that SHA-1 still has about 5 years of use left before it must be replaced, and he thought there was plenty of time to develop the next generation of algorithms.
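His point that the algorithm is rarely the weak link is easy to illustrate. Here is a rough Python sketch of a tiny dictionary attack against a SHA-1 hash of a weak password; the password and word list are made-up examples, and real attackers use lists with millions of entries.

import hashlib

# Made-up stored hash of a user's password (here, the weak password "letmein")
stored_hash = hashlib.sha1(b"letmein").hexdigest()

# A tiny made-up word list; real dictionaries hold millions of entries
word_list = ["password", "123456", "qwerty", "letmein", "dragon"]

for guess in word_list:
    # Hash each candidate and compare it to the stored hash
    if hashlib.sha1(guess.encode()).hexdigest() == stored_hash:
        print("Password recovered:", guess)
        break
else:
    print("Not found in word list")

Note that SHA-1 itself is never broken here; the small key space of a human-chosen password is what gives it away.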

Next he was asked about data privacy. His thought was that it is really a little too late to be worrying about it. We all have so much sensitive data in the public domain that we have lost control of it. The other problem is that we don’t own the data about us. When we go to Amazon.com, they collect information about what we buy, how we paid for it, and where we live. They are able to use this data, share it with their partners, or sell it. Until we get control of the data that is collected about us, we will not be able to protect our data privacy. His belief is that society as a whole will have to decide how important this data is and force controls to be put into place to protect it. With Web 2.0 and social websites this problem will only become worse.

More breaking news on simple hacks for office doors…

Hacking an office door reader

http://www.youtube.com/watch?v=z7oPn7V5mHg

12-year-old girl bumps a lock at DEFCON 15

http://www.youtube.com/watch?v=D1LH7lrftKA


Tuesday, August 7, 2007

Predicting a Digital Pearl Harbor

This spring we saw the first example of what an information warfare attack against a country might look like. Estonia came under a loosely organized attack from Russian websites on April 27th. This action appeared to be in response to the removal of a Russian monument. The attacks appeared to be limited to Internet infrastructure targets and simple in design. Most were denial of service (DoS) attacks, with postings on Russian sites explaining how to craft DoS scripts and which IP addresses to target. The campaign played on the nationalistic spirit of the average Russian, framing the attacks on Estonia under the banner of attacking “Estonian Fascists”. I am not attempting to enter into a political discussion between Estonia and Russia, but to list some of the facts that have been reported about the incident. Since Estonia was not attacked directly by another government, this incident is being referred to as a cyber riot.

The incident that we saw in Estonia demonstrated the effectiveness of Internet attacks against another country. This was one of the first documented cases of an attack at this level. The trends show that a full-scale Digital Pearl Harbor attack is not only possible, but a question of when. What was lacking in this attack was the sophistication required to attack the integrity of systems and make changes; this attack was only able to deny access to key systems.

A more recent large-scale attack was launched against sites in Italy. The attacks were based on a Russian-made exploit kit called MPack, which provides the attackers with a management console showing which exploits are effective and which domains are susceptible. While this attack was believed to be gang related, it does demonstrate the level of sophistication that has been developed. If this type of tool were deployed across a botnet farm, the results could be devastating. “Operation Bot Roast,” recently conducted by the FBI, uncovered a botnet containing over a million computers. The Italian attack was based on only about 6000 compromised computers.

Although the ability to launch a denial of service attack against another country has been demonstrated, it will be a number of years before a sophisticated attack can be coordinated against critical infrastructure sites with enough impact to cripple a country, for example by targeting Wall Street and corrupting the trading platforms, or by targeting the control systems of nuclear power plants.

Based on past studies, I predict that within the next 10 to 15 years we will see a full-scale, sophisticated attack against another country as an alternative to a conventional military action. The USAF has recently added cyberspace to its mission and is charged with developing surgical attack tools along with protecting US national Internet assets.


Links for additional information:

http://www.nytimes.com/2007/05/29/technology/29estonia.html?ex=1338091200&en=80d1358e48b6740b&ei=5088&partner=rssnyt&emc=rss

http://www.guardian.co.uk/technology/2003/feb/20/security.onlinesupplement

http://www.dickdestiny.com/blog/2007/06/electronic-pearl-harbor-for-day-idiot.html

http://www.computerworlduk.com/management/security/cybercrime/news/index.cfm?newsid=3561

http://www.computerworlduk.com/management/security/cybercrime/news/index.cfm?newsid=3514

http://www.msisac.org/webcast/05_05/info/05_05presentation.swf

Wednesday, August 1, 2007

Failed Prediction: The Flying Car

One area that I have been asked to comment on is why predictions fail. I recently came across an interesting page on Wikipedia that discussed Ford’s flying car. I thought this was an interesting example of where the technology was there, but other outside forces prevented the prediction from becoming a reality.

In the 1950s the Ford Motor Company performed a feasibility study and concluded that both the technology and the market supported a flying car. They saw interest from the public service sector, such as emergency response, police, and military, as well as from luxury transportation. The concept may have been fine, but the supporting infrastructure wasn’t there. When Ford approached the FAA, it was determined that the means for air traffic control were inadequate for the number of flying cars that Ford was proposing.

In 1937 Waldo Waterman had produced the Waterman Aerobile, powered by a Studebaker engine, which could fly at a speed of 112 MPH and travel on the ground at 56 MPH. The technology existed, but the government wasn’t ready for every home to have a plane in its garage.

What has been developed in this market space is lightweight helicopters for niche markets. What was lost in the development was a low-cost model that could be called the “people’s” Aerobile. It could be argued that the prediction was 70-80 years ahead of its time. Maybe in the next twenty years we will see an affordable flying car that is capable of both ground and air transportation. Maybe when we master gravity we will see a host of not just flying cars, but flying personal transports.

http://en.wikipedia.org/wiki/Flying_car