1.) I won't say I will never produce illegal copies of software; it's common these days. In fact, you can download a lot of them using torrents, so why bother asking your friends when you can download them for free?
2.) It can't be helped; there are a lot of hackers. Companies should invest more in protecting their precious information so that they can avoid losing it.
3.) They can get lazy and fail to devote enough time to their work.
Thursday, September 18, 2008
Sunday, August 10, 2008
Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users. For example, in our prototype search engine one of the top results for cellular phone is "The Effect of Cellular Phone Use Upon Driver Attention", a study which explains in great detail the distractions and risk associated with conversing on a cell phone while driving. This search result came up first because of its high importance as judged by the PageRank algorithm, an approximation of citation importance on the web. It is clear that a search engine which was taking money for showing cellular phone ads would have difficulty justifying the page that our system returned to its paying advertisers. For this type of reason and historical experience with other media, we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.
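To make "citation importance" a little more concrete, here is a rough, self-contained sketch of PageRank-style power iteration on a tiny hand-made link graph. The graph, damping factor, and iteration count are all illustrative; this is not the paper's actual implementation.

```python
# A rough sketch of PageRank-style power iteration on a made-up link graph.
links = {                      # page -> pages it links to (illustrative data)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

damping = 0.85
rank = {page: 1.0 / len(links) for page in links}

for _ in range(50):                       # iterate until the ranks settle
    new_rank = {page: (1 - damping) / len(links) for page in links}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share     # a link passes on part of its rank
    rank = new_rank

# Pages linked to by many (or by important) pages end up with higher rank.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```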
Even as Google keeps growing, competitors are taking aim at the Internet's top search service.
America Online recently signed a contract for Google to provide search results to its members, starting this summer — which could add 34 million to the legions worldwide who already make Google.com their first stop when searching for information among the Web's billions of pages or looking for bulletin-board posts on specific topics among Usenet's thousands of newsgroups.
But a recent wave of competitors has surfaced to try to provide alternatives to one of the Web's biggest success stories.
"There's lots of room for others," insists Paul Gardi of competitor Ask.com, which owns Ask Jeeves and Direct Hit and recently launched Teoma. "You may love Coca-Cola, but you wouldn't want to drink it at every meal."
Google, founded in 1998, has come to redefine — and to dominate — searching for data on the Net. Its results are widely considered superior to, and more relevant than, those from older competitors. Google usage has grown 54% over the past six months. It's No. 6 on Jupiter Media Metrix's monthly rankings, reaching 33 million users in March — more than other top sites such as eBay and Amazon.
A new study from Onestat.com finds nearly half of all searches globally go through Google; second-place Yahoo (which actually licenses Google to provide its Web searches) draws about one in five.
Google's results are based in part on popularity: Those sites that are linked to most often from other Web pages, and those that get the most hits, tend to rise to the top of the search rankings. New competitors build upon and refine that concept:
Teoma (Gaelic for "expert") also offers input from specialized sites created by experts. A search for "June weddings" on Teoma displayed results for planner associations and advice sites with FAQs on the ins and outs of the ceremony.
Wisenut, owned by Look Smart (which also supplies search technology to MSN), helps users with multiple ways to reword their searches to come up with better results. A query on "roller-blading" brings up links to sites for equipment makers, articles and books on the sport, and also suggests refinements such as "skate blading," "inline skating" and "roller blades."
Alltheweb.com, owned by the Norwegian firm Fast Search and Transfer, tries to out-Google Google by offering specific search engines for MP3s, video and photographs on its little-advertised "technology showcase" site. Like Wisenut, it offers category refinements to make the search easier.
"The last thing anyone wants is for the only search player to be Google," says Danny Sullivan, the editor of Search Engine Watch, an online newsletter. "Having competition makes them better."
Google's response to the new entries: Bring them on. "We're glad to have companies focusing on Web search, and we hope it will raise general awareness about the value of search engines," says Google's Craig Silverstein. "When we started, not all companies were focused on that."
Google recently unveiled two new services. For really tough questions, Google now offers answers from experts for a fee, starting at $5. It also added a free news search tool to its engine, allowing users to go beyond Web pages and discussion groups with real-time indexing of newspaper and magazine articles.
Yahoo offers a similar service, but only as part of its directory, which points to a specific group of newspapers. A recent search for Bill Clinton on news.yahoo.com retrieved topical articles from the Associated Press, Reuters, The New York Times, USA TODAY and the Arizona Republic. The reach at news.google.com went much wider, with papers from Iran, the United Kingdom, Singapore, North Carolina, Seattle and Miami, plus CNN, Reuters and E!
"We are always considering new services," Silverstein says. "Our goal is to find all the world's information and get it to people. A news service is one part of that."
While Google has attracted millions of users, the search engine reaping the most profit these days is Overture (formerly Goto.com), one of the few publicly traded dot-coms to post positive financial results. Last year it made $20 million on $288 million in revenues.
Most users probably haven't heard of Overture, but they come across it all the time, thanks to the behind-the-scenes partnerships common in the search industry.
Overture offers advertisers paid placement on Web sites. While you won't see a "Brought to you by Overture" blurb, its results are seen at the top of searches performed at America Online (where it will soon be replaced by Google), MSN, Yahoo and AltaVista, among others. For example, a search for "Mother's Day" on Yahoo puts "sponsor matches" first, then offers to buy flowers and gifts, followed by editorial links about the history of the day and famous moms.
"We're like search meets the Yellow Pages," says Overture's Harry Chandler. "If someone is looking for information, the fact that an advertiser is paying is irrelevant."
Of the top search firms, the only one that doesn't have a deal with Overture is Google, which has its own paid listings program. Overture has filed a patent infringement suit, claiming Google is stealing its pay-for-placement technology. Google denies it; Overture won't discuss the suit.
Besides text-based ads and sponsored listings, Google makes money by licensing technology to other sites (searches on washingtonpost.com are provided by Google, for example). Its contract with Yahoo is up next month, and Yahoo hasn't said how it plans to proceed.
Google rivals are already lining up to challenge Google's Yahoo deal. Fast is already making its pitch: "You better believe it," says Fast's Stephen Baker.
Fast powers Lycos, but doesn't advertise its own alltheweb.com, despite its growing reputation, because "that would put us in conflict with our customers," Baker says. "If I'm Yahoo, I want people searching at my site, not at Google."
Sullivan calls Fast technology the up-and-comer, "No. 2 to Google, in terms of relevancy (of results)." But he doesn't believe Google can be toppled soon. "The real fight will be who can be No. 2. There's a lot of duking it out to be had."
Google's advantage is more a matter of company philosophy than of technology. Thus far, they have simply remembered what works on the Internet. Some history:
Back in the days of yore (let's see, that was a whole eight years ago, maybe), search engines weren't very good. They were better than nothing, but trying to find things took quite a bit of skill or a large sense of humor. You could put in a query and almost anything would come up, and that anything would number in the tens of thousands. A researcher on the web could use plus and minus signs, quotes and AND/OR logic to wade through the results, getting slightly better listings for his query, but for the average user it was still a mess.
META keyword tags were then used, because the thought was "You know what your page is about, so you tell us," and that worked a little better for a while, until the marketing people got hold of the idea. Any marketing person will tell you that it is better to have your company name show up as much as possible than to show up only when it's relevant. "Keep your product in view" is the thought there, so webmasters started putting all kinds of keywords in their META tags and descriptions, based not on the contents of the page but on the popularity of the keywords used for searches.
Tuesday, July 29, 2008
My security
Database security denotes the system, processes, and procedures that protect a database from unintended activity.
Database security is usually enforced through access control, auditing, and encryption.
Access control restricts who can connect to the database and what they can do once connected.
Auditing logs what action or change has been performed, when, and by whom.
Encryption: Since security has become a major issue in recent years, many commercial database vendors provide built-in encryption mechanisms. Data is encoded natively in the tables and deciphered "on the fly" when a query comes in. Connections can also be secured and, if required, encrypted using SSL, with algorithms such as DSA and MD5 used for authentication and integrity checks rather than for encryption itself.
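As a minimal sketch of the first two ideas (access control and auditing) at the application level, the example below uses only Python's standard library. The table names, roles, and users are made up for illustration; real products enforce these controls inside the database engine itself.

```python
# Illustrative sketch of application-level access control and auditing,
# using sqlite3 and hashlib from the standard library.
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users   (name TEXT PRIMARY KEY, pw_hash TEXT, role TEXT);
    CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT);
    CREATE TABLE audit   (ts TEXT DEFAULT CURRENT_TIMESTAMP,
                          actor TEXT, action TEXT);
""")

def add_user(name, password, role):
    pw_hash = hashlib.sha256(password.encode()).hexdigest()
    conn.execute("INSERT INTO users VALUES (?, ?, ?)", (name, pw_hash, role))

def connect_as(name, password):
    # Access control: only known users with a matching password hash get in.
    row = conn.execute("SELECT pw_hash, role FROM users WHERE name = ?",
                       (name,)).fetchone()
    if row is None or row[0] != hashlib.sha256(password.encode()).hexdigest():
        raise PermissionError("access denied")
    return row[1]  # the caller's role

def insert_record(actor, role, payload):
    if role != "writer":
        raise PermissionError("read-only role")          # access control
    conn.execute("INSERT INTO records (payload) VALUES (?)", (payload,))
    conn.execute("INSERT INTO audit (actor, action) VALUES (?, ?)",
                 (actor, "insert record"))               # auditing

add_user("alice", "s3cret", "writer")
role = connect_as("alice", "s3cret")
insert_record("alice", role, "hello")
print(conn.execute("SELECT actor, action FROM audit").fetchall())
```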
Types of Database
A lot of the sites that we visit on the web today are generated by a script of some description, and a great deal of them will use a database in one form or another. Like it or loathe it, building pages dynamically from databases is a technique that is here to stay.
There are two main types of database: flat-file and relational. Which one is best for a particular job will depend on factors such as the type and amount of data to be processed, not to mention how frequently it will be used.
Flat-File
The flat-file style of database is ideal for small amounts of data that need to be human-readable or edited by hand. Essentially, all it consists of is a set of strings in one or more files that can be parsed to get the information they store; this is great for storing simple lists and data values, but can get complicated when you try to replicate more complex data structures. That's not to say that it is impossible to store complex data in a flat-file database, just that doing so can be more costly in time and processing power compared to a relational database. The methods used for storing the more complex data types are also likely to render the file unreadable and uneditable to anyone looking after the database.
The typical flat-file database is split up using a common delimiter. If the data is simple enough, this could be a comma, but more complex strings are usually split up using tabs, new lines or a combination of characters not likely to be found in the record itself.
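For concreteness, here is a minimal sketch of a tab-delimited flat file of books, written in Python (the post itself names no language). The file name and records are made up for illustration.

```python
# Write and re-read a tab-delimited flat-file "table" of books.
import csv

with open("books.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["The Mythical Man-Month", "Fred Brooks", "0201835959"])
    writer.writerow(["Code Complete", "Steve McConnell", "0735619670"])

# Reading the data back means re-parsing the whole file and filtering in code.
with open("books.txt", newline="") as f:
    for title, author, isbn in csv.reader(f, delimiter="\t"):
        if author == "Fred Brooks":
            print(title, isbn)
```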
One of the main problems with using flat files for even a semi-active database is the fact that it is very prone to corruption. There is no inherent locking mechanism that detects when a file is being used or modified, and so this has to be done on the script level. Even if care is taken to lock and unlock the file on each access, a busy script can cause a "race condition" and it is possible for a file to be wiped clean by two or more processes that are fighting for the lock; the timing of your file locks will become more and more important as a site gets busy.
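Script-level locking along these lines is one way to reduce that risk. The sketch below uses POSIX flock, so it is illustrative only: it would need a different mechanism on Windows, and it still depends on every script that touches the file cooperating.

```python
# Illustrative script-level file locking for a flat-file database (POSIX only).
import fcntl

def append_record(path, line):
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)      # block until we hold the exclusive lock
        try:
            f.write(line + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)  # always release, even on error

append_record("books.txt", "Refactoring\tMartin Fowler\t0201485672")
```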
Database Management (DBM)
The database management (DBM) layer allows script programmers to store information as pairs of strings: a key, which is used to find the associated value. Essentially, a DBM adds more functionality and better sorting during storage to the binary flat files that it uses. There are several versions of DBM available, but the most popular is the Berkeley Database Manager, also known as the Berkeley DB.
The Berkeley DB is an improvement over normal flat files, as it provides a way for programmers to use the database without having to worry about how the data is stored or how to retrieve the values. Retrieval of data using the Berkeley DB is often much faster than from a flat file, with the time savings coming from storing data in a way that speeds up locating a specific key-value pair.
Creating, editing and deleting data when using the Berkeley DB is actually quite simple; once the database has been tied to the script, you just use and manipulate the variables as normal. The problem of file locking that plagues flat-file databases is still present when using DBM, so you should still take care when planning scripts that use it.
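A minimal sketch of the key-value idea, using Python's built-in dbm module (which wraps a Berkeley-DB-style library on many systems). The file name and keys are illustrative.

```python
# Store and look up key-value pairs through the standard-library dbm module.
import dbm

with dbm.open("authors.db", "c") as db:     # "c" creates the file if needed
    db["0201835959"] = "Fred Brooks"        # keys and values are stored as bytes
    db["0735619670"] = "Steve McConnell"

with dbm.open("authors.db", "r") as db:
    print(db[b"0201835959"].decode())       # fast lookup by key
```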
Relational
Relational databases, such as MySQL, Microsoft SQL Server and Oracle, have a much more logical structure in the way they store data. Tables can be used to represent real-world objects, with each field acting like an attribute. For example, a table called books could have the columns title, author and ISBN, which describe the details of each book, where each row in the table is a new book.
The "relation" comes from the fact that the tables can be linked to each other, for example the author of a book could be cross-referenced with the authors table (assuming there was one) to provide more information about the author. These kind of relations can be quite complex in nature, and would be hard to replicate in the standard flat-file format.
One major advantage of the relational model is that, if a database is designed efficiently, there should be no duplication of any data; helping to maintain database integrity. This can also represent a huge saving in file size, which is important when dealing with large volumes of data. Having said that, joining large tables to each other to get the data required for a query can be quite heavy on the processor; so in some cases, particularly when data is read only, it can be beneficial to have some duplicate data in a relational database.
Relational databases also have functions "built in" that help them to retrieve, sort and edit the data in many different ways. These functions save script designers from having to worry about filtering out the results that they get, and so can go quite some way to speeding up the development and production of web applications.
Database Comparisons
In most cases, you would want your database to support various types of relations; such databases, particularly if designed correctly, can dramatically improve the speed of data retrieval as well as being easier to maintain. Ideally, you will want to avoid the replication of data within a database to keep a high level of integrity, otherwise changes to one field will have to be made manually to those that are related.
While several flat-files can be combined in such a way as to be able to emulate some of the behaviours of a relational database, it can prove to be slower in practice. A single connection to a relational database can access all the tables within that database; whereas a flat file implementation of the same data would result in a new file open operation for each table.
All the sorting for flat-file databases needs to be done at the script level, whereas relational databases have functions that can sort and filter the data so that the results sent to the script are pretty much what you need to work with. It is often quicker to sort the results before they are returned to the script than to have them sorted by the script; few scripting languages are designed to filter data effectively, so the more functions a database supports, the less work a script has to do.
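The contrast looks roughly like the sketch below: the database sorts with ORDER BY, while the flat-file version has to parse and sort every record in the script. All names and data are illustrative.

```python
# Database-side sorting (ORDER BY) versus script-side sorting of flat records.
import sqlite3

records = [("The Mythical Man-Month", "Fred Brooks"),
           ("Code Complete", "Steve McConnell"),
           ("Refactoring", "Martin Fowler")]

# Relational: the database filters and sorts; the script just reads the rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, author TEXT)")
conn.executemany("INSERT INTO books VALUES (?, ?)", records)
db_sorted = [row[0] for row in
             conn.execute("SELECT title FROM books ORDER BY title")]

# Flat file: the script must parse every record, then sort it itself.
lines = ["\t".join(r) for r in records]          # what a flat file would hold
file_sorted = sorted(line.split("\t")[0] for line in lines)

print(db_sorted == file_sorted)                  # True: same result, different work
```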
If you are only working with a small amount of data that is rarely updated, then a full-blown relational database solution can be considered overkill. Flat-file databases are not as scalable as the relational model, so if you are looking for a suitable database for more frequent and heavy use, then a relational database is probably more suitable.
History of database
The earliest known use of the term data base was in November 1963, when the System Development Corporation sponsored a symposium under the title Development and Management of a Computer-centered Data Base[1]. Database as a single word became common in Europe in the early 1970s and by the end of the decade it was being used in major American newspapers. (The abbreviation DB, however, survives.)
The first database management systems were developed in the 1960s. A pioneer in the field was Charles Bachman. Bachman's early papers show that his aim was to make more effective use of the new direct access storage devices becoming available: until then, data processing had been based on punched cards and magnetic tape, so that serial processing was the dominant activity. Two key data models arose at this time: CODASYL developed the network model based on Bachman's ideas, and (apparently independently) the hierarchical model was used in a system developed by North American Rockwell and later adopted by IBM as the cornerstone of their IMS product. While IMS along with the CODASYL IDMS were the big, high-visibility databases developed in the 1960s, several others were also born in that decade, some of which have a significant installed base today. Two worthy of mention are the PICK and MUMPS databases, with the former developed originally as an operating system with an embedded database and the latter as a programming language and database for the development of healthcare systems.
The relational model was proposed by E. F. Codd in 1970. He criticized existing models for confusing the abstract description of information structure with descriptions of physical access mechanisms. For a long while, however, the relational model remained of academic interest only. While CODASYL products (IDMS) and network model products (IMS) were conceived as practical engineering solutions taking account of the technology as it existed at the time, the relational model took a much more theoretical perspective, arguing (correctly) that hardware and software technology would catch up in time. Among the first implementations were Michael Stonebraker's Ingres at Berkeley, and the System R project at IBM. Both of these were research prototypes, announced during 1976. The first commercial products, Oracle and DB2, did not appear until around 1980. The first successful database product for microcomputers was dBASE for the CP/M and PC-DOS/MS-DOS operating systems.
During the 1980s, research activity focused on distributed database systems and database machines. Another important theoretical idea was the Functional Data Model, but apart from some specialized applications in genetics, molecular biology, and fraud investigation, the world took little notice.
In the 1990s, attention shifted to object-oriented databases. These had some success in fields where it was necessary to handle more complex data than relational systems could easily cope with, such as spatial databases, engineering data (including software repositories), and multimedia data. Some of these ideas were adopted by the relational vendors, who integrated new features into their products as a result. The 1990s also saw the spread of Open Source databases, such as PostgreSQL and MySQL.
In the 2000s, the fashionable area for innovation is the XML database. As with object databases, this has spawned a new collection of start-up companies, but at the same time the key ideas are being integrated into the established relational products. XML databases aim to remove the traditional divide between documents and data, allowing all of an organization's information resources to be held in one place, whether they are highly structured or not.
What is a Database?
A database is a structured collection of records or data. A computer database relies upon software to organize the storage of data. The software models the database structure in what are known as database models. The model in most common use today is the relational model. Other models such as the hierarchical model and the network model use a more explicit representation of relationships (see below for explanation of the various database models).
Database management systems (DBMS) are the software used to organize and maintain the database. These are categorized according to the database model that they support. The model tends to determine the query languages that are available to access the database. A great deal of the internal engineering of a DBMS, however, is independent of the data model, and is concerned with managing factors such as performance, concurrency, integrity, and recovery from hardware failures. In these areas there are large differences between products.
Monday, March 24, 2008
The Jay and the Peacock (Aesop)
A Jay venturing into a yard where Peacocks used to walk, found there a number of feathers which had fallen from the Peacocks when they were moulting. He tied them all to his tail and strutted down towards the Peacocks. When he came near them they soon discovered the cheat, and striding up to him pecked at him and plucked away his borrowed plumes. So the Jay could do no better than go back to the other Jays, who had watched his behaviour from a distance; but they were equally annoyed with him, and told him: "It is not only fine feathers that make fine birds."