Joon2.0

Monday, May 29, 2006




MS, Google, and Yahoo in a Three-Way Fight to the Death


[The Hankyoreh] The U.S. internet industry has entered a "Warring States period" of mergers, alliances, competition, and containment. Software giant Microsoft (MS), fearsome newcomer Google, and Yahoo, the established internet power, are locked in a fight to the death over who truly rules the online world.
Mergers, alliances, competition, and checks abound in this "mating season." Who will be the last one standing? All eyes are on whether the Google whirlwind can keep blowing.
Mating season: secure your allies. The battle that had been unfolding below the surface came into the open on the 26th with two major deals. That day Google, the No. 1 search company, abruptly announced an "alliance" with Dell, the No. 1 PC maker. Google will install its toolbar, which puts internet features on the desktop to draw users online, on the roughly 20 million PCs Dell produces each year, and will preload its email and hard-drive search software as default programs. The strategy is to make clicking Internet Explorer unnecessary and pull users toward Google from the first moments they use a PC.
With complaints circulating that MS's forthcoming Internet Explorer 7 will make it hard to reach search sites other than MSN and thus hurt competitors, Google has effectively struck first. The New York Times reported talk that Google agreed to pay Dell $1 billion over three years for the deal.
Yahoo, which has been squeezed by Google's rise, announced a tie-up the same day with eBay, the No. 1 online auction and shopping company. Yahoo will supply eBay with graphical ads and search content, and eBay will let Yahoo use its electronic payment service. By sharing content and linking services, the two hope to cover each other's weaknesses and generate synergy.
Earlier, Google beat out MS for a partnership with America Online (AOL), taking a $1 billion equity stake; MS, for its part, won the contest to supply Amazon.com's search engine. Meanwhile, rumors that MS will buy Yahoo's search business and that Yahoo and eBay will merge refuse to die down despite denials from the parties involved.
Who wins the three-way fight? Behind the wave of shifting alliances among internet companies that once kept to their own lanes lies the three-way struggle for supremacy among Google, Yahoo, and MS. Google in particular, with its sights on a search market share above 50%, has grown strong enough to shake the standing of MS, the reigning king of the IT industry. The Financial Times observed that "the rise of Google is behind all of the (merger and acquisition) talk."
The no-holds-barred fight has intensified as MS, which grew up on software, bets heavily on internet businesses while Google encroaches on MS's turf by developing and distributing software of its own. Google and MS have also begun competing for a partnership with MySpace, the community site that Rupert Murdoch's News Corp acquired last year; a deal with MySpace and its more than 80 million members is seen as an opportunity to expand search services and advertising. With the search advertising market, which ties ads to search, expected to reach $6.9 billion this year, the race to dominate online and corner the ad money is heating up.
In a memo circulated to employees late last year, MS executives admitted, "We knew search would become very important, but we let Google seize a powerful position." MS accordingly said early this month that it would raise its internet investment for the new fiscal year beginning in July from $1 billion to $1.6 billion. Chairman Bill Gates boasted that new products tied to Windows would beat back the competition in search.
Yet for all the struggling by MS and Yahoo, the Google whirlwind shows no sign of dying down. Figures from market researcher comScore Networks show Google's share of the search market jumping from 36.5% last April to 43.1% a year later, while Yahoo slipped 2.7 percentage points to 28.0% and MSN fell 3.2 points to 12.9%. One intriguing sidelight: several Google executives came from Netscape, the company Internet Explorer drove out of the market.


Against this backdrop, the Google-Dell and Yahoo-eBay alliances have left MS even more anxious. MS recently tried to take a stake in Yahoo's search business, but Yahoo chief executive Terry Semel rebuffed it, saying that selling off part of the search business would make about as much sense as selling one of your arms.
The Wall Street Journal noted that the maturing of the internet user base is another reason these companies are fighting so desperately to expand market share. By Lee Bon-young, ebon@hani.co.kr

Wednesday, May 03, 2006

Semantic Web Ontologies: What Works and What Doesn't

Peter Norvig: (Mr. Norvig is director of search quality at Google.) [There are] four individual challenges. First is a chicken-and-egg problem: How do we build this information, because what's the point of building the tools unless you got the information, and what's the point of putting the information in there unless you have tools. A friend of mine just asked can I send him all the URLs on the web that have dot-RDF, dot-OWL, and a couple other extensions on them; he couldn't find them all. I looked, and it turns out there's only around 200,000 of them. That's about 0.005% of the web. We've got a ways to go. The next problem is competing ontologies. Everybody's got a different way to look at it. You have some tools to address it. We'll see how far that will scale. Then the Cyc problem, which is a problem of background knowledge, and the spam problem. That's something I have to face every day. As you get out of the lab and into the real world, there are people who have a monetary advantage to try to defeat you.

So, the chicken-and-egg problem. That's "What interesting information is in these kind of semantic technologies, and where is the other information?" It turns out most of the interesting information is still in text. What we concentrate on is how do you get it out of text. Here's an example of a little demo called IO Knot. You can type a natural language question, and it pulls out documents from text and pulls out semantic entities. And you see, it's not quite perfect—couldn't quite resolve the spelling problem. But this is all automated, so there's no work in putting this information into the right place.

In general, it seems like semantic technology is good for defining schemas, but then what goes into the schemas. There's a lot of work to get it there. Here's another example. This is a Google News page from last night, and what we've done here is apply clustering technology to put the news stories together in categories, so you see the top story there about Blair, and there're 658 related stories that we've clustered together. Now imagine what it would be like if instead of using our algorithms we relied on the news suppliers to put in all the right metadata and label their stories the way they wanted to. "Is my story a story that's going to be buried on page 20, or is it a top story? I'll put my metadata in. Are the people I'm talking about terrorists or freedom fighters? What's the definition of patriot? What's the definition of marriage?"

Just defining these kinds of ontologies when you're talking about these kinds of political questions rather than about part numbers; this becomes a political statement. People get killed over less than this. These are places where ontologies are not going to work. There's going to be arguments over them. And you've got to fall back on some other kinds of approaches. The best place where ontologies will work is when you have an oligarchy of consumers who can force the providers to play the game. Something like the auto parts industry, where the auto manufacturers can get together and say, "Everybody who wants to sell to us do this." They can do that because there's only a couple of them. In other industries, if there's one major player, then they don't want to play the game because they don't want everybody else to catch up. And if there's too many minor players, then it's hard for them to get together.

Semantic technologies are good for essentially breaking up information into chunks. But essentially you get down to the part that's in between the angle brackets. And one of our founders, Sergey Brin, was quoted as saying, "Putting angle brackets around things is not a technology by itself." The problem is what goes into the angle brackets. You can say, "Well, my database has a person name field, and your database has a first name field and a last name field, and we'll have a concatenation between them to match them up." But it doesn't always work that smoothly. Here's an example of a couple days' worth of queries at Google for which we've spelling-corrected all to one canonical form. It's one of our more popular queries, and there were something like 4,000 different spelling variations over the course of a week. Somebody's got to do that kind of canonicalization. So the problem of understanding content hasn't gone away; it's just been forced down to smaller pieces between angle brackets. So there's a problem of spelling correction; there's a problem of transliteration from another alphabet such as Arabic into a Roman alphabet; there's a problem of abbreviations, HP versus Hewlett Packard versus Hewlett-Packard, and so on. And there's a problem with identical names: Michael Jordan the basketball player, the CEO, and the Berkeley professor.

And now we get to this problem of background knowledge. Cyc project went about trying to define all the knowledge that was in a dictionary, a Dublin Core type of thing, and then found what we need was the stuff that wasn't in the dictionary or encyclopedia.
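(An aside: a toy illustration of the canonicalization problem Norvig has just described, collapsing surface forms like "HP", "Hewlett Packard", and "Hewlett-Packard" onto one key, might look like the sketch below. This is not Google's method, which is statistical and far more sophisticated; the alias table and cleanup rules are invented purely to show why somebody has to do this work before fields can be matched up.)

```python
# Toy canonicalization: fold trivially different surface forms onto one key.
# The alias table and the cleanup rules are made up for this example.
import re
from collections import defaultdict

ALIASES = {"hp": "hewlett packard"}  # hand-made alias table (assumption)

def canonical(name: str) -> str:
    key = re.sub(r"[^a-z0-9 ]+", " ", name.lower())  # drop punctuation
    key = re.sub(r"\s+", " ", key).strip()           # collapse whitespace
    return ALIASES.get(key, key)                     # map known aliases

variants = ["HP", "Hewlett Packard", "Hewlett-Packard", "hewlett  packard"]
groups = defaultdict(list)
for v in variants:
    groups[canonical(v)].append(v)

print(dict(groups))
# {'hewlett packard': ['HP', 'Hewlett Packard', 'Hewlett-Packard', 'hewlett  packard']}
```

Real systems learn these equivalences from query logs and click data rather than from hand-written tables.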
Lenat and Guha said there's this vast storehouse of general knowledge that you rarely talk about, common-sense things like, "Water flows downhill" and "Living things get diseases." I thought we could launch a big project to try to do this kind of thing. Then I decided to simplify a little—just put quote marks around it and type it in. So I typed "water flows downhill" and I got 1,200 hits. [That first hit] says, "lesson plan by Emily, kindergarten teacher." It actually explains why water flows downhill, and it's the kind of thing that you don't find in an encyclopedia. The conclusion here is Lenat was 99.999993% right, because only 1,200 out of those 4.3 billion cases actually talked about water flowing downhill. But that's enough, and you can go on from there. You can use the web to do voting, so you say "water flows uphill" and that only happens 275 times, so the downhill wins, 1,200 to 275.

Essentially what we're doing here is using the power of masses of untrained people who you aren't paying to do all your work for you, as opposed to trying to get trained people to use a well-defined formalism and write text in that formalism; let's just use the stuff that's already out there. I'm all for this idea of harvesting this "unskilled labor" and trying to put it to use using statistical techniques over masses of large data and filtering through that yourself, rather than trying to closely define it on your own. The last issue is the spam issue. When you're in the lab and you're defining your ontology, everything looks nice and neat. But then you unleash it on the world, and you find out how devious some people are. This is an example; it looks like two pages here. This is actually one page. On the left is the page as Googlebot sees it, and on the right is a page as any other user agent sees it. This website—when it sees Googlebot, it serves up the page that it thinks will most convince us to match against it, and then when a regular user comes, it shows the page that it wants to show.

What this indicates is, one, we've got a lot of work to do to deal with this kind of thing, but also you can't trust the metadata. You can't trust what people are going to say. In general, search engines have turned away from metadata, and they try to home in more on what's exactly perceivable to the user. For the most part we throw away the meta tags, unless there's a good reason to believe them, because they tend to be more deceptive than they are helpful. And the more there's a marketplace in which people can make money off of this deception, the more it's going to happen. Humans are very good at detecting this kind of spam, and machines aren't necessarily that good. So if more of the information flows between machines, this is something you're going to have to look out for more and more.
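(Another aside: the cloaking trick Norvig describes, serving one page to Googlebot and another to everyone else, can at least be probed naively by fetching the same URL under two User-Agent strings and comparing what comes back. The sketch below only illustrates the idea; real cloakers key on crawler IP ranges, and legitimate pages vary between requests, so an actual detector is far more involved. The URL is a placeholder.)

```python
# Naive cloaking probe: does the server return different content to a
# Googlebot-like User-Agent than to a browser-like one? Illustrative only.
import hashlib
import urllib.request

def fetch(url: str, user_agent: str) -> bytes:
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

def looks_cloaked(url: str) -> bool:
    as_bot = fetch(url, "Googlebot/2.1 (+http://www.google.com/bot.html)")
    as_user = fetch(url, "Mozilla/5.0")
    # Hashing whole pages is crude: honest pages also differ per request
    # (ads, timestamps), so a real check would compare extracted text.
    return hashlib.sha1(as_bot).hexdigest() != hashlib.sha1(as_user).hexdigest()

if __name__ == "__main__":
    print(looks_cloaked("http://example.com/"))  # placeholder URL
```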

How Google beat Amazon and Ebay to the Semantic Web

Friday, July 26, 2002
August 2009: How Google beat Amazon and Ebay to the Semantic Web
By Paul Ford
A work of fiction. A Semantic Web scenario. A short feature from a business magazine published in 2009.
Please note that this story was written in 2002.

It's hard to believe Google - which is now the world's largest single online marketplace - came on the scene only a little more than 8 years ago, back in the days when Amazon and Ebay reigned supreme. So how did Google become the world's single largest marketplace?
Well, the short answer is “the Semantic Web” (whatever that is - more in a moment). While Amazon and Ebay continue to have average quarterly profits of $1 billion and $1.8 billion, respectively, and are successes by any measure, the $17 billion per annum Google Marketplace is clearly the most impressive success story of what used to be called, pre-crash, “The New Economy.”
Amazon and Ebay both worked as virtual marketplaces: they outsourced as much inventory as possible (in Ebay's case, of course, that was all the inventory, but Amazon also kept as little stock on hand as it could). Then, through a variety of methods, each brought together buyers and sellers, taking a cut of every transaction.
For Amazon, that meant selling new items, or allowing thousands of users to sell them used. For Ebay, it meant bringing together auctioneers and auction buyers. Once you got everything started, this approach was extremely profitable. It was fast. It was managed by phone calls, emails, and database applications. It worked.
Enter Google. By 2002, it was the search engine, and its ad sales were picking up. At the same time, the concept of the “Semantic Web,” which had been around since 1998 or so, was gaining a little traction, and the attention of an increasing circle of people.
So what's the Semantic Web? At its heart, it's just a way to describe things in a way that a computer can “understand.” Of course, what's going on is not understanding, but logic, like you learn in high school:
If A is a friend of B, then B is a friend of A.
Jim has a friend named Paul.
Therefore, Paul has a friend named Jim.
Using a markup language called RDF (an acronym that's here to stay, so you might as well learn it - it stands for Resource Description Framework), you could put logical statements like these on the Internet, "spiders" could collect them, and the statements could be searched, analyzed, and processed. What makes this different from regular search is that the statements can be combined. So if I find a statement on Jim's web site that says "Jim is a friend of Paul" and someone does a search for Paul's friends, even if Paul's web site doesn't have a mention of Jim on it, we know Jim considers himself a friend of Paul.
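A minimal sketch of that idea, in plain Python rather than a real RDF toolkit, is below. The names and the "isFriendOf" predicate are invented; the point is only that statements harvested from different sites are just triples, and a rule like "if A is a friend of B, then B is a friend of A" lets a search answer a question neither site states directly.

```python
# Statements collected from two different sites, as (subject, predicate, object).
from_jims_site = [("Jim", "isFriendOf", "Paul")]
from_other_site = [("Paul", "livesIn", "Brooklyn")]

triples = set(from_jims_site + from_other_site)

# Apply the symmetry rule for the friendship predicate.
for s, p, o in list(triples):
    if p == "isFriendOf":
        triples.add((o, p, s))

def friends_of(person):
    return sorted(o for s, p, o in triples if p == "isFriendOf" and s == person)

print(friends_of("Paul"))  # ['Jim'], even though Paul's site never said so
```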
Other things we might know for sure? That Car Seller A is selling Miatas for 10% less than Car Seller B. That Jan Hammer played keyboards on the Mahavishnu Orchestra's albums in the 1970s. That dogs have paws. That your specific model of computer requires a new motherboard and a faster bus before it can be upgraded to a Pentium 18. The Semantic Web isn't about pages and links, it's about relationships between things - whether one thing is a part of another, or how much a thing costs, or when it happened.
The Semweb was originally supposed to give the web the “smarts” it lacked - and much of the early work on it was in things like calendaring and scheduling, and in expressing relationships between people. By late 2003, when Google began to seriously experiment with the Semweb (after two years of experiments at their research labs), it was still a slow-growing technology that almost no one understood and very few people used, except for academics with backgrounds in logic, computer science, or artificial intelligence. The learning curve was as steep as a cliff, and there wasn't a great incentive for new coders to climb it and survey the world from their new vantage.
The Semweb, it was promised, would make it much easier to schedule a dentist's appointment, update your computer, check the train schedule, and coordinate shipments of car parts. It would make searching for things easier. All great stuff, stuff to make millions of dollars from, perhaps. But not exactly sexy to the people who write the checks, especially after they'd been burnt 95 times over by the dot-com bust. All they saw was the web - the same web that had lined a few pockets and emptied a few million - with the word "semantic" in front of it.
. . . . .
Semantics vs. Syntax, Fight at 9
The semantics of something is the meaning of it. Nebulous stuff, but in the world of AI, the goal has long been getting semantics out of syntax. See, the trillion dollar question is, when you have a whole lot of stuff arranged syntactically, in a given structure that the computer can chew up, how do you then get meaning out of it? How does syntax become semantics? Human brains are really good at this, but computers are dreadful. They're whizzes at syntax. You can tell them anything, if you tell it in a structured way, but they can't make sense of it; they keep deciding that "The flesh is willing but the spirit is weak" in English translates to "The meat is full of stars but the vodka is made of pinking shears" or suchlike in Russian.
So the guess has always been that you need a whole lot of syntactically stable statements in order to come up with anything interesting. In fact, you need a whole brain's worth - millions. Now, no one has proved this approach works at all, and the #1 advocate for this approach was a man named Doug Lenat of the CYC corporation, who somehow ended up on President Ashcroft's post-coup blacklist as a dangerous intellectual and hasn't been seen since. But the basic, overarching idea with the Semweb was - and still is, really - to throw together so much syntax from so many people that there's a chance to generate meaning out of it all.
As you know, computers still aren't listening to us as well as we'd like, but in the meantime the Semweb technology matured, and all of a sudden centralized databases - and Amazon and Ebay were prime examples of centralized databases with millions of items each - could be spread out through the entire web. Everyone could own their little piece of the database, their own part of the puzzle. It was easy to publish the stuff. But the problem was that there was no good way to bring it all together. And it was hard to create RDF files, even for some programmers - so we're back to that steep learning curve.
That all changed - surprisingly slowly - in late 2004, when, with little fanfare, Google introduced three services, Google Marketplace Search, Google Personal Agent, and Google Verification Manager, and a software product, Google Marketplace Manager.
. . . . .
Google Marketplace Search
Marketplace Search is a search feature built on top of the Google Semantic Search feature, and it's likely nearly everyone reading will have used it at least once. You simply enter:
sell:martin guitar
to see a list of people buying Martin-brand acoustic guitars, and
buy:martin guitar
to see a list of sellers. Google asked for, and remembered, your postal code, and you could use easy sort controls inside the page to organize the resulting list of guitars by price, condition, model number, new/used, and proximity. The pages drew from Google's “classic,” non-Semantic-Web search tools, long considered the best on the Web, to link to information on Martin models and buyer's guides, as well as from Google's Usenet News archive. Links to sites like Epinions filled in the gaps.
So where did Google Marketplace Search get its information? The same way Google got all of its information - by crawling through the entire web and indexing what it found. Except now it was looking for RDDL files, which pointed to RDF files, which contained logical statements like these:
(Scott Rahin) lives in Zip Code (11231). (Scott Rahin) has the email address (ford@ftrain.com). (Scott Rahin) has a (Martin Guitar). [Scott's] (Martin Guitar) is a model (245). [Scott's] (Martin Guitar) can be seen at (http://ftrain.com/picture/martin.jpg). [Scott's] (Martin Guitar) costs ($900). [Scott's] (Martin Guitar) is in condition (Good). [Scott's] (Martin Guitar) can be described as “Well cared for, and played rarely (sadly!). Beautiful, mellow sound and a spare set of strings. I'll be glad to show it to anyone who wants to stop by, or deliver it anywhere within the NYC area.”
What's important to understand is that the things in parentheses and brackets above are not just words, they're pointers. (Scott Rahin) is a pointer to http://ftrain.com/people/Scott. (Martin Acoustic Guitar) is a pointer to a URL that in turn refers to a special knowledge database that has other logical statements, like these:
(Martin Guitar) is an (Acoustic Guitar). (Acoustic Guitar) is a (Guitar). (Guitar) is an (Instrument).
Which means that if someone searches for guitar, or acoustic guitar, all Martin Guitars can be included in the search. And that means that Scott can simply say he has a Martin, or a Martin guitar, and the computers figure the rest out for him.
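Here is a deliberately simplified sketch of that class-hierarchy trick: walk the "is a" chain so that a listing tagged "Martin Guitar" turns up in a search for "Guitar" or "Instrument". The three-entry taxonomy and the search function stand in for the knowledge database mentioned above; they are illustration only, not anything Google shipped.

```python
# A tiny "is a" taxonomy and a search that follows it upward.
IS_A = {
    "Martin Guitar": "Acoustic Guitar",
    "Acoustic Guitar": "Guitar",
    "Guitar": "Instrument",
}

def ancestors(category):
    """Yield the category and everything it is a kind of."""
    while category is not None:
        yield category
        category = IS_A.get(category)

listings = [("Scott's Martin", "Martin Guitar", 900)]

def search(query_category):
    return [name for name, cat, price in listings
            if query_category in ancestors(cat)]

print(search("Guitar"))      # ["Scott's Martin"]
print(search("Instrument"))  # ["Scott's Martin"]
```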
Actually, I just lied to you - it doesn't work exactly that way, and there's a lot of trickery with the pointers, and even the verb phrases are pointers, but rather than spout out a few dozen ugly terms like namespaces, URIs, prefixes, serialization, PURLs, and the like, we'll skip that part and just focus on the essential fact: everything on the Semantic Web describes something that has a URL. Or a URI. Or something like that. What that really means is that RDF is data about web data - or metadata. Sometimes RDF describes other RDF. So do you see how you take all those syntactic statements and hope to build a semantic web, one that can figure things out for itself? Combining the statements like that? Do you? Come on now, really? Yeah, well no one does.
So Google connects everyone by spidering RDF and indexing it. Of course, connecting anonymous buyers and sellers isn't enough. There needs to be accountability. Enter the "Web Accountability and Rating Framework." There were a lot of competing frameworks for accountability, but this one was finally certified by the World Wide Web Consortium (before the nuclear accident at MIT) and by ECMA, and it's now the standard. How does it work? Well:
On Kara Dobbs's site, we find this statement:
[Kara Dobbs] says (Scott Rahin) is (Trustworthy).
On James Drevin's site, we find this statement:
[James Drevin] says (Scott Rahin) is (Trustworthy).
And so forth. Fine - but how do you know how to trust any of these people in the first place? Stay with me:
On Citibank's site:
[Citibank] says (Scott Rahin) is (Trustworthy).
On Mastercard's site:
[Mastercard] says (Scott Rahin) is (Trustworthy).
And inside Google:
[Google Verification Service] says (Scott Rahin) is (Trustworthy).
and if
[Citibank] says (Kara Dobbs, etc) is (Trustworthy).
then you start to see how it can all fit together, and you can actually get a pretty good sense of whether someone is the least bit dishonest or not. Now, this raises a billion problems about accountability and the nature of truth and human behavior and so forth, but we don't have the requisite 30 trillion pages, so just accept that it works for now. And that a lot of other stuff in this ilk is coming down the pike, like:
[The United States Social Security Administration] says (Pete Jefferson) was born in (1992).
Which means that Pete Jefferson can download smutty videos and “adult” video games from the Internet, since he's 19 and has a Social Security number. That's what the Safe Access for Minors bill says should happen, anyway. And don't forget the civil liberty ramifications of statements like these:
[The Sheriff's Department of Dallas, Texas] says (Martin Chalbarinstik) is a (Repeat Sexual Offender).
[The Sheriff's Department of Dallas, Texas] says (Dave Trebuchet) has (Bounced Checks).
[The Green Party, USA] says (Susan Petershaw) is a (Member).
Databases are powerful, and as much as they bring together data, they can intrude on privacy, but rather than giving the author permission to become a frothing mess lamenting the total destruction of our civil liberties at the hand of cruel machines, let's move on.
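For what it's worth, here is one way the trust statements above could be combined mechanically. The rule used below, trust anyone vouched for by at least two parties you already treat as roots, is invented for illustration; the story deliberately skips over what a real web-of-trust policy would look like.

```python
# "Who says who is trustworthy" statements, combined with a made-up rule.
STATEMENTS = [
    ("Citibank",   "saysTrustworthy", "Scott Rahin"),
    ("Mastercard", "saysTrustworthy", "Scott Rahin"),
    ("Citibank",   "saysTrustworthy", "Kara Dobbs"),
    ("Kara Dobbs", "saysTrustworthy", "Scott Rahin"),
]

ROOTS = {"Citibank", "Mastercard", "Google Verification Service"}

def is_trusted(person, min_vouchers=2):
    vouchers = {s for s, p, o in STATEMENTS
                if p == "saysTrustworthy" and o == person and s in ROOTS}
    return len(vouchers) >= min_vouchers

print(is_trusted("Scott Rahin"))  # True  (Citibank and Mastercard vouch)
print(is_trusted("Kara Dobbs"))   # False (only Citibank among the roots)
```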
Anyway, when you think about it, you can see why Google was a natural to put it all together. Google already searched the entire Web. Google already had a distributed framework with thousands of independent machines. Google already looked for the links between pages, the way they fit together, in order to build its index. Google's search engine solved equations with millions of variables. Semantic Web content, in RDF, was just another search problem, another set of equations. The major problem was getting the information in the first place. And figuring out what to do with it. And making a profit from all that work. And keeping it updated....
. . . . .
Google Marketplace Manager
Well, first you need the information. Asking people to simply throw it on a server seemed like chaos - so enter Google Marketplace Manager, a small piece of software for Windows, Unix, and Macintosh (this is before Apple bought Spain and renamed it the Different-thinking Capitalist Republic of Information). The Marketplace Manager, or MM, looked like a regular spreadsheet and allowed you to list information about yourself, what you wanted to sell, what you wanted to buy, and so forth. MM was essentially a "logical statement editor," disguised as a spreadsheet. People entered their names, addresses, and other relevant information about themselves, then they entered what they were selling, and MM saved RDF-formatted files to the server of their choice - and sent a "ping" to Google which told the search engine to update their index.
When it came out, the MM was a little bit magical. Let's say you wanted to sell a book. You entered “Book” in the category and MM queried the Open Product Taxonomy, then came back and asked you to identify whether it was a hardcover book, softcover, used, new, collectible, and so forth. The Open Product Taxonomy is a structured thesaurus, essentially, of product types, and it's quickly becoming the absolute standard for representing products for sale.
Then you enter an ISBN number from the back of the book, hit return, and the MM automatically fills in the author, copyright, number of pages, and a field for notes - it just queries a server for the RDF, gets it, chews it up, and gives it to you. If you were a small publishing house, you could list your catalog. If you had a first edition Grapes of Wrath you could describe it and give it a lowest acceptable price, and it'd appear in Google Auctions. Most of the smarts in the MM were actually on the server, as Google interpreted what was entered and adapted the spreadsheet around it. If you entered car, it asked for color. If you entered wine, it asked for vintage, vineyard, number of bottles. Then, when someone searched for 1998 Merlot, your bottle was high on the list.
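Under the hood, what the MM is described as doing might look, very roughly, like the sketch below: turn a spreadsheet-style row into statements, write them to a file on a server you control, and ping the indexer to come re-crawl it. The output format, the seller URI, the ISBN, and the ping URL are all invented placeholders; there is no such Google API.

```python
# Hypothetical "listing editor": row -> statements -> file -> ping.
import urllib.parse
import urllib.request

row = {"category": "Book", "isbn": "0-000-00000-0",
       "condition": "Used", "price": 12.50}

def row_to_statements(seller_uri, row):
    item = f"{seller_uri}#item1"
    return [(item, "isA", row["category"]),
            (item, "hasISBN", row["isbn"]),
            (item, "inCondition", row["condition"]),
            (item, "costsUSD", str(row["price"]))]

def publish_and_ping(seller_uri, row, outfile="listing.nt",
                     ping_url="http://indexer.example.com/ping"):  # placeholder
    with open(outfile, "w") as f:
        for s, p, o in row_to_statements(seller_uri, row):
            f.write(f"<{s}> <{p}> \"{o}\" .\n")
    # Tell the (hypothetical) indexer where the fresh statements live.
    query = urllib.parse.urlencode({"url": seller_uri})
    urllib.request.urlopen(f"{ping_url}?{query}", timeout=10)

if __name__ == "__main__":
    publish_and_ping("http://ftrain.com/people/Scott", row)
```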
You could also buy advertisements on Google right through the Manager for high-volume or big ticket items, and track how those advertisements were doing; it all updated and refreshed in a nice table. You could see the same data on the Web at any time, but the MM was sweet and fast and optimized. When you bought something, it was listed in your “purchases” column, organized by type of purchase - easy to print out for your accountant, nice for your records.
So, as we've said, Google allowed you to search for buyers and sellers, and then, using a service shamelessly copied from the then-ubiquitous PayPal, handled the transaction for a 1.75% charge. Sure, people could send checks or contact one another and avoid the 1.75%, but for most items that was your best bet - fast and cheap. 1.75% plus advertising and a global reach, and you can count on millions flowing smoothly through your accounts.
Amazon and Ebay - remember them? - doubtless saw the new product and realized they were in a bind. They would have to “cannibalize their own business” in order to go the Google path - give up their databases to the vagaries of the Web. So, in classic big-company style, they hedged their bets and did nothing.
Despite their inaction, before long all manner of competing services popped up, spidering the same data as Google and offering a cheaper transaction rate. But Google had the brand and the trust, and the profits.
It took 2 years for over a million individuals to accept and begin using the new, Semweb-based shopping. During that time, Google had about $300 million in volume - for a net of $4.5 million on transactions. But, just as Ebay and Amazon had once compelled consumers to bring their business to the web, the word-of-mouth began to work its magic. Since it was easy to search for things to buy, and easy to download the MM and get started, the number of people actively looking through Google Marketplace grew to 10 million by 2006.
. . . . .
Google Personal Agent
Now, search is not enough. You need service. You need the computer to help you. So Google also rolled out the Personal Agent - a small piece of software that, in essence, simply queried Google on a regular basis and sent you email when it found what you were looking for on the Semweb.
Want cheap phone rates? Ask the agent. Want to know when Wholand, the Who-based theme park, opens outside of London? Ask the agent. Or when your wife updates her web-based calendar, or when the price of MSFT goes up three bucks, or when stories about Ghanaian politics hit the wire. You could even program it to negotiate for you - if it found a first-edition Paterson in good condition for less than $2000, offer $500 below the asking price and work up from there. It's between you and the seller, anonymously, perhaps even tax-free if you have the right account number, no one takes a cut. Not using it to buy items began to be considered backwards. Just as the regular Google search negotiated the logical propositions of the Semweb, the Personal Agent did the same - it just did it every few minutes, and on its own, according to pre-set rules.
. . . . .
Google Verification Service
Finally, Google realized they could grab a cut of the "Web of Trust" idea by offering their own verification and rating service: $15 a year to answer a questionnaire, have your credit checked, and fill in some bank account information. But people signed up, because Google was the marketplace; the Google seal of approval meant more than the government's.
. . . . .
A Jury of Your Peer-to-Peers
Since all the information was already in RDF format, Google's own strategy came back to bite it. Free clones of Google Marketplace Manager began to appear, and other search engines began to aggregate without the 1.75% cut, trying to find other revenue models. The Peer-to-Peer model, long the favorite of MP3 and OGG traders, came back to include real-time sales data aggregation, spread over hundreds of thousands of volunteer machines - the same model used by Google, but decentralized among individuals. Amazon and Ebay began automatically including RDF-spidered data on their sites, fitting it right in with existing auctions and items for sale, taking whatever cuts they could find or force out of the situation.
In 2006, Citibank introduced Drop Box Accounts for $100/month, then $30, then $15, and $5/month for checking account holders. The Drop Box account is identified by a single number, and can only receive deposits, which can then be transferred into a checking or savings account. They were even URL-addressable, and hosted using the Finance Transfer Protocol. Simply point your browser to account://382882-2838292-29-1939 and enter the amount you want to deposit. There's no risk in giving out a secure drop box number, and no fee for deposits. Banks held the account information of depositors in federally supervised escrow accounts. Suddenly everyone could simply publish their bank account number and sell their goods without any middleman at all.
Feeling the pressure, and concerned, just as the music companies had been years before, that their lead would slip to the peer-to-peer market, Google dropped its fees to 1%, allowed MM users to use Drop Box accounts, and began to charge $25 a year for the MM software and service for sellers, while still making it free for buyers. After a nervous few months, Google found that the majority of users who sold more than 10 items per year - the volume users - were glad to buy a working product with a brand name behind it; the peer-to-peer networks were considered less trustworthy, and the connection to Google advertising was part of the draw. Google also realized that they could offer Drop Box accounts themselves, and tie them to stock and money-market trading accounts, which opened a can of worms that we'll skip over here. If you're interested, you can read The Dragon in the Chicken Coop, by Tom Rawley.
Google's financials can, of course, be automatically inserted into your MM stock ticker; right now they're trading at 25,000 times earnings, heralding news of the “New New New New Economy.” You'll get no such heralding here; while they've pulled it off once, the competition is fierce. Google was the dream company for a little less than the last decade, but they're finally slowing down, and it's high time for a new batch of graduate students too itchy to finish their Ph.D.'s to get on the ball. And I'm sure they will.
. . . . .
A Semantically Terrifying Future?
The cultural future of the Semantic Web is a tricky one. Privacy is a huge concern, but too much privacy is unnerving. Remember those taxonomies? Well, a group of people out of the Cayman Islands came up with a “ghost taxonomy” - a thesaurus that seemed to be a listing of interconnected yacht parts for a specific brand of yacht, but in truth the yacht-building company never existed except on paper - it was a front for a money-laundering organization with ties to arms and drug smuggling. When someone said “rigging” they meant high powered automatic rifles. Sailcloth was cocaine. And an engine was weapons-grade plutonium.
So, you're a small African republic in the midst of a revolution with a megalomaniac leader, an expatriate Russian scientist in your employ, and 6 billion in heroin profits in your bank account, and you need to buy some weapons-grade plutonium. Who does it for you? Google Personal Agent, your web-based pal, ostensibly buying a new engine for your yacht, a little pricey at $18 million, sure. But you're selling aluminum coffeemakers through the Home Products Unlimited (Barbados) Ghost Taxonomy - or nearly pure heroin, you might say - so you'll make up the difference.
Suddenly one of the biggest problems of being a criminal mastermind - finding a seller who won't sell you out - is gone. With so many sellers, you can even bargain. Selling plutonium is as smooth and easy and anonymous (now that you can get Free Republic of Christian Ghana Drop Boxes) as selling that Martin guitar. Couldn't happen? Some people say it can, which explains the Mandatory Metadata Review bill on its way through Congress right now, where all RDF must be referenced to a public taxonomy approved by a special review board. Like the people say, may you live in interesting times. Which people? Look it up on Google.

. . . . .
See also: Robot Exclusion Protocol, Google Search, 12:35 AM, Internet Culture Review, and Speculation: ReichOS, in which Hitler learns about computers.

Why MS Wants a Stake in Yahoo

[MoneyToday, reporter Kim Yu-rim] A plan is taking shape for Microsoft (MS) to acquire part of Yahoo's equity and for the two companies to form a strategic alliance.
The leading option would have MS hand its MSN online network business over to Yahoo in exchange for a stake of less than 50% in Yahoo.
The Wall Street Journal reported on the 3rd (local time) that a strategic alliance between the two companies has sat on the negotiating table as a likely option for several years, but that MS shareholders have recently been pressing CEO Steve Ballmer hard to pursue a deal with Yahoo.
◆ MS: "We will be Google's rival"
Software behemoth MS has chosen the internet business, and in particular search advertising, which couples a search engine with ads, as its next-generation strategic business. The move is a bid to position itself as the counterweight to Google.
Announcing its growth strategy on the 27th of last month, MS said it plans to spend $2 billion more than originally budgeted in the next fiscal year beginning this July. It also plans to formally launch adCenter, a unit devoted solely to search-engine R&D and its online advertising system.
But MSN Search's market share falls far short of those ambitions.
According to research firm NetRatings, MSN Search's share in March was 10.9%, down from 14.2% in the same period last year, while Google and Yahoo grew their shares to 49% and 22.5%, respectively.
That is why search engines with large user bases are always on MS's radar. Last year it pursued a partnership with Time Warner's AOL internet unit, but withdrew the plan when Google agreed to take a 5% stake in AOL.
On the 8th, MS hired Steve Berkowitz of the search engine Ask.com as a vice president of MSN. Berkowitz, a gifted negotiator who has driven some 40 merger deals large and small, is seen as the person who would take charge of a search-engine acquisition.
Google is a threat to Yahoo as well, and this is also why MS shareholders are pressing CEO Steve Ballmer for a tie-up with Yahoo.
Analysts are generally positive about an MS-Yahoo alliance. Walter Price of RCM Capital Management said that trying to catch Google with MS's own search engine would be nothing but a waste of money and time, and that a partnership with Yahoo could be the better alternative.
◆ MS's sprawling, octopus-like management: "no focus"
MS's weakness in the internet business has long been a shareholder complaint. It also stings MS's pride that a company with its organization and market power cannot pull off the internet businesses that the far smaller Google handles so well.
As a result, MS has lately been bent on copying Google, like a patient with a "Google obsession."
MarketWatch commented that Google has already carved out its own territory, and that MS would be wise to focus on its own strengths instead of obsessing over whatever business Google goes into.
The bigger problem is that, absorbed in new business models, MS has increasingly stumbled even in the areas where it is strong.
MarketWatch observed that the delayed launch of Vista, which ought to be the top priority among all of MS's plans, is likely to deal a heavy blow to its credibility and its earnings.
Many argue that MS would be wise to drop network properties such as MSN Messenger if they cannot turn a profit. Beyond that, Office 2007 is failing to meet the market's expectations, and the Xbox video game business has been less impressive than hoped.
Analysts voice concern that MS, long a leader of change, is now not only lagging behind change but also faltering in choosing where to focus its future businesses.
MS shares are hovering around their 1999 level, and on the 3rd, the day the Wall Street Journal report appeared, the stock closed down 33 cents (1.4%) at $23.66.
Reporter Kim Yu-rim, kyr@

Eight Reasons MS Cannot Do Well

[Edaily, reporter Kim Hyun-dong] Microsoft (MS), the world's largest software company, is beset by troubles inside and out.
Externally it is struggling against the challenge from the young Google; internally it is losing the market's confidence as the launch of the much-anticipated Windows Vista slips. Last weekend its share price tumbled on quarterly results that fell short of expectations.

In a column on the 3rd about MS's troubles, MarketWatch columnist John Dvorak laid out eight reasons the company is bound to struggle.

1. The failure of Windows Vista: Windows Vista, which was expected to stimulate IT demand, now looks unlikely to ship before early next year, and even then it is likely to amount to little more than a revision of Windows XP. The failure of Vista, MS's highest priority, will register as an enormous disappointment.

2. A disappointing Office 2007: The Office programs account for a third of MS's total revenue, which makes an Office 2007 with nothing new in it disappointing. On top of that, seven different versions of the product are planned, which will only add to the confusion.

3. MSN, which should have been abandoned ten years ago: MS should have given up on MSN a decade ago. MS should be a company that buys advertising, not one that sells it. MS is a software company, not a media company.

4. MSN Search, a money pit: The search business means nothing for MS.

5. The haphazard Xbox 360 supply: The Xbox 360 has plenty of potential to be a competitive game console, but MS failed to anticipate the delay of Sony's PlayStation 3 and could not supply enough units. It is a case study in how little planning and business sense MS has.

6. The touchpad PC: A few years ago MS chairman Bill Gates predicted that touchpad-style PCs would become the mainstream. And where are they now?

7. The .NET project that lost its edge: The .NET project MS pushed so ambitiously has had no answer to the open-source movement.

8. Obsession with Google: Because of Google's success, MS is no longer concentrating on its own projects but is obsessively fixated on whatever Google does. Talk of the lumbering giant buying a stake in Yahoo is itself a product of that obsession. ☞ Related story: MS pursued a Yahoo stake to check Google - WSJ

Tuesday, May 02, 2006

What Does 600 Million Won Mean in Our Time?

The world we live in, in the "People's Republic of South Korea"...

[From a MoneyToday column by Bong Jun-ho]

600 million won is serious money: a salaried worker on 30 million won a year would have to save every penny for 20 years to put it together. Do the arithmetic and, starting work at twenty-seven, you might just barely touch that much by your mid-forties. Six years of study at a prestigious private university in the U.S. costs 600 million won. It is also 60% of the 1 billion won that young people these days set as their savings goal. And 600 million won is this era's threshold for a "high-end" home.

Most real estate policies that come with a number draw their bar at 600 million won. 600 million won... What does it mean? Why 600 million? I ordered a coffee, blinked, counted out figures with a ballpoint pen, and thought about it for a good while. The price of one house, 600 million won? The price of two houses, 600 million won... Ten pyeong of Gangnam land, 600 million won; ten large sedans, 600 million won... About 70% of apartments in the Gangnam area cost more than 600 million won, and roughly 20% of apartments in Seoul do as well. Every month the number and the share keep growing.

① The 600 million won threshold for the comprehensive real estate tax
The comprehensive real estate tax levied this year applies to homes worth more than 600 million won. Anyone who owns a home with an official assessed value of 600 million won or more is classed as rich and must pay the comprehensive real estate tax at the end of every year. It is a tax of unprecedented scale. With each passing year, owners of property above a certain price will be pressed under a swelling tax burden, and as official property prices rise annually the surge in assessments will continue. With the threshold fixed at 600 million won per home, if housing prices keep climbing the way they are now, the day will soon come when millions of people are subject to the tax.

② The 600 million won line for capital gains tax exemption
Gains made by owners of homes under 600 million won who buy well are still untaxed. You must be a single-home owner and follow the basic rule of holding the home for three years; in Seoul, Gwacheon, and the five new towns you must also meet the further condition of living in it yourself for two years. For example, if you buy a house for 190 million won and it is worth 580 million won three years later, the difference is not taxed. Homes above 600 million won are classified as high-priced and are not exempt from the capital gains tax.

From 2007 all apartments will be taxed on actual transaction prices; through 2006, homes that meet certain conditions, such as lying outside the speculation zones, are taxed on the standard assessed price, which usually runs at 70% to 90% of the actual price. Even for a single home worth 600 million won or less, you pay capital gains tax of 50% of the gain if you held the home for less than a year and 40% if you held it for one to two years; after two years or more, a progressive rate of 9% to 36% applies depending on the size of the gain.
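(If it helps, the holding-period rule quoted above reduces to a very small lookup. The sketch below encodes only what the column states, for a single home of 600 million won or less: under one year, 50% of the gain; one to two years, 40%; two years or more, a progressive 9-36% rate whose brackets the column does not give, so that case is left unresolved rather than guessed at.)

```python
# Capital gains ("transfer") tax rule as stated in the column, nothing more.
def transfer_tax(gain_won: int, years_held: float):
    if years_held < 1:
        return gain_won * 0.50
    if years_held < 2:
        return gain_won * 0.40
    return None  # progressive 9-36% of the gain; brackets not given in the column

print(transfer_tax(100_000_000, 0.5))  # 50000000.0 won
print(transfer_tax(100_000_000, 1.5))  # 40000000.0 won
print(transfer_tax(100_000_000, 3.0))  # None: progressive 9-36% would apply
```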

③ The 600 million won line for negotiated brokerage fees
Brokerage fees also become a real burden when the transaction amount is large. From 200 million up to 600 million won the fee is 0.4%; above 600 million won the broker and the client are supposed to negotiate a rate between 0.2% and 0.9%. In practice the broker wants to charge 0.9% and the buyer or seller wants to pay 0.2%. For properties above 600 million won, the ones classified as high-end, the rate inevitably varies with the person and the quality of the service, and the rule that makes fees on such homes a matter of "negotiation" often ends up embarrassing both sides or planting the seed of a dispute.

④ Loan restrictions on apartments over 600 million won
The March 30 measures announced restrictions on loans for apartments over 600 million won in designated speculation zones. The so-called DTI (debt-to-income) ratio, which ties the size of a mortgage to annual income, applies to apartments of 600 million won or more. In effect, the policy limits financing for anyone trying to buy a home above 600 million won, and with it the chance to leap ahead on borrowed capital.

The measure is having some effect on the market. But is it lawful to draw a 600 million won bar across a person's own property and restrict lending against it? Such a rule sits badly with an advanced, evolving economic system and raises yet another question, one whose constitutionality will have to be examined in detail. Before that, though, what deserves scrutiny is the standard for a "high-end" home and the legitimacy of the 600 million won figure itself.

⑤ Reverse mortgages limited to homes of 600 million won or less
A reverse mortgage loan is a long-term loan in which you pledge the home you own to a financial institution and receive a fixed amount each month, like a pension. Homes worth 600 million won or less qualify. A person aged 65 or older can pledge such a home and draw money for 15 or 20 years. It is a sophisticated financial product that lets you live in your own house and draw money from the bank until you die. But this, too, applies only to homes of 600 million won or less; a home above 600 million won is deemed a rich person's house and does not qualify.

The owner of a 600 million won home can borrow roughly 300 million won, and on a 15-year plan can draw about 1.8 million won a month.

The 600 million won here is measured by the standard assessed value for taxation, and when the government finalized its plan to promote reverse mortgages it described them as "available for low- and mid-priced homes of 600 million won or less owned by the middle and working classes." The "middle class," it seems, is a remarkably broad category.


◆ The philosophy of 600 million won

Abroad there are houses worth 60 billion won, and plenty of people own six homes or more. The rich hold astronomical sums, and who owns how many luxury homes is not news. In Korea, though, 600 million won now feels like the line that separates the working class from the middle class. It has been the legal standard for a "high-end" home for more than a decade, yet regardless of inflation or the surge in housing prices it is being used, as time passes, as a threshold that levels everything downward.

A society in which a house forces us to re-argue the logic of a capitalist state, the value of goods, and the basic principles of real estate investment looks like a scene from the ideological era of some fifty years ago. Save, grow, roll it over; the young man becomes a father and then a grandfather. The house bought with 100 million won of seed money earned from wages and business income plus a 100 million won loan has, through round after round of inflation, become a 600 million won house.

All money must be invested in productive work that the government and society approve of; you must not buy a house expecting it to rise; you should buy only the house you need, only as big as you need; you must pay whatever tax you are told to pay; and if prices rise and you end up holding a home worth more than 600 million won, you become the target of every kind of regulation and higher taxation... How are we supposed to take the current raft of policies, under which this is the way of life the state recommends to its citizens?

Most people on the street neither know what 600 million won means nor buy houses expecting big gains. They simply feel the weight of taxes that suddenly multiply severalfold, and the burden of policies that divide the middle class from the working class, of the government's excessive regulation of real estate, and of media market reports announcing that prices somewhere have jumped yet again.

Monday, May 01, 2006

Google's Meaning Layer

How Google builds systems to enable the extraction of meaning from distributed information.

by OWT


Yes, Amazon does have a very good system to extract meaning from seemingly disconnected information, but that is because the system is laid out to allow that to happen.

With Google, things are laid out differently. Advances in search technology allow more meaning to be picked out of the stream of information, and the bigger the system, the better the prediction. So where does Google get its meaning? From an analysis of connections, for one. For this we need to take a closer look at what Google already knows and might know in the future.

First, they know what I search for, especially if I use the personalised search they offer at google.com/ig. That is a lot of meaning right there if you aggregate it. On top of that, they know which links I click, and if that site has Google AdSense on it, they know that I ended up there, how long I spent, and whether I went on to further sites. If those further sites also carry Google AdSense, they can track my entire path. Because of the login I have at google.com, any cookie they set can never really be deleted consistently, since it will always be reset with the right ID when I log back in.

If I click on an AdWords link, either on Google or on a Network site, then they know what that click was worth, and as some AdWords users use their tracking system to integrate further steps along a sign-up or sales process into the AdWords system, they potentially know that I bought something and might even know what it was worth for the advertiser.

If I search for Sony, TV, HDTV and other terms and then click an HDTV ad that was keyed to HDTV, Buy, Big, Crisp, and so on, they know I am looking for an HDTV with a big screen, and if I buy it they know I have it. They also know through which shop, as that name is attached to the AdWords account.

When Google Base launches, it will add a whole layer of meta-information to the mix, since the service seems to be set up so that you have to attach metadata such as "this is a house I am selling for 350,000 EUR in Cologne, Germany, with 350 square meters of ground."

Once they grow their own WiFi network, which is starting in San Francisco, they will know exactly what I surf to. The same is true for the Google Accelerator, which has just relaunched. Through Google Mail they know whom I talk to and possibly what those people search for and do. Aggregating this lets them map my community of practice and my general interests.

Yes, it is not as clear and easy as it is for Amazon, but because of that diversity the information is far richer, if analyzed correctly. And that is why they need the huge number of servers they have, and why they have developed their own internal API and programming language that exists for one purpose only: the analysis of huge amounts of data distributed across thousands of servers.
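The author is presumably alluding to something like Google's MapReduce-style infrastructure. Below is a single-machine toy of that pattern, a per-shard "map" step and a merging "reduce" step over made-up query logs; it illustrates the programming model only, not anything Google actually runs.

```python
# Toy map/reduce over log "shards" that would live on different servers.
from collections import Counter
from functools import reduce

log_shards = [
    ["hdtv", "sony hdtv", "buy hdtv"],
    ["hdtv", "cheap flights", "sony tv"],
]

def map_shard(queries):
    """Per-shard work: count the terms seen in that shard's logs."""
    return Counter(term for q in queries for term in q.split())

def reduce_counts(a, b):
    """Merge two partial counts into one."""
    return a + b

partials = [map_shard(shard) for shard in log_shards]  # parallel in a real system
totals = reduce(reduce_counts, partials, Counter())
print(totals.most_common(3))  # e.g. [('hdtv', 4), ('sony', 2), ('buy', 1)]
```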

Wednesday, April 26, 2006

"Google Has Grown Too Big": The Pushback Begins in Earnest

(:: eBay, Amazon and others cut off cooperation over Google's sprawling expansion ::) Google, the world's largest internet search company, is reeling under an all-out offensive from competitors and from every corner of society. Large internet companies that were once partners, such as eBay and Amazon, are criticizing Google for potentially encroaching on their businesses, and the newspaper and publishing industries, human rights groups, and even the U.S. government are now mounting a concerted pushback. On the 20th Google even had to take down an image by the Spanish painter Joan Miró that it had posted on its website after the artist's estate accused it of infringing his rights. In a recent article the U.S. business weekly BusinessWeek wrote that Google is under a "phalanx" attack, after the ancient Greek battle formation, and catalogued nine anti-Google fronts.
◆ eBay, Amazon, and the phone and internet companies: eBay, the largest online auction company and long a Google partner, is preparing to launch its own search and shopping service called "Magellan" this week and is assembling an anti-Google coalition with Microsoft and Yahoo, BusinessWeek reported.
eBay has drawn customers through Google advertising, but with the new service it has reportedly concluded that it no longer needs to advertise as before or to buy search keywords from Google in bulk. Earlier, Amazon, the world's largest online bookseller, responded to soaring costs for Google keywords by opening its own search site, A9.com, two years ago and offering users discounts. Google is also at odds with the broadband providers, including AT&T, the largest U.S. phone carrier, which recently accused Google of "trying to get a free ride" as it moved to strengthen its wireless services. Small internet companies offering specialized search in areas such as shopping and job listings are likewise openly wary of Google's growth into a giant.
◆ The newspaper and publishing industries and the adult entertainment business: Newspapers see Google's search service as a fundamental threat, since classified advertising such as job listings, which accounts for 35% of newspaper industry revenue, has shrunk sharply because of Google. Publishers have been fighting a copyright war ever since Google launched its "Google Books" service in early 2005, scanning millions of books held by five of the world's great libraries and making them searchable on its site.
Google argues that it excerpts only portions to make the books searchable, but the U.S. authors' association and the publishing industry filed suit last September. The adult entertainment industry, for its part, is poised to take collective action, claiming that Google effectively exhibits nude photos and other material through its search service without paying royalties.
◆ The U.S. Justice Department and human rights groups join in: Google has been on awkward terms with the U.S. government since early this year, when it refused a Justice Department demand to hand over one million URLs accessed through its site and one million search queries from a particular week. In litigation over the 1998 Child Online Protection Act the department demanded the same data from Yahoo, Microsoft, and others, but only Google refused, citing company secrets and the protection of personal information.
Yet by accepting the Chinese government's search censorship, Google has become a target of human rights groups. Meanwhile Google Earth, with its startlingly detailed satellite imagery, has drawn the wariness of security authorities in governments around the world.
BusinessWeek predicted that "the more new markets Google keeps entering, the more enemies it will create who are bent on defending their turf."

Sunday, April 23, 2006

The New Marketing Remix

The remix is
From Place to Presence;
From Promotion to Persuasion;
From Positioning to Preference;
From Price (static) to Price (dynamic); and
From Product to Personalization.

--- Original text below -----------

Marketing Remix (with Antony Paoni)
It has been reported that Wal-Mart, which represents almost two percent of the Gross Domestic Product of the United States, is "afraid" of Google because the powerful search engine may make price comparison more transparent, and thereby hit the powerful retailer right in the heart of its value proposition. If a capable upstart or competitor jumps ahead of you in the demand chain, they can poach your customers and steal the value that you extract from the entire rest of the customer experience. All that hard work to build stores, logistics, and low-cost delivery is at risk if the customer is diverted at the very beginning of the demand chain. Put another way, being first in the demand chain is the most valuable place to be in any value network.
Traditional radio stations are concerned that the new satellite radio systems, which can beam hundreds of commercial-free channels to the car, the person, or the home, may upend their entire value proposition. Consumer marketers are scared to death that their traditional means of "getting to market" with their message is under attack among their current customer base, and that the up-and-coming generation is so interactive-media savvy that only the most naive marketer would think it will sit still for all the traditional commercial interruptions. Anyone who has a digital video recorder -- like TiVo -- only watches 20 percent of the commercials. Commercials will continue to be important, and even desired by some, but traditional approaches to reaching customers need to be rethought!
Why is all this happening now? Over the past ten years, an emerging information network has unloosened information from its traditional moorings. For example, information about consumer products used to be distributed through very traditional channels, like the television, your Sunday newspaper, and the store itself. Today, you often see a beleaguered professional standing in the aisle of a supermarket, on the cell phone with his significant other, asking which of a set of options should be purchased. The information and influences are not tethered to their traditional moorings. Up until recently, information was usually situated in a given context. When you went to the doctor, you got medical information; when you visited the store or opened your newspaper, you received coupons and flyers. Today, you can get rich media while you are waiting for your dentist, or in line at the Dunkin Donuts. In fact, one of the largest television networks in the country is Wal-Mart TV, the television displayed inside Wal-Mart stores. This seems absurd until you realize that over 60 million Americans visit a Wal-Mart store each day. McDonald’s serves over 50 million people a day, but has yet to realize its potential as an information distribution network.
The general trend is that information is flowing to the places where people flow. With the growth of broadband wireless networks like the ones promoted by Verizon and other carriers, soon tens of millions of Americans will be able to watch streaming TV, games, and other media on their phones – yet another extension of information being freed from its traditional distribution moorings. Already the vast majority of car shoppers arrive armed with information gathered outside the selling environment. But this is not isolated to cars – it is the core of a broad phenomenon of information flowing to where the people, and the curiosity, are.
In an increasing number of cases, not only is the information moving, but so is the ability to transact. The reason Google is so scary to Wal-Mart is not just that it facilitates comparison but that, given the interactive nature of the internet, people can also transact and then choose to pick up the goods or have them shipped.
What’s Happening to the “4 Ps” of Marketing?
Why is this phenomenon so important to business leaders? Because the traditional marketing model is insufficient to address the reality of today’s customers. The entire science of marketing has been developed to understand who buys, why they buy, and how they buy. The traditional marketing mix is made up of product, place, promotion, and price -- all consistent with a positioning for the product or service. The power of this model was to point out the key tools that firms have to bring their product or service to market successfully.
Each of these concepts becomes much more complex and diffuse in this new world. "Place" is not so obvious, for the place where people shop is now a combination of physical and informational environments.
Promotion is not so clear, because while formal, outbound efforts like advertising and couponing will continue, marketers must also acknowledge the self-organized nature of user-defined ratings of products and services. These are influential and beyond the marketer's control. It is now much more about word of mouth -- turbocharged by peer-to-peer communications like the phone and the internet.
Product is still vital, but the service wrappers around the product, and the ability to make that product easy to purchase, are more critical than ever.
Lastly, price is much more dynamic than it used to be. Price comparisons are much more transparent than just a few years ago, and getting more so. In many markets, from books to used cars, the influence of the used market is completely changing the pricing dynamics -- with new products competing with used substitutes that can be from 25% to 99% cheaper than their new alternatives.
What's a poor marketer to do? Well, it is time to do a remix of the marketing mix. Just as in any remix, the old notions are still there, and underlie the remix, but the new layer on top is hip, and makes the old song come alive again -- with a new audience, new buzz, and new power.
The Elements of the New Marketing ReMix
The remix is from Place to Presence; from Promotion to Persuasion; from Positioning to Preference; from Price (static) to Price (dynamic); and from Product to Personalization. These are the key elements of the new marketing remix. There are no organizations that I know of that contain all these elements, but we know that companies that want to win through superior consumer insight must be able to manage both the mix and the remix in the future. Let me explain.
In thinking through the remix, the most important thing is to understand where your demand chain begins, and to then make sure you have "presence" at the front end of the demand chain. There is nothing more powerful in all of marketing and selling than knowing when and how and "where" a customer begins to come into a market and think about buying something. These companies are first in line in the demand chain, have first and best knowledge of the flow of customer desires, what they look for and how they shop.
The new upstarts that are first in the demand chain right now are AOL, Yahoo!, and Google, among others. There are many others in category-specific ventures like Edmunds, the popular car shopping site, and Netblue, the upstart company that bought up many of the words concerning credit cards on Google. This company now captures those customers who are looking for a credit card, gets them to fill out credit information, credit-scores them, and then sells the qualified leads to the highest bidder among the giant credit card issuers. This entrepreneur realizes that the most important thing is to be first in the demand chain, and by doing so to have access to the flow of demand. By adding qualification through a credit score lookup, this company has turned the raw material of customer search into the marketable, refined product known as a hot lead. If the credit card companies had been thinking presence, and not promotion, they would never have let someone jump the line on them like this.
Wal-Mart's worry about Google is similar: that by dominating the cognitive space of search, Google will -- conceptually -- stand in front of every Wal-Mart store with a comparative shopping guide that may drive customers to a different place.
From Place to Presence
The question senior managers need to ask themselves is, in this new world, in which information flows freely, and all customers can actively search for my product or service, and compare competitors and substitutes: Are we first in line? Are we in all the places we should be where people are searching for products and services? Do we have a presence in these new marketplaces and marketspaces? Or are we still lashed to offering our marketing and persuasive efforts to customers when they come to our distribution, store, or place of advertising? In the words of the old joke, are we looking for the key under the light?
This is a difficult and sometimes painful question to ask, because most companies have become successful precisely because they do have a good idea of who buys, why they buy, and how they buy. In this new setting, management teams need to question their core assumptions about how customers buy, what their buying influences are, and where they are migrating. The other challenge for many management teams is that they are often so busy that their own information tools and techniques are set in concrete. It’s more comfortable to assume that their customers' information consumption patterns are just as stable -- which is a very bad assumption. Most senior executives don't even know how the Google page rank algorithm works, yet it is the most important thing to happen to advertising since television. This understanding is not a "technical" issue -- it is a business issue, and one which senior executive teams need to understand, because search is at the beginning of every value chain. The movement from simple place to presence can’t be ignored.
From Promotion to Persuasion
From promotion to persuasion is another trend. The fundamental thing going on here is that a company's outbound marketing activity is only a part of the process -- users now are in charge of indexing and evaluating the torrent of information and alternatives they now have. In fact, information and opinion are generated at such an increasing rate that the only practical way for them to be organized is by the users themselves.
Put another way, old methods of promotion tried to understand the psychological state of the buyer and put a promotional activity in front of them, preferably while in the middle of the buying process. Today you need to consider not only the psychological, but the social, aspects of the buying process -- hence the broader concept of persuasion, not just promotion. Google is the poster child for this notion of social persuasion. The entire Google business model is based on turbocharging word of mouth -- or social cues. How is this so? The way Google’s page rank algorithm works is to rate the most popular sites by how many people point to them and, more subtly, to assess whether other popular sites point to them too. In other words, it is the world's biggest and most dynamic popularity contest. The genius of Google is that they figured out a scalable, fast, automatable algorithm to take all the bread crumbs of social interaction, and make them into bread pudding! So, when you think about it, Google is creating the space of the most persuasive sites on the internet for a given topic. Then, their business model is to let you as a marketer invite yourself to that party by either buying words or by buying a placement. They keep their space uncluttered by useless ads because if the community does not click on the ads, they are less likely to appear in the space for long. Hence, it is a persuasion network that is self-adjusting. This is very powerful indeed, and is driven -- not by the promotional dollars of the businesses -- but by the search and rating behavior of the customers. It is a market in persuasion, if you will.
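For readers who want the "popularity contest" made concrete, here is a toy power-iteration PageRank: pages pointed to by many well-pointed-to pages rank higher. It is the textbook formulation of the idea sketched above, not Google's production system, and the three-page web in the example is invented.

```python
# Textbook PageRank by power iteration.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                    # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

web = {"walmart.com": ["supplier.com"],
       "supplier.com": ["walmart.com"],
       "blog.example": ["walmart.com", "supplier.com"]}
for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
    print(f"{page:14s} {score:.3f}")
```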
Google is not the only one. The articulation of social rankings of products and services is springing up all over the place. The most common, and perhaps longest-standing, metric is frequency. The New York Times bestseller list has been influential because many people are happy to use popularity as a way to sort through the mounds of book options. In the interactive marketspaces, this type of frequency ranking is ubiquitous and trivial to implement. For any product or service, Amazon can tell you its ranking in terms of purchase behavior, and what products and services are usually purchased with it. Everything has a user rating on it, too. These user-driven evaluations, along with the articulation of buying behavior, are huge new social influences on what persuades a customer to buy your product or service.
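To show just how trivial frequency ranking is once a transaction log exists, here is a small sketch in Python. The order data and item names are made up, and this is not Amazon's actual method -- only an illustration of the sales-rank count and the "usually purchased with" pairing.

from collections import Counter
from itertools import combinations

# Hypothetical order data: each order is the set of items bought together.
orders = [
    {"book", "lamp"},
    {"book", "bookmark"},
    {"lamp", "bulb"},
    {"book", "bookmark", "lamp"},
]

# Frequency ranking: how often each item is purchased.
purchase_rank = Counter(item for order in orders for item in order)

# "Customers who bought X also bought Y": co-purchase counts per pair.
co_purchase = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_purchase[(a, b)] += 1

print(purchase_rank.most_common())
print(co_purchase.most_common(3))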
Word of mouth is a concept as old as commerce itself, but your reputation now lives in a world of forces moving with huge velocity and subject to automated evaluation. In addition, there are new social worlds of chat and influence. P&G created a unit called Tremor, whose job is to monitor and influence influential teenagers -- who can drive the perception of your brand or product. Buzz management is now part of the marketing plan. Buzz also happens online, in user ratings, and passively in chat rooms. You might say that real men don't do chat rooms; put another way, many senior managers are skeptical that anything valuable can be discovered in something like a chat room. However, a leading market research firm analyzes chat room content with a sophisticated scoring mechanism that identifies the positive and negative concepts embedded in the dialog (and, by the way, these dialogs are publicly available, for chat behavior is not private speech). Based on the early buzz, or lack of buzz, reflected in the chat rooms, the firm has successfully predicted whether or not a TV pilot will make it to launch. At a conceptual level, TV shows were always previewed to audiences to see how well they resonated, but this scale and precision has never been possible before. It is now possible to be more scientific about it, and not let a show live or die based on the prejudices of the producer or the TV executive.
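Conceptually, this kind of buzz scoring can be sketched as counting positive and negative words across chat messages. The word lists, the messages, and the crude counting below are my own illustrative assumptions, not the research firm's actual model, which is described only in general terms above.

# A crude illustration of lexicon-based buzz scoring (hypothetical data).
POSITIVE = {"love", "great", "funny", "hooked"}
NEGATIVE = {"boring", "hate", "awful", "skip"}

def buzz_score(messages):
    """Return (volume of chatter, net positive-minus-negative word count)."""
    net = 0
    for message in messages:
        words = message.lower().split()
        net += sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return len(messages), net

chatter = ["I love this pilot, so funny", "kind of boring honestly", "I'm hooked"]
print(buzz_score(chatter))  # -> (3, 2): lots of chatter, net positive tone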
Likewise, all products and services out in the marketspaces are leaving bread crumbs of information about how customers perceive them and the competition. Given the "one click away" culture we are evolving towards, marketers need to begin to analyze persuasion, not just promotion. It is also just a matter of time before the traditional broadcasters, in their ever-widening thirst for content, begin to report the "most popular" books on Amazon, the best-rated cars on Edmunds, and so on. The traditional media will amplify the influence of the interactive media. The need to move from simple promotion to an understanding of persuasion is growing, and now is the time to create or participate in those trusted influence points.
From Positioning to Preference
Positioning a product or service is one of the most difficult and most important processes in marketing. The difference between paying $50,000 for a Mercedes E-Class and $30,000 for a Buick is the result of positioning, for the product quality of today's Buick is higher than that of the Mercedes. Great positioning is easy to recognize once accomplished: Nike's positioning, Apple's, and IBM's are all great examples. Yet today we see a long tail of consumer preferences, and understanding those preferences is vital to positioning a product or service.
Again, the core idea of marketing to someone's preferences is as old as commerce. When I was a young man going to Filene's Basement in Boston, Mr. Sugarman would see me coming down the aisle with my mother and would pull out suits that were not only my size, but also within the range he knew my mother would let me buy. Within five minutes he already had me headed to the dressing room with three "perfect" suits to try on.
All of direct marketing is based on micro-adjustments made to offers based on analysis of preference. But today we have at least four ways to capture preference. First, there are customer ratings of what they like and don't like. Second, there is transaction history, from which we can infer what they like and don't like. Third, there is search behavior, which shows people in the process of searching for information. And fourth, in some instances, there are configuration tools that let customers design the product or service they'd like to have -- a more comprehensive articulation of the tradeoffs customers are willing to make across features and prices.
With these new tools we have, for the first time, a CAT scanner for customer preferences. By CAT scanner, I mean that as marketers we can observe the customer in the process of searching for information on products and services -- and intervene only when we think it is efficient. Much of Yahoo!'s interest in buying a piece of AOL was that AOL has a much more complete history, by individual, of search behavior -- that is, of passive preference. If an online advertiser has a richer search history, it is possible to increase the efficacy of an online ad by 80 percent (as measured by click-through). Understanding preference is also a way to get customers to stay with you. I have rated over 165 movies in my NetFlix account, and I can't easily move those ratings to another site. I can even share my ratings with others and build up a network of trusted advisors within the company as well -- again, something that increases stickiness. I like the recommendations NetFlix creates for me, and I am unlikely to change suppliers. Furthermore, preferences help to energize the persuasion effect I mentioned earlier. The entire notion of "recommenders" is based on matching your preferences to those of others like you. In this situation, your product is positioned not as you want to position it, but against other products that people with preferences similar to your customers' see as comparable. This is a new logic of interaction with your customers: they are driving the core reference set of what you are compared to -- which makes preference more fundamental than positioning.
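Here is a minimal sketch of the "recommender" idea: scoring unseen items by the ratings of users whose preferences resemble yours. The ratings data, the cosine-similarity measure, and the simple scoring rule are illustrative assumptions, not NetFlix's actual system.

from math import sqrt

# Hypothetical user ratings (user -> {movie: stars}).
ratings = {
    "alice": {"Toy Story": 5, "Heat": 2, "Amelie": 4},
    "bob":   {"Toy Story": 4, "Heat": 1, "Shrek": 5},
    "carol": {"Heat": 5, "Ronin": 4},
}

def similarity(a, b):
    """Cosine similarity over the movies two users have both rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][m] * ratings[b][m] for m in shared)
    norm_a = sqrt(sum(ratings[a][m] ** 2 for m in shared))
    norm_b = sqrt(sum(ratings[b][m] ** 2 for m in shared))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Score movies the user hasn't seen by the ratings of similar users."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for movie, rating in ratings[other].items():
            if movie not in ratings[user]:
                scores[movie] = scores.get(movie, 0.0) + sim * rating
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(recommend("alice"))  # Shrek ranks high because Bob's tastes match Alice's

The point is the inversion described above: in this sketch, Shrek's "position" for Alice is set by Bob's tastes and ratings, not by anything Shrek's marketer chose to say.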
From Static to Dynamic Pricing
The shift from static to dynamic pricing is where organizations have the greatest potential to leak value. Any manager knows that pricing leverage is the best way to improve the bottom line, because price increases flow straight through to it. Unfortunately, creating pricing leverage is incredibly difficult; report after report states that most businesses are having a hard time doing it. This is due to a combination of factors: production costs keep falling, for most businesses are not only more productive today, they are also sourcing from a global base of talent and manufacturing that is constantly lowering the cost of production. Furthermore, most industries have overcapacity. And if that were not enough, the internet has brought easy price comparison to many categories, and to more of them all the time. What is going on today is that price is much more transparent to the end customer. Price transparency in autos and travel is old news. What is interesting is that price transparency is becoming so widespread that even the global low-cost leader, Wal-Mart, worries that Google may be able to find even lower prices. (It is important to remember here that Wal-Mart is single-handedly responsible for almost 2% of China's GDP.) eBay is providing the world with real-time prices for the value of anything it sells, used or new. Amazon is enabling the used-book market, which means that the profitability of a book is now truncated by the speed with which old inventory comes back into the market. It is no wonder that booksellers are now willing to sell excerpts from books on Amazon, for they must now look to the convenience of letting readers download a book, or a section of one. If you make people wait for the physical artifact, they might as well buy used.
Lastly, the natural extension of this argument is that value creation is no longer about the product (or service) -- it is about personalization. This is forward-looking, for sure, from today's reality. But it is important to realize that when all these factors make it easier for people to find comparables, to discover prices, and to ignore your marketing efforts and listen instead to what other customers are saying or what Google points them to, you want your product or service offering to be so well tailored to their preferences and needs that they are willing to buy from you, and to buy quickly. Most organizations are working hard to create new consumer benefit, but you don't want all of that benefit to flow to the end customer as consumer surplus. You want to extract some of it in higher margins.
What's a company to do?
First, organizations must realize that this is going on. Second, they need to analyze what the new marketing remix means for them. Third, they need to build a plan to create enough experimentation capacity in their organizations so they can figure out what to do about these new marketing realities. Lastly, they need a clear plan to realize the value they find in those experiments. Given the dematuring of the marketing mix, now is the time to start!

Friday, April 21, 2006

Steve Jobs Stanford Commencement Speech 2005

What a touching speech! Click here

[Script]

I am honored to be with you today at your commencement from one of the finest universities in the world. I never graduated from college. Truth be told, this is the closest I've ever gotten to a college graduation. Today I want to tell you three stories from my life. That's it. No big deal. Just three stories.

The first story is about connecting the dots. I dropped out of Reed College after the first 6 months, but then stayed around as a drop-in for another 18 months or so before I really quit. So why did I drop out? It started before I was born. My biological mother was a young, unwed college graduate student, and she decided to put me up for adoption. She felt very strongly that I should be adopted by college graduates, so everything was all set for me to be adopted at birth by a lawyer and his wife. Except that when I popped out they decided at the last minute that they really wanted a girl. So my parents, who were on a waiting list, got a call in the middle of the night asking: "We have an unexpected baby boy; do you want him?" They said: "Of course." My biological mother later found out that my mother had never graduated from college and that my father had never graduated from high school. She refused to sign the final adoption papers. She only relented a few months later when my parents promised that I would someday go to college. And 17 years later I did go to college. But I naively chose a college that was almost as expensive as Stanford, and all of my working-class parents' savings were being spent on my college tuition. After six months, I couldn't see the value in it. I had no idea what I wanted to do with my life and no idea how college was going to help me figure it out. And here I was spending all of the money my parents had saved their entire life. So I decided to drop out and trust that it would all work out OK. It was pretty scary at the time, but looking back it was one of the best decisions I ever made. The minute I dropped out I could stop taking the required classes that didn't interest me, and begin dropping in on the ones that looked interesting. It wasn't all romantic. I didn't have a dorm room, so I slept on the floor in friends' rooms, I returned Coke bottles for the 5¢ deposits to buy food with, and I would walk the 7 miles across town every Sunday night to get one good meal a week at the Hare Krishna temple. I loved it. And much of what I stumbled into by following my curiosity and intuition turned out to be priceless later on. Let me give you one example: Reed College at that time offered perhaps the best calligraphy instruction in the country. Throughout the campus every poster, every label on every drawer, was beautifully hand calligraphed. Because I had dropped out and didn't have to take the normal classes, I decided to take a calligraphy class to learn how to do this. I learned about serif and sans serif typefaces, about varying the amount of space between different letter combinations, about what makes great typography great. It was beautiful, historical, artistically subtle in a way that science can't capture, and I found it fascinating. None of this had even a hope of any practical application in my life.
But ten years later, when we were designing the first Macintosh computer, it all came back to me. And we designed it all into the Mac. It was the first computer with beautiful typography. If I had never dropped in on that single course in college, the Mac would have never had multiple typefaces or proportionally spaced fonts. And since Windows just copied the Mac, it's likely that no personal computer would have them. If I had never dropped out, I would have never dropped in on this calligraphy class, and personal computers might not have the wonderful typography that they do. Of course it was impossible to connect the dots looking forward when I was in college. But it was very, very clear looking backwards ten years later. Again, you can't connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something - your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.
My second story is about love and loss.
I was lucky I found what I loved to do early in life.
Woz and I started Apple in my parents' garage when I was 20. We worked hard, and in 10 years Apple had grown from just the two of us in a garage into a $2 billion company with over 4000 employees. We had just released our finest creation - the Macintosh - a year earlier, and I had just turned 30. And then I got fired.
How can you get fired from a company you started? Well, as Apple grew we hired someone who I thought was very talented to run the company with me,
and for the first year or so things went well.

But then our visions of the future began to diverge and eventually we had a falling out.
When we did, our Board of Directors sided with him. So at 30 I was out. And very publicly out.
What had been the focus of my entire adult life was gone, and it was devastating.
I really didn't know what to do for a few months.
I felt that I had let the previous generation of entrepreneurs down - that I had dropped the baton as it was being passed to me.
I met with David Packard and Bob Noyce and tried to apologize for screwing up so badly.
I was a very public failure, and I even thought about running away from the valley.

But something slowly began to dawn on me. I still loved what I did. The turn of events at Apple had not changed that one bit.
I had been rejected, but I was still in love. And so I decided to start over.
I didn't see it then, but it turned out that getting fired from Apple was the best thing that could have ever happened to me.
The heaviness of being successful was replaced by the lightness of being a beginner again, less sure about everything.
It freed me to enter one of the most creative periods of my life.
During the next five years, I started a company named NeXT, another company named Pixar, and fell in love with an amazing woman who would become my wife.

Pixar went on to create the world's first computer-animated feature film, Toy Story, and is now the most successful animation studio in the world. In a remarkable turn of events, Apple bought NeXT, I returned to Apple, and the technology we developed at NeXT is at the heart of Apple's current renaissance. And Laurene and I have a wonderful family together.
I'm pretty sure none of this would have happened if I hadn't been fired from Apple. It was awful tasting medicine, but I guess the patient needed it. Sometimes life hits you in the head with a brick. Don't lose faith.
I'm convinced that the only thing that kept me going was that I loved what I did.
You've got to find what you love. And that is as true for your work as it is for your lovers.
Your work is going to fill a large part of your life,
and the only way to be truly satisfied is to do what you believe is great work.
And the only way to do great work is to love what you do.
If you haven't found it yet, keep looking. Don't settle. As with all matters of the heart, you'll know when you find it.
And, like any great relationship, it just gets better and better as the years roll on.
So keep looking until you find it. Don't settle.
My third story is about death.
When I was 17, I read a quote that went something like:
"If you live each day as if it was your last, someday you'll most certainly be right."
It made an impression on me, and since then, for the past 33 years,
I have looked in the mirror every morning and asked myself:
"If today were the last day of my life, would I want to do what I am about to do today?"
And whenever the answer has been "No" for too many days in a row, I know I need to change something.
Remembering that I'll be dead soon is the most important tool I've ever encountered to help me make the big choices in life.
Because almost everything -
all external expectations, all pride, all fear of embarrassment or failure -
these things just fall away in the face of death, leaving only what is truly important. Remembering that you are going to die is the best way I know to avoid the trap of thinking you have something to lose. You are already naked. There is no reason not to follow your heart. About a year ago I was diagnosed with cancer. I had a scan at 7:30 in the morning, and it clearly showed a tumor on my pancreas. I didn't even know what a pancreas was. The doctors told me this was almost certainly a type of cancer that is incurable, and that I should expect to live no longer than three to six months. My doctor advised me to go home and get my affairs in order, which is doctor's code for prepare to die. It means to try to tell your kids everything you thought you'd have the next 10 years to tell them in just a few months. It means to make sure everything is buttoned up so that it will be as easy as possible for your family. It means to say your goodbyes. I lived with that diagnosis all day.

Later that evening I had a biopsy, where they stuck an endoscope down my throat, through my stomach and into my intestines, put a needle into my pancreas and got a few cells from the tumor. I was sedated, but my wife, who was there, told me that when they viewed the cells under a microscope the doctors started crying because it turned out to be a very rare form of pancreatic cancer that is curable with surgery. I had the surgery and I'm fine now. This was the closest I've been to facing death, and I hope it's the closest I get for a few more decades. Having lived through it, I can now say this to you with a bit more certainty than when death was a useful but purely intellectual concept: No one wants to die. Even people who want to go to heaven don't want to die to get there.
And yet death is the destination we all share. No one has ever escaped it. And that is as it should be, because Death is very likely the single best invention of Life. It is Life's change agent. It clears out the old to make way for the new. Right now the new is you, but someday not too long from now, you will gradually become the old and be cleared away. Sorry to be so dramatic, but it is quite true. Your time is limited, so don't waste it living someone else's life. Don't be trapped by dogma - which is living with the results of other people's thinking. Don't let the noise of others' opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition. They somehow already know what you truly want to become. Everything else is secondary.

When I was young, there was an amazing publication called The Whole Earth Catalog, which was one of the bibles of my generation. It was created by a fellow named Stewart Brand not far from here in Menlo Park, and he brought it to life with his poetic touch. This was in the late 1960s, before personal computers and desktop publishing, so it was all made with typewriters, scissors, and Polaroid cameras. It was sort of like Google in paperback form, 35 years before Google came along: it was idealistic, and overflowing with neat tools and great notions. Stewart and his team put out several issues of The Whole Earth Catalog, and then when it had run its course, they put out a final issue. It was the mid-1970s, and I was your age. On the back cover of their final issue was a photograph of an early morning country road, the kind you might find yourself hitchhiking on if you were so adventurous. Beneath it were the words: "Stay Hungry. Stay Foolish." It was their farewell message as they signed off. Stay Hungry. Stay Foolish. And I have always wished that for myself. And now, as you graduate to begin anew, I wish that for you. Stay Hungry. Stay Foolish. Thank you all very much.

The New iPod [from YouTube]

