SAT Reading Supplementary Material



  Since the start of the year, a team of researchers at Carnegie Mellon University (supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo) has been fine-tuning a computer system that is trying to master semantics by learning more like a human. Its beating hardware heart is a sleek, silver-gray computer, calculating 24 hours a day, seven days a week, that resides in a basement computer center at the university, in Pittsburgh. The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself.

  "For all the advances in computer science, we still don't have a computer that can learn as humans do, cumulatively, over the long term," said the team's leader, Tom M. Mitchell, a computer scientist and chairman of the machine learning department.

  The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of Web pages for text patterns that it uses to learn facts, 390,000 to date, with an estimated accuracy of 87 percent. These facts are grouped into semantic categories: cities, companies, sports teams, actors, universities, plants and 274 others. The category facts are things like "San Francisco is a city" and "sunflower is a plant."

  NELL also learns facts that are relations between members of two categories. For example, "Peyton Manning is a football player." "The Indianapolis Colts is a football team." By scanning text patterns, NELL can infer with a high probability that Peyton Manning plays for the Indianapolis Colts, even if it has never read that Mr. Manning plays for the Colts. "Plays for" is a relation, and there are 280 kinds of relations. The number of categories and relations has more than doubled since earlier this year, and will steadily expand.
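The extraction-and-inference step described above can be sketched in miniature. This is a toy illustration of pattern-based fact extraction, not NELL's actual code: the pattern, sentences and category seeds below are all invented for the example, and the real system is far larger and probabilistic.

```python
# Toy sketch of pattern-based relation extraction, loosely modeled on the
# article's description of NELL. The pattern and data are illustrative only.
import re

# Seed category knowledge (the kind the researchers "primed" the system with).
categories = {
    "football player": {"Peyton Manning"},
    "football team": {"Indianapolis Colts"},
}

# A text pattern that, when matched, suggests a "plays for" relation:
# capitalized name, a telltale verb, then a capitalized team name.
PLAYS_FOR = re.compile(
    r"(?P<player>[A-Z]\w+(?: [A-Z]\w+)*) (?:led|quarterbacked) the "
    r"(?P<team>[A-Z]\w+(?: [A-Z]\w+)*)"
)

def extract_plays_for(sentence):
    """Infer a 'plays for' fact only if both entities have the right categories."""
    m = PLAYS_FOR.search(sentence)
    if not m:
        return None
    player, team = m.group("player"), m.group("team")
    if player in categories["football player"] and team in categories["football team"]:
        return (player, "plays for", team)
    return None

fact = extract_plays_for("Peyton Manning led the Indianapolis Colts to victory.")
print(fact)  # ('Peyton Manning', 'plays for', 'Indianapolis Colts')
```

Note that the sentence never states "plays for" directly; the relation is inferred from a pattern plus the category memberships, which is the crux of the approach the article describes.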

  The learned facts are continuously added to NELL's growing database, which the researchers call a "knowledge base." A larger pool of facts, Dr. Mitchell says, will help refine NELL's learning algorithms so that it finds facts on the Web more accurately and more efficiently over time.

  NELL is one project in a widening field of research and investment aimed at enabling computers to better understand the meaning of language. Many of these efforts tap the Web as a rich trove of text to assemble structured ontologies (formal descriptions of concepts and relationships) to help computers mimic human understanding. The ideal has been discussed for years, and more than a decade ago Sir Tim Berners-Lee, who invented the underlying software for the World Wide Web, sketched his vision of a "semantic Web."

  Today, ever-faster computers, an explosion of Web data and improved software techniques are opening the door to rapid progress. Scientists at universities, government labs, Google, Microsoft, I.B.M. and elsewhere are pursuing breakthroughs, along somewhat different paths.

  For example, I.B.M.'s question-answering machine, Watson, shows remarkable semantic understanding in fields like history, literature and sports as it plays the quiz show "Jeopardy!" Google Squared, a research project at the Internet search giant, demonstrates ample grasp of semantic categories as it finds and presents information from around the Web on search topics like "U.S. presidents" and "cheeses."

  Still, artificial intelligence experts agree that the Carnegie Mellon approach is innovative. Many semantic learning systems, they note, are more passive learners, largely hand-crafted by human programmers, while NELL is highly automated. "What's exciting and significant about it is the continuous learning, as if NELL is exercising curiosity on its own, with little human help," said Oren Etzioni, a computer scientist at the University of Washington, who leads a project called TextRunner, which reads the Web to extract facts.

  Computers that understand language, experts say, promise a big payoff someday. The potential applications range from smarter search to virtual personal assistants that can reply to questions in specific disciplines or activities like health, education, travel and shopping.

  "The technology is really maturing, and will increasingly be used to gain understanding," said Alfred Spector, vice president of research for Google. "We're on the verge now in this semantic world."

  With NELL, the researchers built a base of knowledge, seeding each kind of category or relation with 10 to 15 examples that are true. In the category for emotions, for example: "Anger is an emotion." "Bliss is an emotion." And about a dozen more.
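The seeded knowledge base described above, growing as new facts clear a confidence bar, can be sketched as follows. The threshold, confidence scores and the "serenity"/"Tuesday" candidates are invented for illustration; only the seed facts and the rough 87 percent figure come from the article.

```python
# Toy sketch of a seeded, growing knowledge base of the kind the article
# describes: each category starts with a handful of trusted examples, and
# newly learned facts are added only if their estimated confidence is high.
knowledge_base = {
    "emotion": {"anger": 1.0, "bliss": 1.0},  # seed facts, taken as true
    "city": {"San Francisco": 1.0},
    "plant": {"sunflower": 1.0},
}

def learn_fact(category, instance, confidence):
    """Add a candidate fact to the knowledge base if it clears a threshold."""
    if confidence >= 0.87:  # illustrative bar, echoing the ~87% accuracy cited
        knowledge_base.setdefault(category, {})[instance] = confidence
        return True
    return False

learn_fact("emotion", "serenity", 0.92)  # accepted: high confidence
learn_fact("emotion", "Tuesday", 0.40)   # rejected: too uncertain
print(sorted(knowledge_base["emotion"]))  # prints ['anger', 'bliss', 'serenity']
```

Keeping a confidence score with every learned fact, rather than a bare list, is what lets such a system refine its own beliefs over time, as Dr. Mitchell suggests a larger pool of facts will do.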

  

