
{bio,medical} informatics


Sunday, August 19, 2001


find related articles. powered by google. Stanford Medical Informatics Preprint Archive : Management of Data, Knowledge, and Metadata on the Semantic Web: Experience with a Pharmacogenetics Knowledge Base

"Biomedical researchers are decoding the human genome with astonishing speed, but the clinical significance of the massive volumes of data collected remains largely undiscovered. Progress requires communication and data sharing among scientists. These data may be in the form of (1) raw data, derived data, and inferences that result from computational analyses, or (2) text documents published by experts who present their conclusions in natural language. The World Wide Web provides a valuable infrastructure for enabling researchers to share the rapidly growing knowledge about biology and medicine, and a fully functional Semantic Web is necessary to support data submission and retrieval, the sharing of knowledge, and interoperation of related resources."

find related articles. powered by google. The Second International Workshop on the Semantic Web : Proceedings

"The "Semantic Web", a term coined by Tim Berners-Lee, is used to denote the next evolution step of the Web. Associating meaning with content or establishing a layer of machine understandable data would allow automated agents, sophisticated search engines and interoperable services, will enable higher degree of automation and more intelligent applications. The ultimate goal of the Semantic Web is to allow machines the sharing and exploitation of knowledge in the Web way, i.e. without central authority, with few basic rules, in a scalable, adaptable, extensible manner. With RDF as the basic platform for the Semantic Web, a multitude of tools, methods and systems have just appeared on the horizon. The goal of the workshop is to share experiences about these systems, exchange ideas about improvements of existing tools and creation of new systems, principles and applications. Also an important goal is to develop a cooperation model among Semantic Web developers, and to develop a common vision about the future developments."

redux [05.10.00]
find related articles. powered by google. The XML Cover Pages : XML and Semantic Transparency

"We may rehearse this fundamental axiom of descriptive markup in terms of a classical SGML polemic: the doubly-delimited information objects in an SGML/XML document are described by markup in a meaningful, self-documenting way through the use of names which are carefully selected by domain experts for element type names, attribute names, and attribute values. This is true of XML in 1998, was true of SGML in 1986, and was true of Brian Reid's Scribe system in 1976. However, of itself, descriptive markup proves to be of limited relevance as a mechanism to enable information interchange at the level of the machine.

As enchanting as it is to contemplate the apparent 'semantic' clarity, flexibility, and extensibility of XML vis-à-vis HTML (e.g., how wonderfully perspicuous XML <bookTitle> seems when compared to HTML <i>), we must reckon with the cold fact that XML does not of itself enable blind interchange or information reuse. XML may help humans predict what information might lie "between the tags" in the case of <trunk> </trunk>, but XML can only help. For an XML processor, <trunk> and <i> and <booktitle> are all equally (and totally) meaningless. Yes, meaningless.

Just like its parent metalanguage (SGML), XML has no formal mechanism to support the declaration of semantic integrity constraints, and XML processors have no means of validating object semantics even if these are declared informally in an XML DTD. XML processors will have no inherent understanding of document object semantics because XML (meta-)markup languages have no predefined application-level processing semantics. XML thus formally governs syntax only - not semantics."
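The point is easy to demonstrate. A small sketch in Python, using the standard-library XML parser, shows that a "descriptive" element name and a "presentational" one are equally opaque to the processor; the element names follow the quote's own examples.

# Sketch: to an XML processor, <bookTitle> and <i> are just strings.
import xml.etree.ElementTree as ET

descriptive = ET.fromstring("<bookTitle>Moby-Dick</bookTitle>")
presentational = ET.fromstring("<i>Moby-Dick</i>")

# Both parses yield the same structure: an uninterpreted tag name plus
# character data. Nothing marks one as more "meaningful" than the other.
for element in (descriptive, presentational):
    print(element.tag, element.text)
# prints:
#   bookTitle Moby-Dick
#   i Moby-Dick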

redux [05.10.00]
find related articles. powered by google. The Rand Corporation : Scaffolding the New Web: Standards and Standards Policy for the Digital Economy : The Emerging Challenge of Common Semantics

"With XML has come a proliferation of consortia from every industry imagineable to populate structured material with standard terms (see Appendix B). By one estimate, a new industry consortium is founded every week, perhaps one in four of which can collect serious membership dues. Rising in concert are intermediary groups to provide a consistent dictionary in cyberspace, in which each consortium's words are registered and catalogued.

Having come so far with a syntactic standard, XML, will E-commerce and knowledge organization stall out in semantic confusion?"

"How are semantic standards to come about?"

find related articles. powered by google. SemanticWeb.Org : Tutorial on Knowledge Markup Techniques

"There is an increasing demand for formalized knowledge on the Web. Several communities (e.g. in bioinformatics and educational media) are getting ready to offer semiformal or formal Web content. XML-based markup languages provide a 'universal' storage and interchange format for such Web-distributed knowledge representation. This tutorial introduces techniques for knowledge markup: we show how to map AI representations (e.g., logics and frames) to XML (incl. RDF and RDF Schema), discuss how to specify XML DTDs and RDF (Schema) descriptions for various representations, survey existing XML extensions for knowledge bases/ontologies, deal with the acquisition and processing of such representations, and detail selected applications. After the tutorial, participants will have absorbed the theoretical foundation and practical use of knowledge markup and will be able to assess XML applications and extensions for AI. Besides bringing to bear existing AI techniques for a Web-based knowledge markup scenario, the tutorial will identify new AI research directions for further developing this scenario."



[ rhetoric ]

Bioinformatics will be at the core of biology in the 21st century. In fields ranging from structural biology to genomics to biomedical imaging, ready access to data and analytical tools are fundamentally changing the way investigators in the life sciences conduct research and approach problems. Complex, computationally intensive biological problems are now being addressed and promise to significantly advance our understanding of biology and medicine. No biological discipline will be unaffected by these technological breakthroughs.

BIOINFORMATICS IN THE 21st CENTURY

[ search ]

[ outbound ]

biospace / genomeweb / bio-it world / scitechdaily / biomedcentral / the panda's thumb /

bioinformatics.org / nodalpoint / flags and lollipops / on genetics / a bioinformatics blog / andrew dalke / the struggling grad student / in the pipeline / gene expression / free association / pharyngula / the personal genome / genetics and public health blog / the medical informatics weblog / linuxmednews / nanodot / complexity digest /

eyeforpharma /

nsu / nyt science / bbc scitech / newshub / biology news net /

informatics review / stanford / bmj info in practice /

[ schwag ]

look snazzy and support the site at the same time by buying some snowdeal schwag!

[ et cetera ]

valid xhtml 1.0?

This site designed by Eric C. Snowdeal III.
© 2000-2005