snowdeal logo

archives

{bio,medical} informatics


Wednesday, September 19, 2001


find related articles. powered by google. GenomeWeb Paper Calls for Broader Use of AI-Based Methods in Bioinformatics

"Future understanding of genomic data may be severely limited unless bioinformaticists gain a better understanding of knowledge representation, according to Peter Karp, director of SRI International's Bioinformatics Research Group."

"As biological research grows more and more dependent on information technology to make sense of increasing amounts of genomic data, Karp wrote, it will be crucial for bioinformaticists to keep up with new developments in symbolic computing. "The genome revolution is increasing the need for pathway databases in the biological sciences, and similar developments will occur in other sciences. However, effective implementation of this paradigm is hampered because most biologists (and most other scientists) receive essentially no education in databases or knowledge representation."

"According to Karp, equipping scientists with a better understanding of knowledge representation concepts--such as data models, ontologies, database query languages, logical inference, database design, and formal grammars--will be necessary in order to carry the field forward."
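
Karp's own pathway databases are built on frame-based knowledge representation, which is easy to illustrate with a toy example. Below is a minimal Python sketch of frames with slots and is-a inheritance; the frame names, slots, and reaction data are illustrative assumptions, not drawn from any actual Pathway Tools schema.

    # Frame-style knowledge representation: each frame is a bundle of
    # named slots, and frames inherit slot values through "is-a" links.
    # All identifiers and data below are invented for illustration.
    frames = {
        "Reaction": {
            "reversible": False,  # default slot value, inherited by instances
        },
        "RXN-1": {
            "is-a": "Reaction",
            "substrates": ["chorismate", "L-glutamine"],
            "products": ["anthranilate", "L-glutamate", "pyruvate"],
            "enzyme": "anthranilate synthase",
        },
    }

    def slot(frame_id, slot_name):
        """Look up a slot, falling back to the is-a parent frame."""
        frame = frames.get(frame_id, {})
        if slot_name in frame:
            return frame[slot_name]
        parent = frame.get("is-a")
        return slot(parent, slot_name) if parent in frames else None

    print(slot("RXN-1", "enzyme"))      # found directly: anthranilate synthase
    print(slot("RXN-1", "reversible"))  # inherited from Reaction: False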

redux [08.19.01]
find related articles. powered by google. Stanford Medical Informatics Preprint Archive Management of Data, Knowledge, and Metadata on the Semantic Web: Experience with a Pharmacogenetics Knowledge Base

"Biomedical researchers are decoding the human genome with astonishing speed, but the clinical significance of the massive volumes of data collected remains largely undiscovered. Progress requires communication and data sharing among scientists. These data may be in the form of (1) raw data, derived data, and inferences that result from computational analyses, or (2) text documents published by experts who present their conclusions in natural language. The World Wide Web provides a valuable infrastructure for enabling researchers to share the rapidly growing knowledge about biology and medicine, and a fully functional Semantic Web is necessary to support data submission and retrieval, the sharing of knowledge, and interoperation of related resources."

redux [05.10.00]
find related articles. powered by google. The XML Cover Pages XML and Semantic Transparency

"We may rehearse this fundamental axiom of descriptive markup in terms of a classical SGML polemic: the doubly-delimited information objects in an SGML/XML document are described by markup in a meaningful, self-documenting way through the use of names which are carefully selected by domain experts for element type names, attribute names, and attribute values. This is true of XML in 1998, was true of SGML in 1986, and was true of Brian Reid's Scribe system in 1976. However, of itself, descriptive markup proves to be of limited relevance as a mechanism to enable information interchange at the level of the machine.

As enchanting as it is to contemplate the apparent 'semantic' clarity, flexibility, and extensibility of XML vis-à-vis HTML (e.g., how wonderfully perspicuous XML <bookTitle> seems when compared to HTML <i>), we must reckon with the cold fact that XML does not of itself enable blind interchange or information reuse. XML may help humans predict what information might lie "between the tags" in the case of <trunk> </trunk>, but XML can only help. For an XML processor, <trunk> and <i> and <booktitle> are all equally (and totally) meaningless. Yes, meaningless.

Just like its parent metalanguage (SGML), XML has no formal mechanism to support the declaration of semantic integrity constraints, and XML processors have no means of validating object semantics even if these are declared informally in an XML DTD. XML processors will have no inherent understanding of document object semantics because XML (meta-)markup languages have no predefined application-level processing semantics. XML thus formally governs syntax only - not semantics."
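
The claim about processors is easy to verify directly: in the Python sketch below, an XML parser treats <bookTitle> and <i> identically. Each yields a tag string and text content, and nothing more; no semantics attach to either name.

    # To the parser, tag names are opaque strings. Nothing marks one
    # element as "a book title" and the other as "italic rendering."
    import xml.etree.ElementTree as ET

    doc1 = ET.fromstring("<bookTitle>Moby-Dick</bookTitle>")
    doc2 = ET.fromstring("<i>Moby-Dick</i>")

    for elem in (doc1, doc2):
        print(elem.tag, "->", elem.text)
    # bookTitle -> Moby-Dick
    # i -> Moby-Dick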

redux [05.10.00]
find related articles. powered by google. The Rand Corporation : Scaffolding the New Web: Standards and Standards Policy for the Digital Economy The Emerging Challenge of Common Semantics

"With XML has come a proliferation of consortia from every industry imagineable to populate structured material with standard terms (see Appendix B). By one estimate, a new industry consortium is founded every week, perhaps one in four of which can collect serious membership dues. Rising in concert are intermediary groups to provide a consistent dictionary in cyberspace, in which each consortium's words are registered and catalogued.

Having come so far with a syntactic standard, XML, will E-commerce and knowledge organization stall out in semantic confusion?"

"How are semantic standards to come about?"

find related articles. powered by google. SemanticWeb.Org Tutorial on Knowledge Markup Techniques

"There is an increasing demand for formalized knowledge on the Web. Several communities (e.g. in bioinformatics and educational media) are getting ready to offer semiformal or formal Web content. XML-based markup languages provide a 'universal' storage and interchange format for such Web-distributed knowledge representation. This tutorial introduces techniques for knowledge markup: we show how to map AI representations (e.g., logics and frames) to XML (incl. RDF and RDF Schema), discuss how to specify XML DTDs and RDF (Schema) descriptions for various representations, survey existing XML extensions for knowledge bases/ontologies, deal with the acquisition and processing of such representations, and detail selected applications. After the tutorial, participants will have absorbed the theoretical foundation and practical use of knowledge markup and will be able to assess XML applications and extensions for AI. Besides bringing to bear existing AI techniques for a Web-based knowledge markup scenario, the tutorial will identify new AI research directions for further developing this scenario."

redux [05.01.00]
find related articles. powered by google. Stanford Medical Informatics Preprint Archives Ontology-Oriented Design and Programming

"In the construction of both conventional software and intelligent systems, developers continue to seek higher level abstractions that both can aid in conceptual modeling and can assist in implementation and maintenance. In recent years, the artificial intelligence community has placed considerable attention on the notion of explicit ontologies -- shared conceptualizations of application areas that define the salient concepts and relationships among concepts. Such ontologies, when joined with well defined problem-solving methods, provide convenient formalisms for modeling and for implementing solutions to application tasks. This chapter reviews the motivation for seeking such high-level abstractions, and summarizes recent successes in building systems from reusable domain ontologies and problem-solving methods. As the environment for software execution moves from individual workstations to the Internet at large, casting new software applications in terms of these high-level abstractions may make complex systems both easier to build and easier to maintain."

redux [02.07.01]
find related articles. powered by google. BioLisp.Org Intelligent applications in BioComputing

"BioLisp.org is a public resource supporting scientists who use Lisp to develop intelligent applications in the biological sciences. We collect and disseminate Lisp biocomputing code, and gather pointers to Lisp and other Intelligent BioComputing methods. Please contribute, or make suggestions by writing the editor."



[ rhetoric ]

Bioinformatics will be at the core of biology in the 21st century. In fields ranging from structural biology to genomics to biomedical imaging, ready access to data and analytical tools is fundamentally changing the way investigators in the life sciences conduct research and approach problems. Complex, computationally intensive biological problems are now being addressed and promise to significantly advance our understanding of biology and medicine. No biological discipline will be unaffected by these technological breakthroughs.

BIOINFORMATICS IN THE 21st CENTURY

[ search ]

[ outbound ]

biospace / genomeweb / bio-it world / scitechdaily / biomedcentral / the panda's thumb /

bioinformatics.org / nodalpoint / flags and lollipops / on genetics / a bioinformatics blog / andrew dalke / the struggling grad student / in the pipeline / gene expression / free association / pharyngula / the personal genome / genetics and public health blog / the medical informatics weblog / linuxmednews / nanodot / complexity digest /

eyeforpharma /

nsu / nyt science / bbc scitech / newshub / biology news net /

informatics review / stanford / bmj info in practice /

[ schwag ]

look snazzy and support the site at the same time by buying some snowdeal schwag !

[ et cetera ]

valid xhtml 1.0?

This site designed by
Eric C. Snowdeal III .
© 2000-2005