{bio,medical} informatics


Tuesday, November 07, 2000


Wired News: Virtual Cells May Aid Drug Tests
"Pharmaceutical companies want more drugs and they want them cheaper and faster. After all, that was at least part of the promise of the Human Genome Project."

"Bioinformatics -- the mining of biological data using information technology -- is the buzzword du jour in biotech. Bioinformatics companies, however, are using the brute force of old technologies to crunch their data.

"If researchers are going to extract drug discoveries out of GenBank, the Human Genome Project's free database, and the genome map created by Celera -- without getting buried in information -- biotech will need revolutionary, eureka-style discoveries.

Jeremy Levin, CEO of Physiome Sciences in Princeton, New Jersey, is shouting "Eureka!" and some experts say he's worth a listen."
redux [07.17.00]
Physiome.com: Physiome Sciences Gives Biological Computer Models a Single New Language
"...the new Cell Markup Language, or CellML, will enhance and facilitate the exchange and validation of information among laboratories with a speed and accuracy not previously possible. CellML is an Extensible Markup Language (XML) application that provides a single means of integrating biological models, experimental data and text documents in a platform-independent and web-accessible way.“

"We are organizing a global network of academic centers to aid us in this important effort," said Tom Colatsky, PhD, Executive Vice President and Chief Scientific Officer. "Computer-based models are an important means of integrating gene and protein data to understand cell and organ function. Having a common language to describe these data will speed model development and enable researchers to access the massive amounts of information pouring out of biomedical laboratories worldwide."

Physiome Sciences, Inc., will develop and maintain a website as the primary source of information about CellML and its development. The website will also provide access to cell models in the public domain that can be downloaded, run, modified and updated. Researchers will be invited to learn about CellML and to use a wide range of computer models in their research."
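The press release describes CellML only in outline, so as a rough, hypothetical sketch of the kind of exchange it envisions, the following Python snippet reads a tiny CellML-flavoured model document with the standard library's XML parser. The element and attribute names (model, component, variable, units, initial_value) are assumptions for illustration, not the published schema.

    # A minimal sketch of reading a CellML-style model document with Python's
    # standard library. Element and attribute names are illustrative only;
    # the press release does not show the actual schema.
    import xml.etree.ElementTree as ET

    model_xml = """
    <model name="hodgkin_huxley_squid_axon">
      <component name="membrane">
        <variable name="V" units="millivolt" initial_value="-75.0"/>
        <variable name="Cm" units="microfarad_per_cm2" initial_value="1.0"/>
      </component>
      <component name="sodium_channel">
        <variable name="g_Na" units="millisiemens_per_cm2" initial_value="120.0"/>
      </component>
    </model>
    """

    model = ET.fromstring(model_xml)
    print("model:", model.get("name"))
    for component in model.findall("component"):
        print(" component:", component.get("name"))
        for variable in component.findall("variable"):
            print("   variable: {name} = {initial_value} ({units})".format(**variable.attrib))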

CellML.org: Future Directions of CellML
"The CellMLTM language is under active development. In the near future, the CellML development team will consider how to implement the following:
  • Ontology — CellML will provide the capability to fully define a rule set (an ontology) that specifies how different classes of components may interact. For instance, the ontology may specify that components of type "channel" may not be put into a geometric containment relationship with a component of type "cytoplasm" (i.e., channels belong in membranes, not in the cytosol); a toy version of such a rule check is sketched after this list. These rules could be used by CellML processing software to help modellers build biologically reasonable models. We are looking at existing XML standards for storing ontology information, such as the Ontology Inference Layer (OIL).
  • Spatially varying variables — CellML will include spatially varying variables. We will probably implement this with FieldML.
  • Integration of cellular models with organ and tissue models — CellML and AnatML will be more tightly coupled.

Longer-term development goals include:

  • Biological metadata — CellML will allow modellers to incorporate more detailed information about the biological entities (i.e., cells, proteins, signalling pathways) that their models represent. We intend to accomplish this by allowing the inclusion of other markup languages in a CellML document, much as we currently support math by allowing the inclusion of MathML.
  • Experimental and simulation data — CellML will allow modellers to associate experimental data used to validate their models with the model documents. Modellers may also want to associate simulation data produced by running the model using a given parameter set with the model document. The inclusion of this information will be made possible by leveraging existing languages designed specifically for data storage."
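As a toy illustration of the rule set described in the Ontology item above, here is a hedged Python sketch of a containment check that would flag a "channel" component placed inside a "cytoplasm" component. The component types and rule table are invented for the example; this is not OIL or CellML syntax.

    # Hypothetical containment rules of the kind a CellML ontology might define.
    # Types and rules are invented for illustration; not OIL or CellML syntax.
    ALLOWED_CONTAINERS = {
        "channel": {"membrane"},      # channels belong in membranes...
        "pump": {"membrane"},
        "organelle": {"cytoplasm"},
    }

    def check_containment(child_type, parent_type):
        """Return True if a component of child_type may sit inside parent_type."""
        allowed = ALLOWED_CONTAINERS.get(child_type)
        if allowed is None:
            return True  # no rule recorded: accept by default
        return parent_type in allowed

    # A modelling tool could use such rules to warn about biologically
    # unreasonable geometric containment relationships:
    print(check_containment("channel", "membrane"))   # True
    print(check_containment("channel", "cytoplasm"))  # False -- not in the cytosol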
redux [05.10.00]
The XML Cover Pages: XML and Semantic Transparency
"We may rehearse this fundamental axiom of descriptive markup in terms of a classical SGML polemic: the doubly-delimited information objects in an SGML/XML document are described by markup in a meaningful, self-documenting way through the use of names which are carefully selected by domain experts for element type names, attribute names, and attribute values. This is true of XML in 1998, was true of SGML in 1986, and was true of Brian Reid's Scribe system in 1976. However, of itself, descriptive markup proves to be of limited relevance as a mechanism to enable information interchange at the level of the machine.

As enchanting as it is to contemplate the apparent 'semantic' clarity, flexibility, and extensibility of XML vis-à-vis HTML (e.g., how wonderfully perspicuous XML <bookTitle> seems when compared to HTML <i>), we must reckon with the cold fact that XML does not of itself enable blind interchange or information reuse. XML may help humans predict what information might lie "between the tags" in the case of <trunk> </trunk>, but XML can only help. For an XML processor, <trunk> and <i> and <bookTitle> are all equally (and totally) meaningless. Yes, meaningless.

Just like its parent metalanguage (SGML), XML has no formal mechanism to support the declaration of semantic integrity constraints, and XML processors have no means of validating object semantics even if these are declared informally in an XML DTD. XML processors will have no inherent understanding of document object semantics because XML (meta-)markup languages have no predefined application-level processing semantics. XML thus formally governs syntax only - not semantics."
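The claim that <trunk>, <i>, and <bookTitle> are equally meaningless to an XML processor is easy to make concrete. The short Python sketch below (a generic illustration, not code from the essay) parses a descriptive tag and a presentational tag with the standard library; the parser verifies well-formedness and reports tag names and character data, and attaches no meaning to either.

    # To an XML parser, descriptive and presentational tags are interchangeable:
    # both documents below are well-formed, and neither tag "means" anything.
    import xml.etree.ElementTree as ET

    for fragment in ['<bookTitle>Moby-Dick</bookTitle>', '<i>Moby-Dick</i>']:
        element = ET.fromstring(fragment)
        # The parser only reports structure: a tag name and character data.
        print(element.tag, "->", element.text)

    # Any shared "meaning" has to come from an agreement outside XML itself
    # (a schema plus application code, an ontology, or a human reader).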

redux [10.13.00]
Scientific American: Hooking up Biologists: Consortia are forming to sort out a common cyberlanguage for life science
"Imagine that your co-worker in the next cubicle has some information you need for a report that's due soon. She e-mails it to you, but the data are from a spreadsheet program, and all you have is a word processor, so there's no possibility of your cutting and pasting it into your document. Instead you have to print it out and type it in all over again. That's roughly the situation facing biologists these days. Although databases of biological information abound--especially in this post-genome-sequencing era--many researchers are like sailors thirsting to death surrounded by an ocean: what they need is all around them, but it's not in a form they can readily use.

To solve the problem, various groups made up of academic scientists and researchers from biotechnology and pharmaceutical companies are coming together to try to devise computer standards for bioinformatics so that biologists can more easily share data and make the most of the glut of information resulting from the Human Genome Project. Their goal is to enable an investigator not only to float seamlessly between the enormous databases of DNA sequences and those of the three-dimensional protein structures encoded by that DNA. They also want a scientist to be able to search the databases more efficiently so that, to use an automobile metaphor, if someone typed in "Camaro," the results would include other cars as well because the system would be smart enough to know that a Camaro is another kind of car."
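A toy version of the "Camaro" example might look like the following Python sketch (my own illustration, not anything produced by the consortia): a tiny is-a hierarchy lets a search expand one specific term to its broader class and sibling terms, so a query for a single car model also returns records indexed under related terms.

    # A toy is-a hierarchy and query expansion, illustrating the "Camaro" example.
    # The terms and records are invented for illustration.
    IS_A = {
        "Camaro": "car",
        "Mustang": "car",
        "Corvette": "car",
        "car": "vehicle",
    }

    RECORDS = {
        "doc1": ["Camaro"],
        "doc2": ["Mustang"],
        "doc3": ["vehicle"],
    }

    def broader_terms(term):
        """Walk up the is-a hierarchy from a term to its ancestors."""
        while term in IS_A:
            term = IS_A[term]
            yield term

    def expand(term):
        """A query term matches itself, its ancestors, and sibling instances."""
        ancestors = set(broader_terms(term))
        siblings = {t for t, parent in IS_A.items() if parent in ancestors}
        return {term} | ancestors | siblings

    def search(term):
        wanted = expand(term)
        return [doc for doc, terms in RECORDS.items() if wanted & set(terms)]

    print(sorted(search("Camaro")))  # ['doc1', 'doc2', 'doc3']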

"Eric Neumann, a member of both the Bio-Ontologies and BioPathways consortia, is a neuroscientist who is now vice president for life science informatics at the consulting firm 3rd Millennium in Cambridge, Mass. (no relation to Millennium Pharmaceuticals). He says Extensible Markup Language (XML) is shaping up to be the standard computer language for bioinformatics."

redux [09.15.00]
The RAND Corporation: Scaffolding the New Web: Standards and Standards Policy for the Digital Economy, "The Emerging Challenge of Common Semantics"
"With XML has come a proliferation of consortia from every industry imagineable to populate structured material with standard terms (see Appendix B). By one estimate, a new industry consortium is founded every week, perhaps one in four of which can collect serious membership dues. Rising in concert are intermediary groups to provide a consistent dictionary in cyberspace, in which each consortium's words are registered and catalogued.

Having come so far with a syntactic standard, XML, will E-commerce and knowledge organization stall out in semantic confusion?"

"How are semantic standards to come about?"

SemanticWeb.org: Tutorial on Knowledge Markup Techniques
"There is an increasing demand for formalized knowledge on the Web. Several communities (e.g. in bioinformatics and educational media) are getting ready to offer semiformal or formal Web content. XML-based markup languages provide a 'universal' storage and interchange format for such Web-distributed knowledge representation. This tutorial introduces techniques for knowledge markup: we show how to map AI representations (e.g., logics and frames) to XML (incl. RDF and RDF Schema), discuss how to specify XML DTDs and RDF (Schema) descriptions for various representations, survey existing XML extensions for knowledge bases/ontologies, deal with the acquisition and processing of such representations, and detail selected applications. After the tutorial, participants will have absorbed the theoretical foundation and practical use of knowledge markup and will be able to assess XML applications and extensions for AI. Besides bringing to bear existing AI techniques for a Web-based knowledge markup scenario, the tutorial will identify new AI research directions for further developing this scenario."
redux [02.24.00]
HMS Beagle: Virtual Cures
[requires 'free' registration]
"For a brief period, supplying the data was enough. More genes meant more potential drug targets. But now the victims of the data flood are crying for help. Companies like Entelos, Inc. (Menlo Park, California) are coming to the rescue by building models that integrate all those data into a single, homeostatic, interconnected whole. The models allow researchers to run virtual drug trials to determine the best drug targets, treatment regimens, and patient populations."

Modelers feel that their time has come. "Leaders in the genomics field are all coming to this realization that model building is becoming the rate-limiting step," says Palsson. "There's a major shift taking place in the biological sciences." Math is back, he says, and "biology is going to become quantitative."
BioSpace: Virtual Drug Development: Start-ups Put Biology in Motion
"One way of animating our growing store of static information is through computer simulation. It is an area that is beginning to emerge slowly in the life sciences, with only a handful of academic and commercial players active in the area. But for a fledging discipline, there is a great variety in the scope of work being undertaken. While academic labs try to create accurate simulations of red blood cells and simple bacteria, the private companies are taking on bolder projects--simulating human organs and even human diseases in their entirety."

Science: Revealing Uncertainties in Computer Models
[summary - can be viewed for free once registered]
"Computer simulations give the impression of precision, but they are founded on a raft of assumptions, simplifications, and outright errors. New tools are needed, scientists say, to quantify the uncertainties inherent in calculations and to evaluate the validity of the models. But making uncertainties evident is a tough challenge, as evidenced by several recent workshops.”



[ rhetoric ]

Bioinformatics will be at the core of biology in the 21st century. In fields ranging from structural biology to genomics to biomedical imaging, ready access to data and analytical tools is fundamentally changing the way investigators in the life sciences conduct research and approach problems. Complex, computationally intensive biological problems are now being addressed and promise to significantly advance our understanding of biology and medicine. No biological discipline will be unaffected by these technological breakthroughs.

BIOINFORMATICS IN THE 21st CENTURY
