
{bio,medical} informatics


Monday, November 06, 2000


GenomeBiology :: Accessing and distributing EMBL data using CORBA (common object request broker architecture)
"The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data."
CCP11 :: Efficient access to biological databases using CORBA
"CORBA's interface definition language (IDL) is seen by some as an adequate way to represent data to be shared in a distributed environment. So why should anyone need a data model? One needs to be aware of what objects are in the CORBA world. As seen in Figure 1, to the onlooker IDL can look remarkably like a schema definition language used with an object database. However, more detailed examination shows that it plays a very different role. IDL is used to declare types which can be used in programs written in different languages and at different sites, and data values conforming to the IDL declarations can be passed between these programs. Rather than describing long term persistent data, it is better to think of IDL as a way of declaring structs of the kind seen in C, or equivalent type definitions in an OOPL. In doing this, CORBA IDL provides for language and platform independence but, significantly, it does not provide for data independence in the way that a data model does. Thus, while CORBA IDL provides a good interface for programs, it provides nothing special for databases and must not be seen as a substitute for a proper data model."

"The relationship between CORBA and distributed databases is described by Brodie and Stonebraker (1995). They advocate that these should be viewed as complementary technologies, and that there are advantages in using software architectures which combine these. In Section 8.1.5 of their book they suggest variants of a combined architecture, ranging from a "minimal database architecture" in which distributed DBMSs support just the data management functions declared in the IDL, to a "maximal database architecture" in which an entire distributed database solution is constructed and then a bridge is built to make this accessible from a CORBA environment. We believe that a "maximal database architecture" with a semantic data model at its heart is the best way forward when data integration, rather than distributed computation, is the main goal."

Proceedings of Tenth Knowledge Acquisition for Knowledge-Based Systems Workshop :: Reuse For Knowledge-Based Systems and CORBA Components
"The observation that CORBA does not provide component semantics contrasts with research in problem-solving method ontologies: An IDL specification of a component describes the syntax for the sharable elements of that component and enables interoperation, whereas ontologies for knowledge-based systems capture some of the component's semantics. In the CORBA approach, the semantics of what a component is or does is hidden in the server-side implementation. From our perspective, a significant weakness of IDL is that the specification places only syntactic constraints on how methods are implemented.

If a component is to be reused by a community of developers, the shared IDL specification should communicate something about the semantics of the methods that are available for reuse."

"Unfortunately, specifying semantics for an arbitrary piece of software (or a problem-solving method) is an open research problem."

developerWorks :: The Tao of e-business services
"The concept of Web services is the beginning of a new service-oriented architecture in building better software applications. The change from an object-oriented system to a service-oriented one is an evolutionary idea that sublimated from the global Internet and Web system."

"The lesson we should have learned from the failure of object-based systems is that the way services are described, organized, specified by potential users, and discovered amidst the clutter of the Internet will determine the success of B2B services. That is why we reserve the term service-oriented for architectures that focus on how services are described and organized in a way that supports the dynamic discovery of appropriate services at runtime.

"The semantics of services -- what they do and what data elements they manipulate mean -- is the key issue. Business value results from B2B collaborations that do the right thing. If they do something else, the damage may be dramatic. How, then, do we trust that a service does the right thing before it is used? And how do we make that determination at Internet speeds?

In small-scale OO systems, interface compatibility usually implies semantic compatibility. That is, an object that implements the right set of messages with the right types of arguments probably does "the right thing." This is true, in part, because small-scale systems tend to be built by a small team of programmers with shared understanding of how the system operates and, in part, because small systems offer little opportunity for ambiguity. However, in large-scale OO systems, the semantics provided by a given class cannot be reliably deduced from the message interface alone. Clearly, in an Internet populated with many thousands of services offered by thousands of different companies with very different agendas, compliance with some specified message set will not be sufficient to deduce the semantics of the service."
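one way to read the "service-oriented" distinction: the registry has to carry a description of what a service does and what its data elements mean, not just its message signature, so that discovery can match on meaning at runtime. the toy Java registry below sketches the idea; all of the names are invented, and real registry standards are considerably richer.

import java.util.ArrayList;
import java.util.List;

// a toy registry in which a service is published together with a statement
// of what it does and what its data elements mean, so a client can select
// on described meaning at runtime rather than on message signature alone
class ServiceDescription {
    final String endpoint;      // where the service lives
    final String operation;     // what it claims to do, e.g. "similarity-search"
    final String inputMeaning;  // e.g. "nucleotide sequence, FASTA"
    final String outputMeaning; // e.g. "ranked alignment hits"

    ServiceDescription(String endpoint, String operation,
                       String inputMeaning, String outputMeaning) {
        this.endpoint = endpoint;
        this.operation = operation;
        this.inputMeaning = inputMeaning;
        this.outputMeaning = outputMeaning;
    }
}

class ServiceRegistry {
    private final List<ServiceDescription> entries = new ArrayList<>();

    void publish(ServiceDescription d) {
        entries.add(d);
    }

    // runtime discovery: match on the described semantics
    List<ServiceDescription> find(String operation, String inputMeaning) {
        List<ServiceDescription> hits = new ArrayList<>();
        for (ServiceDescription d : entries) {
            if (d.operation.equals(operation) && d.inputMeaning.equals(inputMeaning)) {
                hits.add(d);
            }
        }
        return hits;
    }
}

whether such descriptions can ever be made precise enough to trust automatically is, as the KAW quote above notes, still an open research problem.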

Stanford Medical Informatics Preprint Archives :: Integration and Beyond: Linking Information from Disparate Sources and into Workflow
"The vision of integrating information -- from a variety of sources, into the way people work, to improve decisions and process -- is one of the cornerstones of biomedical informatics. Thoughts on how this vision might be realized have evolved as improvements in information and communication technologies, together with discoveries in biomedical informatics, and have changed the art of the possible. This review identified three distinct generations of "integration" projects. First-generation projects create a database and use it for multiple purposes. Second-generation projects integrate by bringing information from various sources together through enterprise information architecture. Third-generation projects inter-relate disparate but accessible information sources to provide the appearance of integration. The review suggests that the ideas developed in the earlier generations have not been supplanted by ideas from subsequent generations. Instead, the ideas represent a continuum of progress along the three dimensions of workflow, structure, and extraction."


[ rhetoric ]

Bioinformatics will be at the core of biology in the 21st century. In fields ranging from structural biology to genomics to biomedical imaging, ready access to data and analytical tools are fundamentally changing the way investigators in the life sciences conduct research and approach problems. Complex, computationally intensive biological problems are now being addressed and promise to significantly advance our understanding of biology and medicine. No biological discipline will be unaffected by these technological breakthroughs.

BIOINFORMATICS IN THE 21st CENTURY

