Advantages Of An ODBMS

Below we list some of the advantages we have found in using an ODBMS environment for a website.

Simplicity
We have no doubt that the simplicity of using an ODBMS contributes significantly to our ability to rapidly develop and deploy new functionality and bug fixes. It is extremely straightforward to create and persist Abstract Data Types, since the persistence mechanism directly supports classes and objects and no extra work is needed to map classes to tables (as an RDBMS implementation requires). An ODBMS also directly supports inheritance, which means that no difficult design decisions have to be made regarding the splitting of instance variable storage across tables.
Development cycles are measured in weeks and days, not the months that have typically been the case in many projects that we have worked on. In our opinion, much of our efficiency can be attributed to our use of a Java ODBMS. The transactional model of the ODBMS is easy to understand, and there is no database language syntax to learn over and above that of the implementation language. Developers do not need to use another language to express the database schema, because the Object Model as defined by the Java application classes is the schema.
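
As a minimal sketch of what this looks like in practice, consider the vendor-neutral javax.jdo (JDO) API, which several Java ODBMS products implement; the paper does not name its product, and the Customer class and connection details below are our own illustration:

    import java.util.Properties;
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import javax.jdo.Transaction;

    // A plain domain class: no table mapping and no SQL. Its Java
    // definition, including any inheritance, *is* the schema.
    class Customer {
        String name;
        String email;
        Customer(String name, String email) { this.name = name; this.email = email; }
    }

    public class PersistExample {
        public static void main(String[] args) {
            // Connection properties are vendor-specific; these are placeholders.
            Properties props = new Properties();
            props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                    "com.example.VendorPMF"); // hypothetical vendor factory class
            props.setProperty("javax.jdo.option.ConnectionURL", "odbms://localhost/site");

            PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
            PersistenceManager pm = pmf.getPersistenceManager();
            Transaction tx = pm.currentTransaction();
            try {
                tx.begin();
                // Storing the whole object is a single call; no mapping metadata.
                pm.makePersistent(new Customer("Ada", "ada@example.com"));
                tx.commit();
            } finally {
                if (tx.isActive()) tx.rollback();
                pm.close();
            }
        }
    }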

Schema Management
The persistent Object Model is identical to that of the application class hierarchy. Managing a persistent Object Model is far simpler than managing an environment where the in-memory structures differ from those on disk. However, care must be taken in designing the root objects that serve as entry points to the large collections in the system. Less experienced developers often err in creating multiple root objects for the same underlying tree of objects, leading to confusion and incoherence in the model.
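
One way to enforce this discipline is sketched below, using the JDO API from the earlier example; the SiteRoot class and the lookup strategy are our illustration, not the paper's:

    import java.util.HashMap;
    import java.util.Map;
    import javax.jdo.Extent;
    import javax.jdo.PersistenceManager;

    // A single, well-defined root: all large collections hang off this one
    // object, so every traversal starts from the same entry point.
    class SiteRoot {
        Map<String, Object> collections = new HashMap<>();
    }

    class Roots {
        // Fetch the one root, creating it only when the database is empty.
        // Creating a second root for the same tree is exactly the mistake
        // described above.
        static SiteRoot get(PersistenceManager pm) {
            Extent<SiteRoot> extent = pm.getExtent(SiteRoot.class, false);
            for (SiteRoot root : extent) {
                return root;            // at most one instance should ever exist
            }
            SiteRoot root = new SiteRoot();
            pm.makePersistent(root);
            return root;
        }
    }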

Code Independence
We have found that the completely non-invasive transactional model makes it possible to insulate most business objects in the system from any knowledge of the underlying database. This is important in case we ever have to migrate to another environment at a later stage.
Knowledge of the persistence mechanism is encapsulated in 27 of the 1000 (or so) application classes. While some O/R mapping environments allow for a clean separation of this knowledge, many OO systems making use of an RDBMS must embed knowledge of SQL and tables deeply in the application classes themselves.
To clarify the above point: nowhere in our system does any object have to actively participate in the mechanism used to persist it. Such participation is often required in purchased or home-grown O/R mapping frameworks, since classes need some knowledge of which tables they fit in. This insulation is possible only because objects are persisted as objects, rather than fragmented into various atomic database types. The knowledge of the persistence mechanism that is embedded in the 27 or so “aware” classes relates purely to obtaining connections to the database and managing database-specific transactional mechanisms.
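
A sketch of what one such “aware” class might look like, again in JDO terms (our illustration of the pattern, not the paper's actual code):

    import java.util.function.Consumer;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import javax.jdo.Transaction;

    // One of the few persistence-"aware" classes. Business objects are
    // handed to it; they never see a connection or transaction themselves.
    class UnitOfWork {
        private final PersistenceManagerFactory pmf;

        UnitOfWork(PersistenceManagerFactory pmf) { this.pmf = pmf; }

        void run(Consumer<PersistenceManager> work) {
            PersistenceManager pm = pmf.getPersistenceManager();
            Transaction tx = pm.currentTransaction();
            try {
                tx.begin();
                work.accept(pm);    // the business logic runs here
                tx.commit();
            } finally {
                if (tx.isActive()) tx.rollback();
                pm.close();
            }
        }
    }

A caller simply writes, for example, uow.run(pm -> pm.makePersistent(order)); the domain class itself remains persistence-ignorant.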

Natural Object Model
In an environment that uses an ODBMS, objects that refer to other objects always “contain” the objects they reference. What we mean by this is that there is no distinction between an “in-memory” object and an “on-disk” object. In systems that make use of an RDBMS, objects are often found to contain a key or an index to the referenced object. Once again, some O/R mapping environments remove this complexity, but more often than not the fact that the referenced object is not actually stored as an object leaks through into the Object Model. Objects are always objects in the ODBMS model, and are always available. This naturalness of expression for the developer as well as the designer is very difficult to achieve in any RDBMS environment.
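
The contrast can be made concrete in a few lines (Customer as in the earlier sketch; Order and OrderRow are hypothetical classes of our own):

    // ODBMS style: the reference *is* the object; traversal is a field access.
    class Order {
        Customer customer;      // direct object reference, always a real object
        double amount;
    }

    // RDBMS style, shown for contrast: the reference leaks into the model as
    // a key that must be resolved by a further query before it can be used.
    class OrderRow {
        long customerId;        // foreign key into a customer table
        double amount;
    }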

Collections
Many ODBMS vendors provide highly optimized Collection classes for the efficient management of vast numbers of objects. Iteration over (and management of) these collections is straightforward and requires no complicated setup routines. We believe this is a distinct advantage for developers.
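
For instance, filtering and iterating a large persistent collection in JDO takes only a query string (building on the Customer class from the earlier sketch; the filter and parameter are illustrative):

    import java.util.List;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Query;

    class CollectionExample {
        // A one-line query: no cursor handling or result-set mapping, and
        // the results arrive as ordinary Customer objects.
        @SuppressWarnings("unchecked")
        static void listCustomers(PersistenceManager pm) {
            Query q = pm.newQuery(Customer.class, "email.endsWith(suffix)");
            q.declareParameters("String suffix");
            List<Customer> results = (List<Customer>) q.execute("@example.com");
            for (Customer c : results) {
                System.out.println(c.name);
            }
        }
    }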

Performance
Performance is vital to the online experience: online users expect snappy responses. Some architects believe that doing things in a pure OO manner leads to poor performance. In many situations involving the storage and retrieval of objects, however, an ODBMS delivers significant performance gains over other persistence mechanisms. We hasten to add that there are some instances in which an RDBMS will outperform an ODBMS; these often occur when it is necessary to perform arbitrary queries over large collections of objects.

Design Recovery
By this odd term we mean the ability to quickly and easily correct design mistakes in the class hierarchy/schema. Since the schema of the database is the class hierarchy as defined by the Java classes in the system, changing a class definition to correct a mistake in design or implementation is relatively simple.
In systems where there is a distinction between the class hierarchy and the database schema, correcting flaws can become a complicated matter. Because the schema of an RDBMS is available to other modules (possibly developed outside the scope of the larger system), errors are often introduced when making changes to the database structure: changes to the RDBMS's schema may differ from the expectations of external programs. Such conflicts cannot occur in a true ODBMS, since there is no difference in structure between in-memory, in-use instances and those persistently stored on disk. There are, however, other problems that can occur when changing schemas.

Retention of Objects
Developers make mistakes. In an RDBMS these mistakes may occasionally lead to situations where inadvertently deleting a row in a table leaves dangling references, since triggers are often imperfectly implemented. An ODBMS with garbage-collection-based object removal completely eliminates this problem: persistent objects are only “deleted” once they are no longer referenced by any other instances reachable from a well-defined root object. The ability to recover from an accidental deletion has often allowed the system to recover gracefully from potentially embarrassing situations. This has occurred when flawed code eliminated the primary reference to an object: because other objects still referenced the “deleted” object, it was possible to restore the reference and thus recover the object. Obviously this only works if a persistent garbage collection has not yet been performed.
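
The recovery scenario just described can be sketched as follows, assuming the reachability rule above; the class and field names are hypothetical:

    import java.util.HashSet;
    import java.util.Set;

    // Both sets are persistent fields reachable from the root; the names
    // are our own, not the paper's.
    class CustomerIndex {
        Set<Customer> active = new HashSet<>();
        Set<Customer> archive = new HashSet<>();
    }

    class RecoveryExample {
        static void accidentalDeleteAndRecover(CustomerIndex index, Customer c) {
            index.active.remove(c);   // flawed code drops the primary reference
            // c is not yet gone: index.archive still reaches it, so a
            // persistent garbage collection would retain the object.
            index.active.add(c);      // recovery is simply re-linking it
        }
    }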

Implementation of The Secure Hospital Database

The hospital database should be implemented on a relational DBMS whose database files are protected from user access both by the permissions on the directories containing them and by the permissions on the files themselves. Users cannot look at the files in the database except through the DBMS and its protection scheme; that is, users cannot access the database at the operating-system level. Additional binary files with a special format are included, making the decoding of any information more difficult. Moreover, the DBMS has a built-in hierarchical security system, through which the database administrator (D.B.A.) controls the types of access allowed to the various levels of users.

The security labels defined during the secure conceptual design phase were implemented using the notions of roles and groups supported by the DBMS. The user roles handled in our implementation are: doctor, normal nurse, and special nurse. The notion of user roles was implemented using the notion of user groups provided by the DBMS (a group is an identifier that can be used to apply permissions to a list of DBMS users associated with the identifier). After studying the user roles under consideration, the following clearances were assigned: clearance level 3 to the user role doctor, clearance level 2 to the user role special nurse, and clearance level 1 to the user role normal nurse.

The data sets loaded into the hospital database are: the nurse record, the doctor record, and the patient record, which contains the medical information, the laboratory information, the follow-up information, and the personal patient information. Being a relational database management system, the DBMS stores data in tables. The labelling of the data sets was described above. However, it can change dynamically if one of the security constraints is satisfied, leading to upgrading and/or fragmentation. The classification level of each tuple of a table was implemented by adding a column to each table. This column contained the tuple classification, that is, 1, 2, or 3 instead of confidential, secret, or top secret. Zero (0) indicates that the data contained in the tuple is a cover story. Of course, the tuple class data was visible only to the D.B.A.
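
A sketch of the scheme in standard JDBC follows; the table layout and connection URL are illustrative, since the paper does not give its exact DDL, and reconciling cover stories with the real tuples they mask (polyinstantiation) would require additional logic not shown:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class TupleClassExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn =
                     DriverManager.getConnection("jdbc:example://hospital")) { // placeholder URL
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE patient_record ("
                             + " patient_id  INTEGER,"
                             + " diagnosis   VARCHAR(200),"
                             + " tuple_class SMALLINT)"); // 0 = cover story, 1..3 = sensitivity
                }
                // A special nurse (clearance 2) sees tuples at or below level 2,
                // including cover stories (level 0). tuple_class itself is never
                // selected, mirroring its visibility to the D.B.A. alone.
                int clearance = 2;
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT patient_id, diagnosis FROM patient_record"
                      + " WHERE tuple_class <= ?")) {
                    ps.setInt(1, clearance);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getInt(1) + " " + rs.getString(2));
                        }
                    }
                }
            }
        }
    }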

Apart from that, an additional column was added to the related medical data tables, containing the flag ‘h’ (history) or ‘l’ (last). This flag was automatically updated when new relevant data was inserted. This is one of the integrity constraints implemented in the database.
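
The effect of this integrity constraint can be expressed in JDBC as a single transaction; the paper performs the update automatically inside the DBMS, and the table and column names here are illustrative:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    class HistoryFlag {
        // When a new "last" ('l') row is inserted for a patient, the previous
        // "last" row for that patient is demoted to history ('h').
        static void insertLabResult(Connection conn, int patientId, String result)
                throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement demote = conn.prepareStatement(
                     "UPDATE lab_result SET flag = 'h' WHERE patient_id = ? AND flag = 'l'");
                 PreparedStatement insert = conn.prepareStatement(
                     "INSERT INTO lab_result (patient_id, result, flag) VALUES (?, ?, 'l')")) {
                demote.setInt(1, patientId);
                demote.executeUpdate();
                insert.setInt(1, patientId);
                insert.setString(2, result);
                insert.executeUpdate();
                conn.commit();      // both steps succeed or neither does
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }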

Users were allowed to access the data only through predefined forms. A DBMS form is the electronic equivalent of a paper form; it is displayed on the computer screen and is used for data input and data display. Forms consist of trim (text that provides helpful information) and fields; the latter display the data and accept data entry.

The access types supported by the secure prototype application are insert, read, update, execute, and cancel. Thus, the possibility of deleting information is excluded. This was necessary not only for better control of the information flow, but also to reduce the possibility of fatal mistakes.

The security constraints defined earlier were implemented using rules and procedures written in SQL.
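
The paper does not reproduce its rules, but a content-based upgrade constraint of the kind described earlier (dynamic relabelling when a constraint is satisfied) might look as follows; we use PL/pgSQL trigger syntax purely for illustration, as the actual DBMS and rule language are not specified:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    class UpgradeConstraint {
        // Any inserted or updated tuple whose diagnosis matches a sensitive
        // pattern is upgraded to classification level 3 before being stored.
        // The pattern and table name are illustrative.
        static void install(Connection conn) throws SQLException {
            try (Statement st = conn.createStatement()) {
                st.execute(
                    "CREATE FUNCTION upgrade_sensitive() RETURNS trigger AS $$ " +
                    "BEGIN " +
                    "  IF NEW.diagnosis LIKE '%psychiatric%' THEN " +
                    "    NEW.tuple_class := 3; " +   // force upgrade to top secret
                    "  END IF; " +
                    "  RETURN NEW; " +
                    "END; $$ LANGUAGE plpgsql");
                st.execute(
                    "CREATE TRIGGER t_upgrade BEFORE INSERT OR UPDATE ON patient_record " +
                    "FOR EACH ROW EXECUTE FUNCTION upgrade_sensitive()");
            }
        }
    }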

Implementation of a Secure Database in a Hospital

A hospital with health care information systems is one example of a security-critical environment. It is one of the few environments in which a confidentiality breach, wrong information, or even a relatively minor loss of access to information may be life-threatening. Security is therefore an important issue which encompasses all aspects of the organization, from patient and staff safety to deeply personal information about staff and patients that is distributed throughout the organization. Due to the widespread use of database technology, database security today plays a significant role in the overall security of health care information systems.
The development of a secure database for a health care information system requires an appropriate multiphase design methodology which guides the steps of the development and provides tools supporting the automatic execution of some steps. The proposed methodology and security policy help ensure all three aspects of security (secrecy, integrity, and availability) without introducing significant overheads. The approach is based on the integration of mandatory and discretionary security policies and takes security into consideration from the very first steps of the design.
The choice of an appropriate security policy and a suitable secure database design methodology is crucial in health care environments. The two best-known database security policies are the mandatory and the discretionary ones. Discretionary security policies govern the access of users to information on the basis of the user's identity and of rules specifying, for each user and each object in the system, the types of access the user is allowed on the object. Discretionary security policies are flexible and suitable for a variety of implementations; however, they provide insufficient control of the information flow (e.g. they are vulnerable to malicious attacks such as Trojan horses). On the other hand, mandatory security policies provide a high level of certification for security, based on the use of unforgeable security labels assigned both to users and to data; thus, they make it possible to track the flow of information. They are, however, mainly suitable for environments where the users and the objects can be easily classified (e.g. the military).
Neither of these two major policies is sufficient by itself to cover the security needs of health care environments. Hence, it has been necessary to propose a new security policy, based on the integration of mandatory and discretionary control policies. In order to maximize the effectiveness and decrease the complexity of implementing this policy, a step-by-step design methodology with integrated security has been proposed. In particular, the responsibility of a role in the application determines the security label (clearance) of the user role. The security label (classification) of the data represents its level of sensitivity. The user roles are assigned nodes in the user role hierarchy (URH). Then, beginning from the lowest level of the hierarchy, the data items are assigned a security label equal to that of the users that must be cleared to access them. In the end, the security requirements are examined. This may lead to fragmentation of some relations and/or upgrading (since we support tuple-level granularity). It must be noted that polyinstantiation, which is a characteristic of the multilevel security policy, is supported only in the form of cover stories. This is possible due to the support of the write-down mechanism (with no fear of inference), which is essential for the hospital environment.