
Main functions of the DBMS

Modern database management systems are used in a great many settings, yet not everyone knows what they are or how their functions are used. These tools offer an enormous range of capabilities, so to use them fully you need to understand what they can do and which features are useful to you.

Data management

The first function of a DBMS is managing data in external memory. It provides the basic external-memory structures needed not only to store the data that makes up the database itself, but also to support service tasks such as fast access to individual files. Some implementations actively use the facilities of existing file systems, while others work directly at the level of external-memory devices. Note, however, that in a well-developed DBMS the user is never told whether a file system is used at all and, if so, how the files are organized; in particular, the system maintains its own naming scheme for the objects in the database.

Managing the buffers of RAM

In most cases a DBMS manages a fairly large database, often far larger than the available RAM. Clearly, if every access to a data element triggered an exchange with external memory, the whole system would run at the speed of that external memory, so practically the only way to achieve real performance is to buffer data in RAM. Even when the operating system provides system-wide buffering, as UNIX does, this is not enough for the purposes of a DBMS, because the DBMS knows far more about the usefulness of buffering each specific part of the database. For this reason, developed systems maintain their own set of buffers, together with their own buffer-replacement discipline.
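As a rough illustration of one possible replacement discipline (not how any particular DBMS implements it), here is a minimal least-recently-used buffer pool in Python. The class name, the capacity, and the `read_page` callback are all invented for the sketch:

```python
from collections import OrderedDict

class BufferPool:
    """Toy buffer manager: keeps a fixed number of pages in RAM and
    evicts the least recently used page when the pool is full."""

    def __init__(self, capacity, read_page):
        self.capacity = capacity      # max pages held in RAM
        self.read_page = read_page    # callback that fetches a page from "disk"
        self.pages = OrderedDict()    # page_id -> page data, ordered by recency
        self.disk_reads = 0           # how often we had to go to external memory

    def get(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)   # hit: mark as most recently used
            return self.pages[page_id]
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)    # miss on a full pool: evict the LRU page
        self.disk_reads += 1
        data = self.read_page(page_id)
        self.pages[page_id] = data
        return data

pool = BufferPool(capacity=2, read_page=lambda pid: f"page-{pid}")
for pid in [1, 2, 1, 3, 1]:
    pool.get(pid)
print(pool.disk_reads)  # 3 -- the hot page 1 is served from the buffer twice
```

With five accesses but only three disk reads, the frequently used page never leaves RAM, which is exactly the effect the DBMS-managed buffers described above are after.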

It is worth noting that there is a separate line of database systems oriented toward keeping the entire database permanently in RAM. It rests on the assumption that RAM sizes will soon grow so much that buffering ceases to be a concern, and the main functions of this type of DBMS will then come into their own. For now, this work remains at the experimental stage.

Transaction management

A transaction is a sequence of operations on the database that the management system treats as a single whole. If a transaction completes successfully, the DBMS records the changes it made in external memory; otherwise, none of those changes is reflected in the state of the database. This behavior is required to maintain the logical integrity of the database. Note that a correct transaction mechanism is a prerequisite even in single-user databases, whose purpose and functions differ considerably from those of other kinds of systems.

The property that a transaction begins in a consistent state of the database and leaves it in a consistent state when it finishes makes the transaction extremely convenient as a unit of activity against the database. When the management system handles concurrently executing transactions properly, each individual user can, in principle, feel like the only one, although in practice users of a multi-user system will still sometimes notice the presence of their colleagues. The transaction-management functions of a multi-user DBMS also involve the notions of a serial execution plan and serialization.
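The all-or-nothing behavior described above can be demonstrated with Python's built-in `sqlite3` module, where a connection used as a context manager forms one transaction; the table, names, and balances here are invented for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 50)])
con.commit()

try:
    with con:  # the with-block is a single transaction: all or nothing
        con.execute("UPDATE accounts SET balance = balance - 80 WHERE name = 'alice'")
        # a failure here would leave the transfer half-done without transactions
        raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    pass  # the half-finished transfer is rolled back automatically

alice = con.execute("SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
print(alice)  # 100 -- the debit never became visible
```

Because the exception aborts the transaction, the debit of alice's account is undone and the database returns to its previous consistent state.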

What do they mean?

Serialization of concurrently executing transactions means constructing a plan for their operation in which the overall effect of the mix is equivalent to the result of executing them one after another.

A serial execution plan is a defined ordering of actions that leads to serialization. Clearly, if the system can provide a genuinely serial execution of a mix of transactions, then for any user who starts a transaction the presence of the others is completely invisible, except that everything runs a little more slowly than in single-user mode.

There are several basic serialization algorithms. In centralized systems, the most popular algorithms today are based on synchronization locks on database objects. With any serialization algorithm, conflicts can arise between two or more transactions over access to certain database objects. In such a situation, to keep the plan serializable, one or more transactions must be rolled back, that is, their changes to the database must be undone. This is one of the situations in which a user of a multi-user system does notice the presence of others.
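A heavily simplified sketch of lock-based conflict handling: a tiny lock table in which a transaction that requests an object already locked by someone else is immediately rolled back (a "no-wait" policy, chosen here only for brevity; real systems usually block or detect deadlocks instead). All names and the policy itself are assumptions of the sketch:

```python
locks = {}        # object -> id of the transaction holding the lock
aborted = set()   # transactions rolled back because of a conflict

def release_all(txn):
    """Drop every lock held by a transaction (part of its rollback)."""
    for obj in [o for o, t in locks.items() if t == txn]:
        del locks[obj]

def acquire(txn, obj):
    """Try to take a lock on obj; on conflict, roll the requester back."""
    owner = locks.get(obj)
    if owner is not None and owner != txn:
        aborted.add(txn)     # conflict with another transaction
        release_all(txn)     # undoing its changes would happen here too
        return False
    locks[obj] = txn
    return True

acquire("T1", "row-42")
ok = acquire("T2", "row-42")   # T2 collides with T1 and is aborted
print(ok, sorted(aborted))     # False ['T2']
```

T2's rollback is precisely the moment when its user "feels the presence" of T1, as the text above describes.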


Reliability of data storage

One of the main requirements for modern systems is reliable storage of data in external memory. In particular, this means that the main functions of a DBMS include the ability to restore the last consistent state of the database after any software or hardware failure. Two kinds of hardware failure are usually considered:

  • Soft failures, which can be interpreted as an unexpected stop of the machine (the most common case being an emergency power cut);
  • Hard failures, which are characterized by partial or complete loss of data on external storage media.

As examples of software failures one can cite a system crash while trying to use some feature that is not among the main functions of the DBMS, or the abnormal termination of a user utility, as a result of which some transaction was left incomplete. The first situation can be treated as a special kind of soft failure; when the second occurs, the consequences of a single transaction must be eliminated.

Clearly, in either case, some additional information is needed to restore the database. In other words, to keep data storage reliable, the storage must be redundant, and the part of the data used for recovery must be guarded especially carefully. The most common way of maintaining such redundant data is to keep a change log.

What is it and how is it used?

The log is a special part of the database that is not accessible through the ordinary DBMS functions and is maintained especially carefully. Some systems even keep two copies of the log on different physical media. The log receives records of every change made to the main part of the database, and different management systems log changes at different levels: in some, a log record corresponds to a particular logical modification operation; in others, to a minimal internal operation that modifies a page of external memory; and some DBMSs combine the two approaches.

In every case the log follows a so-called write-ahead logging strategy: the record describing a change to any database object reaches the external log memory before the changed object itself does. It is known that if the DBMS correctly observes this protocol, the log suffices to solve any problem of restoring the database after a failure.
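The ordering rule can be stated in a few lines of Python. The two dictionaries below are stand-ins, invented for the sketch, for the log and the data pages in external memory:

```python
log = []      # append-only change log (stands in for stable log storage)
pages = {}    # database pages (stands in for stable page storage)

def write_page(page_id, value, txn):
    # Write-ahead rule: append the log record FIRST, only then modify
    # the page itself. If a crash hits between the two steps, the log
    # still tells recovery what the page was supposed to contain.
    log.append((txn, page_id, pages.get(page_id), value))  # (txn, page, old, new)
    pages[page_id] = value

write_page("p1", "hello", txn="T1")
print(log[0])   # ('T1', 'p1', None, 'hello') -- logged before the page changed
```

Keeping both the old and the new value in each record is what later allows the same log to be used for undo (rollback) as well as redo (recovery).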


The simplest recovery situation is the rollback of an individual transaction. Strictly speaking, this does not require the system-wide change log; a local modification log per transaction would suffice, with rollback performed by scanning that log from the end and undoing each record in turn. In most cases, however, local logs are not supported: individual rollbacks are carried out against the system-wide log, for which all the records of each transaction are linked into a backward list.
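A minimal sketch of that reverse scan, with a hand-made system-wide log of `(txn, object, old_value, new_value)` records (the record layout and values are invented for the example):

```python
# System-wide change log; T1 and T2 ran interleaved.
log = [
    ("T1", "a", 0, 1),
    ("T2", "b", 5, 6),
    ("T1", "a", 1, 2),
]
db = {"a": 2, "b": 6}   # current state of the database

def rollback(txn):
    # Undo one transaction: walk its records from the END of the log
    # backwards, restoring each old value in turn.
    for t, obj, old, new in reversed(log):
        if t == txn:
            db[obj] = old

rollback("T1")
print(db)   # {'a': 0, 'b': 6} -- T1 is undone, T2's work is untouched
```

Scanning in reverse matters: undoing T1's later update first is what makes restoring the earliest old value correct.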

After a soft failure, the external memory of the database may contain objects modified by transactions that had not finished by the time of the failure, and may lack objects updated by transactions that completed successfully before it, because the RAM buffers, whose contents vanish in such a failure, had not yet been flushed. If the write-ahead logging protocol is observed, however, records describing the modification of every such object are guaranteed to be present in external memory.

The goal of recovery after a soft failure is to bring the external memory of the main database into a state that contains the effects of all committed transactions and no traces of unfinished ones. To achieve this, the DBMS rolls back the unfinished transactions and redoes those operations of committed transactions whose results had not yet reached external memory. This process involves a good many subtleties, mostly related to the management of the log and the buffers.

Hard crashes

To restore the database after a hard failure, not only the log is used but also an archive copy of the database: a complete copy made at the moment the log was started. Naturally, recovery requires that the log survive, which is why, as mentioned earlier, such stringent requirements are placed on keeping it safe in external memory. Recovery then consists of replaying, on top of the archive copy, all the transactions recorded in the log that had committed by the time of the failure. In principle, the work of unfinished transactions could also be replayed and continued after recovery, but most real systems do not do this, since recovery after a hard failure is a lengthy procedure as it is.
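The archive-plus-log idea fits in a few lines. The record shapes (`("write", txn, obj, value)` and `("commit", txn)`) and the data are invented for the sketch:

```python
archive = {"a": 0, "b": 0}     # full database copy made when the log was started
log = [
    ("write", "T1", "a", 1),
    ("commit", "T1"),
    ("write", "T2", "b", 9),   # T2 had not committed when the hard failure hit
]

def recover(archive, log):
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    db = dict(archive)          # start from the archive copy...
    for rec in log:             # ...and replay only committed transactions
        if rec[0] == "write" and rec[1] in committed:
            db[rec[2]] = rec[3]
    return db

print(recover(archive, log))   # {'a': 1, 'b': 0}
```

T1's committed write survives the failure, while T2's uncommitted one is simply never replayed, which is exactly the outcome described above.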

Language support

Different languages are used to work with modern databases. Early DBMSs, whose purpose and functions differed considerably from modern systems, provided several highly specialized languages, chiefly SDL (schema definition language) and DML (data manipulation language), intended for defining the database schema and manipulating the data, respectively.

SDL served to define the logical structure of the database, that is, the particular structure of the database as it is presented to users. DML comprised a whole set of data-manipulation operators for entering data into the database and for deleting, modifying, or retrieving existing data.

Modern DBMSs instead support a single integrated language that provides all the facilities needed to work with a database, from its initial creation onward, and offers a standard user interface. The standard language providing the basic functions of today's most common relational DBMSs is SQL.
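The "single integrated language" point is easy to see with SQLite through Python's `sqlite3` module: schema definition and data manipulation are both just SQL statements on the same connection (the table and titles are invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Schema definition -- the role the early SDL played...
con.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT NOT NULL)")

# ...and data manipulation -- the role of DML -- in the same language.
con.execute("INSERT INTO books (title) VALUES ('Transaction Processing')")
con.execute("UPDATE books SET title = 'Transaction Processing, 2nd ed.' WHERE id = 1")

title = con.execute("SELECT title FROM books").fetchone()[0]
print(title)   # Transaction Processing, 2nd ed.
```

There is no separate tool or language for creating the schema versus querying it, which is exactly the integration the paragraph describes.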

What is it like?

First of all, this language combines the basic functions of DML and SDL: it makes it possible both to define the specific schema of a relational database and to manipulate the data. The naming of database objects is supported directly at the language level, in the sense that the compiler translates object names into their internal identifiers using specially maintained service catalogs; the core of the system never deals with tables or their columns by name at all.

The SQL language includes a whole set of facilities for defining the integrity constraints of a database. Such constraints are likewise stored in special catalog tables, and integrity is enforced at the language level: while compiling individual database-modification statements, the compiler generates, on the basis of the integrity constraints present in the database, the corresponding program code.
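Declarative constraints and their enforcement can be shown with SQLite (note that SQLite enforces foreign keys only after `PRAGMA foreign_keys = ON`; the tables and values are invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # SQLite checks FKs only when asked
con.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY)")
con.execute("""
    CREATE TABLE books (
        id INTEGER PRIMARY KEY,
        author_id INTEGER REFERENCES authors(id),
        pages INTEGER CHECK (pages > 0)    -- declarative integrity constraint
    )
""")
con.execute("INSERT INTO authors VALUES (1)")
con.execute("INSERT INTO books VALUES (1, 1, 300)")      # satisfies both constraints

rejected = False
try:
    con.execute("INSERT INTO books VALUES (2, 1, -5)")   # violates CHECK (pages > 0)
except sqlite3.IntegrityError:
    rejected = True
print("negative page count rejected:", rejected)
```

The violating row never enters the table: the constraint declared once in the schema is checked by the system on every modification, just as the text describes.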
