
Archiving for External Log of UI Logging


Introduction

 

The time and effort required to administer external log data depends to a great extent on its size. Database size may also affect the performance of some database accesses. Freeing the external log of “old” data should therefore reduce the administrative burden significantly. “Old” refers to the age of the data and is measured in weeks and months rather than years. Old data belongs to closed business transactions and is therefore no longer needed for everyday operations. Moreover, depending on the growth rate of the database, removing old data may even be a prerequisite for keeping the database operational.

 

However, simply deleting such data from the database is not feasible, since it may still be needed for legal or business reasons. Keeping it in the database indefinitely is not an option either. This is exactly where data archiving comes in. It ensures that external log data that is no longer required is moved from the database to archive files and then deleted from the database. This guarantees that the database remains manageable in the long run and that the archived external log data can still be accessed if the need arises.

 

Comprehensive archivability checks at the application level guarantee that the data archived is consistent and complete and that only data from completed business transactions is considered.

 

Archiving data means:

  • Easier database administration
  • Faster backup and recovery
  • Faster release upgrades
  • Efficient use of resources
  • Reduced hardware costs (fewer hard disks, less CPU and memory needed)
  • Reduced administration costs
  • Shorter response times in transactions

[Figure: Architecture]

The product standard for data archiving describes four scenarios. We will follow the recommended scenario 2, which consists of three main steps (preprocessing, writing, and deletion) and two additional steps (read and reload).


Description of Scenario 2  


A preprocessing program carries out the archivability check and sets a status (for example, “Archivable”) for all archivable external log data. A write program then writes all data that has this status to the archive without carrying out any additional business checks. Finally, a delete program performs a check read against the archive and deletes the data from the database.
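The following minimal Python sketch illustrates this three-step flow. It is not the actual SAP implementation: the record layout, the status value "ARCHIVABLE", the file name external_log.archive, and the use of a JSON-lines file as the archive are all assumptions made for the example.

import json

ARCHIVE_FILE = "external_log.archive"  # hypothetical archive file name

# A plain list stands in for the external log table in the database.
database = [
    {"id": 1, "transaction": "closed", "status": ""},
    {"id": 2, "transaction": "open",   "status": ""},
    {"id": 3, "transaction": "closed", "status": ""},
]

def preprocess(db):
    """Archivability check: flag entries that belong to closed business transactions."""
    for rec in db:
        if rec["transaction"] == "closed":
            rec["status"] = "ARCHIVABLE"

def write(db, path):
    """Write every flagged entry to the archive file; no further business checks."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in db:
            if rec["status"] == "ARCHIVABLE":
                f.write(json.dumps(rec) + "\n")

def delete(db, path):
    """Check read: remove from the database only entries that are found in the archive."""
    with open(path, encoding="utf-8") as f:
        archived_ids = {json.loads(line)["id"] for line in f}
    db[:] = [rec for rec in db if rec["id"] not in archived_ids]

preprocess(database)
write(database, ARCHIVE_FILE)
delete(database, ARCHIVE_FILE)
print(database)  # only the open transaction (id 2) is left in the database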

 

Using this status prevents the data loss that could occur if data that has already been archived but not yet deleted were changed. Another advantage is the separation of the relatively complex business checks from the (also relatively complex) process of reading the complete data of a business object and writing it to the archive. This improves the maintainability of the programs, although it increases the overall time and effort needed for data archiving. This drawback should not be overstated: under normal circumstances the archivability check processes only a small portion of the data belonging to an archiving object, although it must read additional data to verify the business context in which the actual data is embedded. The amount of data that has to be read more than once is therefore usually small, even when the check and write phases are separated.

 

The read program reads the archive files and displays the data.
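As a sketch, and under the same assumptions as the example above (a JSON-lines archive file with a hypothetical name), a read program could look like this:

import json

def read_archive(path="external_log.archive"):
    """Read the archive file and display each archived entry."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            print(json.loads(line))

read_archive()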

 

You can reload archived data from the archive files back into the database using the reload program.
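A reload program reverses the write and delete steps. Again a sketch under the same assumptions; entries already present in the database are skipped so that reloading can be repeated safely.

import json

def reload_archive(db, path="external_log.archive"):
    """Copy archived entries back into the database, skipping ones already there."""
    existing = {rec["id"] for rec in db}
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            if rec["id"] not in existing:
                db.append(rec)

database = [{"id": 2, "transaction": "open", "status": ""}]
reload_archive(database)
print(database)  # the archived entries are back in the database alongside id 2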

