Why the database is the real key to DMS success

When implementing a new document management system (DMS), the database component on which the system relies is often not given the consideration it deserves. Many IT professionals will look at the key performance areas of the host, namely processing (CPU cores), memory (RAM), storage performance and capacity, and networking, but often fail to appreciate that the database is a key component whose behaviour affects the platform as a whole. A few typical scenarios for database installation include:

  • One server for all: With little existing infrastructure, small to medium sized businesses often host the whole platform on a single server. This can have distinct advantages, but there are specific considerations with this model, especially if the DMS platform expands over time.
  • Shared database host: Businesses often already have a compatible database host within their IT infrastructure that can accommodate the new application databases. This brings distinct advantages around resilience and scalability, but again there need to be considerations around performance.
  • Dedicated database host: For larger or enterprise-sized platforms, a dedicated database host is usually the preferred option. Whilst this is likely the most costly solution, it can provide significant performance advantages on much larger platforms, although those gains can easily be negated by poor planning and maintenance.
  • DBaaS (cloud): Hosted database services are the relatively new ‘kid on the block’ and can offer excellent resilience with the lowest maintenance overhead and consistent, even performance, but there are key considerations, particularly around networking, that may make this offering less than ideal.

Like the DMS application itself, processing capacity, memory and storage will all impact the performance of a database host, but there are a few other factors, not all of them related to physical resources, that need to be considered and regularly reviewed to keep your document management system running efficiently, reliably and resiliently.

So here are the basics to consider. This is not an exhaustive, detailed list (that would produce a small essay, potentially even a #1 bestselling handbook) but hopefully a few pointers to ensure your platform is working at its best for your business.

  1. Maintenance: Many database installations are, as a rule, forgotten about after the DMS platform goes live, with the exception of a regular backup or server snapshot. However, that backup only helps to maintain the resilience and recoverability of the platform, not its performance. Transaction logs need to be backed up and cleared, indexes updated and resources (disk / CPU / memory) optimised to keep the system performing. It is worth taking the time to establish a regular database and transaction log maintenance regime that keeps performance levels optimised. To ensure performance does not degrade, and therefore that the platform provides the best service to your organisation, you can automate a regular task to (a scripted sketch of these tasks follows this list):
    • Backup the current transaction log (every 24 hours)
    • Shrink the database files and release now-unused space back to the server (every 24 hours, AFTER the transaction log task)
    • Perform a basic re-index of table contents (every 7 days)
  2. Processor: Especially important if you are sharing the installation with the DMS software. Check and review the CPU core utilisation logged during peak usage to ensure there is generally more than 25% headroom available. If the server is often sustaining 95% – 100% processing activity, this needs to be addressed.
  3. Memory: Database software can reserve memory even if it is not being used (other applications can also do this). On shared installation servers, or servers with multiple database instances, capping the maximum memory allocation for each database instance or application will keep resource usage, and therefore performance, balanced. If the server is regularly using greater than 90% of its RAM, consider allocating more (a simple headroom check is also sketched after this list).
  4. Storage: It is not just about space for the database and associated log files, although disk performance does reduce the closer to capacity the disk gets. The general read / write performance of the physical media has potentially a greater impact on applications, especially databases that perform multiple data-changing transactions every second. For example, a single SSD will have far faster read throughput than an older HDD device - around 3,500 MB per second versus 250 MB per second.

    Storage installed in the server itself can be seen as faster, but if the drive is hosting the operating system, application files and stored documents, as well as the database files, then the operational capacity is shared. When using local storage, placing the database files on a volume that is physically located on a separate hard disk from the rest of the files will have significant performance benefits. Hosting the database files on dedicated NAS or SAN devices improves disk performance but introduces potential communication delays. Various storage disk configurations (often referred to as RAID levels) can also change resilience levels (error handling / failure recovery) as well as performance; some RAID configurations are tailored towards performance, others towards data protection and resilience. And then there is…
  5. Network (LAN/WAN): The performance of a platform is only as good as its weakest component, and often it is the network that lets the side down – especially true with cloud or cloud hybrid installations. You can have the fastest SAN disk array available, but a dodgy network connection to that array will nullify any performance gains afforded by your dedicated storage device.
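
As an illustration of the maintenance tasks in point 1, here is a minimal sketch of a scheduled script, assuming a Microsoft SQL Server back end accessed from Python via pyodbc. The server name, database name, backup path and driver string are placeholders rather than part of any particular product, and the scheduling itself (Windows Task Scheduler, cron or SQL Agent) is left out.

import datetime
import pyodbc

SERVER = "dms-sql01"                  # placeholder host name
DATABASE = "DmsDocuments"             # placeholder DMS database name
LOG_BACKUP_DIR = r"E:\Backups\Logs"   # placeholder backup location

# BACKUP and DBCC statements cannot run inside a transaction, so the
# connection is opened with autocommit enabled.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    f"SERVER={SERVER};DATABASE={DATABASE};Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# 1. Back up the current transaction log (run every 24 hours).
stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M")
cur.execute(
    f"BACKUP LOG [{DATABASE}] "
    f"TO DISK = N'{LOG_BACKUP_DIR}\\{DATABASE}_log_{stamp}.trn'"
)

# 2. Shrink the database files and release now-unused space back to the
#    server (run every 24 hours, after the log backup above).
cur.execute(f"DBCC SHRINKDATABASE ([{DATABASE}])")

# 3. Basic re-index of every user table (run every 7 days).
tables = cur.execute(
    "SELECT s.name, t.name FROM sys.tables t "
    "JOIN sys.schemas s ON s.schema_id = t.schema_id"
).fetchall()
for schema, table in tables:
    cur.execute(f"ALTER INDEX ALL ON [{schema}].[{table}] REBUILD")

conn.close()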
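
In the same spirit, the processor and memory headroom figures from points 2 and 3 can be watched with a short check such as the one below, which uses the psutil library; the thresholds simply mirror the percentages quoted in this article and should be tuned to your own platform.

import psutil

# Sample CPU utilisation over 60 seconds to smooth out short spikes,
# then read current memory usage.
cpu_used = psutil.cpu_percent(interval=60)
ram_used = psutil.virtual_memory().percent

if cpu_used > 75:    # less than 25% processing headroom remaining
    print(f"WARNING: CPU at {cpu_used:.0f}% during the sample window")
if ram_used > 90:    # memory pressure: consider allocating more RAM
    print(f"WARNING: RAM at {ram_used:.0f}% in use")
    # On SQL Server, each instance can also be capped with
    # sp_configure 'max server memory (MB)' so that one instance
    # cannot starve the others on a shared host.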

PacSol recommends that your organisation's existing document management system database is reviewed regularly to ensure that any backup schedule in place has completed successfully and meets the needs of the business for compliance. That database review should also extend to ensuring the regular maintenance schedule has completed successfully, that transaction logs are being cleared down (not growing unchecked) and that table indexes are optimised.
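
As an example of that review, on a SQL Server back end the backup history is recorded in the msdb database, so the most recent full and transaction log backups for each database can be listed with a short script such as the sketch below (connection details are again placeholders).

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dms-sql01;"
    "DATABASE=msdb;Trusted_Connection=yes;"
)
rows = conn.cursor().execute(
    """
    SELECT database_name,
           type,                        -- 'D' = full, 'L' = transaction log
           MAX(backup_finish_date) AS last_backup
    FROM   msdb.dbo.backupset
    WHERE  type IN ('D', 'L')
    GROUP  BY database_name, type
    ORDER  BY database_name, type
    """
).fetchall()

# Gaps in the backup schedule stand out as stale or missing dates.
for db_name, backup_type, last_backup in rows:
    label = "full" if backup_type == "D" else "log"
    print(f"{db_name}: last {label} backup finished {last_backup}")
conn.close()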

It is also worth remembering to ensure your disaster recovery plan for any platform is up to date (includes any recent changes), has the contact details for any relevant service providers (such as PacSol) and is ideally tested annually to ensure it not only functions but also meets the requirements of the organisation should the worst happen.



Toby Gilbertson, Director of Operations. May 2021 (updated November 2023)

#PacSolUK #Database #PlatformMaintenance #DocumentManagement #DocumentManagementSystem #EnterpriseContentManagement #ECM