December 27 2022

Website performance problems on 1C-Bitrix

We have conducted hundreds of site performance audits, and the problems we find across projects are quite typical. In today's article we will cover not only these common problems, but also some interesting rare cases we have encountered.
Since 2015 we have been providing the official 1C-Bitrix service: the site performance audit. As part of the audit we analyze not only the server configuration and settings, but also review the code: we deploy the project on our test bench, where we study what exactly can be changed in the code and component settings to make the site faster.

First, though, let's talk about what page loading speed and site performance depend on in general. In our view, there is a widespread misconception that all page-speed problems can be solved by tuning settings or changing hosting. This does help in the case of a few classic configuration errors, which we will discuss below; however, the main problems most often lie in suboptimal algorithms, 1C-Bitrix settings, and caching issues.

The explanation is simple: modern processors have reached a plateau in single-core clock speed. What does this mean for us? When one request runs on one processor core, there is practically no way to halve the request execution time by buying hardware. Most likely the processor on your server already runs at a high frequency, and a new server will not double it. Hosted servers mostly differ in the amount of RAM and the number of cores/processors.

The situation can be compared to a supermarket checkout: one register serves one customer in a certain amount of time. There is a limit to human speed, and a customer cannot be served faster under the given conditions. If the cashier has to key in the number of every item by hand, replacing the cashier may help a little, but it will not make checkout twice as fast. You need to change the checkout procedure itself (in our case, optimize the code). Increasing the number of processors or cores on the server is, accordingly, comparable to opening more registers: more customers can be served at the same time, but each individual customer is still served just as slowly.

Thus, a competent analysis of project performance includes:

analysis of the server configuration, an audit of its settings and capacity (what if the "cashier" is inherently slow: then any procedure will seem slow to us, and we will not be able to tell what exactly to optimize);
algorithmic analysis of the project: profiling page generation time and analyzing how those pages behave both with and without caching (checking how optimal the site's "checkout procedure" is);
analysis of page loading speed in the browser, i.e. frontend optimality.
Let's look at these points in detail.

 

Audit of the server configuration
First of all, even before the performance audit begins, we check the synchronization of the RAID array (hard drive mirroring) and its health, as well as the correctness of the backup system. Very often mirroring and backups were configured once and never checked again, and yet, by our statistics, backups break about once a month. If you have not verified your backups for longer than that, it is very likely you do not have any. Then we proceed to the performance audit itself, which consists of quite specific but critical checklist items.

1. Is the site hosted on shared hosting, a VPS/VDS, or a virtual machine?

Unfortunately, in almost all cases shared hosting oversells: the client is sold more hardware capacity than physically exists. The consequence can be a shortage of disk I/O or CPU time during peak hours. With few exceptions, for an e-commerce project that earns money we recommend a dedicated server on real hardware. Yes, the cost difference between a virtual and a "real" dedicated server can be manyfold. However, this saving is not worth the potential damage (both financial and reputational) that an incident on the hosting can cause.

2. Which software versions are installed on the server?

PHP below version 5.5 and MySQL below version 5.5 are recommended for an upgrade. Important: if the project's problems are primarily in the code, upgrading PHP will not speed up the site. But if the code is optimal and supports the transition, moving to PHP 5.5 or higher can speed up page generation by 20-30%, and PHP 7 (if the project is compatible with it) can double performance at the PHP level. Keep in mind: if your page is generated in 1000 ms, of which 800 ms are spent on suboptimal database queries, even a twofold PHP 7 speedup will save only 100 ms (the response time becomes 900 ms instead of 1000 ms). The time spent in the database does not change with the PHP version.

3. Is a PHP code optimizer installed?

At the moment the industry standard and recommended PHP optimizer is OPcache. Except in truly unusual cases, using OPcache is mandatory. Precompilation of PHP code, which the optimizer performs, cuts script processing time by 80-90%. The optimizer translates PHP code into bytecode, which is much closer to machine code. With the optimizer this translation becomes a one-time procedure; without it, PHP performs the translation on every user request.
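A minimal OPcache setup in php.ini might look like the sketch below; the values are illustrative and should be tuned to the size of the project's codebase (Bitrix projects contain tens of thousands of PHP files):

```ini
; Enable OPcache (the extension path may differ between builds)
zend_extension=opcache.so
opcache.enable=1
; Memory for compiled bytecode, in MB
opcache.memory_consumption=128
; Maximum number of cached scripts
opcache.max_accelerated_files=100000
; How often (in seconds) to re-check scripts for changes
opcache.revalidate_freq=60
```

After changing these values, restart PHP-FPM or Apache and confirm the cache hit rate with opcache_get_status().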

4. Is the XDebug debugger disabled?

XDebug is a popular module among developers, designed for debugging code. It is convenient and useful on development servers, but on a production machine XDebug roughly doubles code execution time. We see this situation quite often: XDebug is left enabled after a new site goes live, or remains installed after urgent debugging on the production server. XDebug must be turned off.

5. Checking table engines in MySQL

Although by 2016 MyISAM was already considered an outdated table "engine", we still see it on production sites. Perhaps the reason is that tables were transferred from a development database, where the engine type is not critical; perhaps nobody paid attention; or perhaps this engine was chosen out of historically formed habit. In any case, InnoDB and its derivatives should be used for typical tasks. Structurally, MyISAM is not far from the old, well-known dbf file format, which was never intended for industrial use in a multi-user environment. MyISAM suffers from table-level locks (one site request blocks all others), is very sensitive to MySQL crashes (an emergency shutdown can lead to critical data loss), and is hard to back up on the fly (backing up a large table locks the site). InnoDB has none of these problems, but it must be configured correctly.
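Finding the remaining MyISAM tables and converting them can be done directly in MySQL; a sketch (the database name `mydb` is a placeholder, and each ALTER rebuilds and locks the table, so run it in a low-traffic window):

```sql
-- List all MyISAM tables in the given database
SELECT TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND ENGINE = 'MyISAM';

-- Convert one table to InnoDB (rebuilds the table while running)
ALTER TABLE b_iblock_element ENGINE = InnoDB;
```

It is worth taking a backup before converting, since the rebuild rewrites the table files.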

6. Correct InnoDB configuration

First of all we mean the size of the buffer pool, MySQL's internal cache. If there is enough RAM and the data set is small, we recommend setting the buffer pool larger than the size of the data directory (thereby caching all the data). Percona Toolkit helps a lot with the audit; there is a review article in our blog. MySQL tuning is a voluminous topic, and we will cover it in more detail in one of the next articles.
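As an illustration, for a server with 16 GB of RAM and a data directory of about 6 GB, the relevant part of my.cnf might look like this; the values are assumptions to be adjusted per project:

```ini
[mysqld]
# Larger than the data directory, so all data fits in the cache
innodb_buffer_pool_size = 8G
# Flush the redo log once per second instead of on every commit:
# faster writes at the cost of losing up to one second of data on a crash
innodb_flush_log_at_trx_commit = 2
# A separate tablespace file per table simplifies space management
innodb_file_per_table = 1
```

The flush setting is a durability trade-off, so keep the default of 1 if losing even a second of transactions is unacceptable.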

7. Monitoring the server operation, checking the use of swap

No optimization will help if the server is overloaded by an influx of visitors, or if processes run out of memory and some of it is pushed into swap. Swap lives on the hard disk, which is in any case hundreds of times slower than RAM. Moreover, processes usually use RAM so actively that once it is swapped out to disk, the server's disk subsystem immediately hits its limit and the site stops responding. The server must have monitoring: at the moment of a slowdown it is important to understand what caused it, be it heavy disk usage, a shortage of CPU, or perhaps even exhausted network bandwidth.


Project code audit
After making sure that the site's problems are not hosting-related (or having solved those), we proceed to code analysis. It may often seem that a well-planned project cannot have code problems. In our experience, problems arise not from a single mistake but from a series of changes, each of which slows the site down slightly; together they add up to a large delay.

Before searching for problem areas in the code, you need to understand which tools you will use for the search, what information they provide, and where each of them applies.

For the initial analysis of slow pages we use the 1C-Bitrix performance monitor. We find the most visited of the slow-responding pages: accelerating them will noticeably reduce the load on the server. If you ignore page popularity and look only at response time, you may waste effort optimizing a page that is requested once a month and creates no real load.

The performance monitor is enabled as follows: in the site's administrative panel, open the "Settings" menu -> Performance Panel -> "Test performance" button. When data collection finishes, the "Development" tab will contain a table of all visited pages and their generation times.

After selecting the slow pages, on a separate server (whose capacity we know, and which we know is idle) we analyze their generation time in debugging mode, look at the generation time of individual components, and determine what exactly deserves attention in the code of these pages. In the first iterations we check how the pages behave with caching, and in the later ones we study how the site works without the cache. It is not uncommon for pages that open quite quickly from the cache to take 30-60 seconds without it. A stale cache on rarely visited pages may not seem scary; however, a search bot arriving at the catalog can seriously load the server by crawling exactly such pages.

Debugging mode is enabled directly on the site page: in the admin panel, find the "Debugging" item and select "Summary statistics" in the drop-down menu. If you need to check the operation of the cache, additionally tick "Detailed cache statistics".

In some cases, this information is not enough — when some parts of the code do not fall into debugging mode or when you want to study in more detail the structure of the nesting of PHP calls. In such cases, we connect the XHProf profiling module, which allows you to track the execution time of both the entire PHP script and nested functions.

Let's walk through debug mode and the XHProf utility using the example of a catalog page with sections. Problem: with caching enabled, the bitrix:catalog.section component executes slowly.

Step 1. Note that the component is not cached and runs extremely slowly. At the same time, database queries take roughly 16% of the component's execution time. Clearly there are problems in the component's implementation.

Step 2. To debug PHP, we use XHProf. We wrap the component we need in the XHProf utility code.
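A sketch of such a wrapper in the page template, assuming the XHProf extension and its helper classes (xhprof_lib) are installed; the include paths and the run namespace are placeholders:

```php
<?php
// Start profiling CPU and memory before the component call
xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);

$APPLICATION->IncludeComponent(
    "bitrix:catalog.section", ".default", $arParams
);

// Stop profiling and save the run for the XHProf web viewer
$xhprofData = xhprof_disable();
include_once "/path/to/xhprof_lib/utils/xhprof_lib.php";
include_once "/path/to/xhprof_lib/utils/xhprof_runs.php";
$runs = new XHProfRuns_Default();
$runId = $runs->save_run($xhprofData, "catalog_section");
// The saved run is then opened in the XHProf UI by its id
```

The wrapper is removed once profiling is done; leaving it on a production page adds its own overhead.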

Step 3. Reload the page so that XHProf collects the data we need.

Step 4. Open the page with the XHProf data.

Step 5. We find the call of the component we need, bitrix:catalog.section. Executing the bitrix:catalog.section component template takes 1.5 seconds, which is most of the generation time of the entire page.

Step 6. Going further down the call stack, we find a call to the result_modifier.php script. Going deeper still, we can see a custom function in which the queries are executed.

So this add-on in result_modifier.php is the cause of the slow generation. By fixing the code problem in this add-on, you can solve the slow loading of the catalog section page.

Let's now go through the most common problems that we encounter as a result of the 1C-Bitrix configuration audit.

 

1C-Bitrix configuration audit
Incorrect structural organization of information blocks

Perhaps this is the most common problem; it occurs in 90% of projects: all products are "stacked" into a single information block. Such an information block accumulates a large number of properties of all products, which leads to large result sets and slow imports. Take an abstract sports store with an assortment of clothing, skis and bicycles (each product type having its own properties). All the properties land in one information block: bicycles acquire the properties of skis and clothing (and, accordingly, vice versa, those product types acquire the properties of bicycles). Imagine how many "redundant" properties accumulate when the store has many product categories.


At the same time, we once faced the opposite problem: the project had a correct information block structure, but also many aggregating selections (new items, promotions, bestsellers). It turned out that instead of one "slow" query against a large information block, each request fired 50+ fast queries against small information blocks and then merged the results in code. Because of this the component as a whole became slower. So if a project needs such functionality, a fallback plan is required: create a separate table in the database (i.e. "denormalize" the information block structure), use external aggregation, or something similar.

"Self-made components"

There are often situations when programmers and web studios use third-party, "author's" components that ignore the built-in capabilities of the 1C-Bitrix platform. For example, self-written "smart filters" that do not use faceted indexes; this significantly increases the component's execution time. It also happens that programmers do not study the component they install in detail and overlook the errors and problems it brings. We recommend analyzing installed components carefully.

Problems with external services

This is not related to 1C-Bitrix code itself, but comes up with many clients. One way or another, many sites use external services for additional functionality: updating exchange rates, calculating delivery time and cost, partner APIs, and so on. When such an external service starts responding slowly, the site itself starts responding slowly. Unfortunately, many external services are always slow. A possible solution is to load the data periodically with a cron task, preferably during the project's least busy hours.
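For example, instead of querying a currency-rate API on every page view, a cron entry can refresh the data at night and store it locally; the script path and schedule below are illustrative:

```shell
# Refresh exchange rates every night at 03:15, outside peak hours;
# the script saves the result to the database or a local cache
15 3 * * * php -f /home/bitrix/www/local/tools/update_currency_rates.php
```

Page code then reads the locally stored value and never waits on the external service.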

Problems with faceted indexes

In most cases, not using a faceted index in a smart filter is simply an oversight (unless we are talking about the problem from the previous paragraph). The faceted index must be used: it significantly speeds up the smart filter and, accordingly, reduces CPU load and page loading time. When new properties are added, the faceted index needs rebuilding; however, rebuilding it in the daytime (at peak traffic) can create a vicious circle: while the faceted index is off, a fatal load builds up on the server, which then makes it impossible to recreate the facets. The only correct approach is to rebuild the index during minimum load.

Errors in using the 1C-Bitrix API

Fetching properties in a loop after getList when they could be selected in getList itself; sorting and filtering in PHP instead of using arFilter and arSort. The 1C-Bitrix API contains a large number of functions that are optimal in terms of server load; the developer needs to know them and apply them. A typical error, for example, is getting a list of products and then fetching their properties in a loop, when the properties can be obtained in the main request: after fetching 100 products we issue 100 extra database queries for their properties. Nor is it necessary to select products just to count them; the 1C-Bitrix API has dedicated functionality for that. Surprisingly, we very often see products being selected only to count how many of them have a certain property.
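Both points can be sketched with the old-core CIBlockElement::GetList API; the IBLOCK_ID and the property code COLOR here are placeholders:

```php
<?php
// Good: sort and filter on the database side, and select
// properties right in GetList via PROPERTY_* fields,
// instead of an extra query per element in a loop
$res = CIBlockElement::GetList(
    ["SORT" => "ASC"],                   // sorting via arOrder, not PHP
    ["IBLOCK_ID" => 5, "ACTIVE" => "Y"], // filtering via arFilter
    false,
    false,
    ["ID", "NAME", "PROPERTY_COLOR"]     // properties in the main query
);
while ($row = $res->Fetch()) {
    // $row["PROPERTY_COLOR_VALUE"] is already here, no extra query
}

// Counting elements without selecting them:
// with arGroupBy = [] GetList returns just the count
$cnt = CIBlockElement::GetList(
    [],
    ["IBLOCK_ID" => 5, "PROPERTY_COLOR" => "red"],
    []
);
```

One query with the right arSelect replaces the "100 products, 100 extra queries" pattern described above.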

Problems with menu generation

In large projects, generating menus (or catalog section lists) that show the number of items in each category/section becomes a problem: those very menus are built from that element count. To keep the project fast, we do not recommend using this feature. A detailed case was examined in the Bitrix developer community.

We also quite often see problems with customized menus. For example, the developers did not notice that the menu component does not cache the template output, and wrote a menu that includes the most popular product of each category. As a result, every time a user viewed the menu, database queries ran for each element (which, of course, slowed the project down). 1C-Bitrix itself implements such "custom" functionality through additions in the component's result_modifier.php.

Antivirus enabled

The web antivirus checks the site's pages for malicious content that could have been added as a result of hacking or page modification. Although this functionality is useful, we usually recommend disabling this module while leaving the proactive protection module enabled: disabling the web antivirus can speed up page loading by 100-200 ms.

Installed compression module

The compression module is intended for hosting where compression cannot be enabled at the web server level, mainly "ancient" shared hosting. If you can enable gzip compression at the web server level (and in most cases you can), we recommend disabling this module.

Agents are executed on hits

Another piece of historical functionality that is very common in clients' projects. "Agents" are regularly executed tasks in 1C-Bitrix; they can run either via the familiar cron, or inside code triggered by a user request, a mode called "agents on hits". In this mode 1C-Bitrix checks on every user request whether it is time to run a scheduled procedure (for example, a mailing), and if the time has come, it starts performing that task. The problem is that the user does not receive the page for as long as the task runs, and if a script execution time limit is set in the web server settings, such an agent will never finish and the mail will never leave. "Agents on hits" exist for hosting where you cannot schedule regular agent execution via cron; very few such hosts remain, and we recommend that everyone abandon executing agents on hits. You can read more about moving agents to cron in the article by Nikolai Ryzhonin and the post by Anton Dolganin. Incidentally, when we power up the bench where we check project code, the first step is to check in MySQL whether agents on hits are enabled (and whether visiting the site would make us execute some scheduled task).
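The usual approach is to disable agents on hits and run the bundled cron_events.php script from cron; the document root below is the default for Bitrix Env and may differ on your server, and the product settings also need the corresponding "agents on cron" switch, as described in the articles mentioned above:

```shell
# Run Bitrix agents (and send queued mail) once a minute from cron
* * * * * php -f /home/bitrix/www/bitrix/modules/main/tools/cron_events.php >/dev/null 2>&1
```

With this in place, scheduled tasks no longer run inside user requests and page response time stops depending on them.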

Incorrect use of caching

In many audits we encounter a situation where caching was disabled for debugging and never re-enabled: do not forget to turn it back on. There was also a case where, during an audit, we discovered that caching of some components on a page did not work. The subtlety was that after the page was opened the cache file was created, but by the next opening it had already been invalidated. The cause was a product view counter in the component template, which wrote to the information block on every update and thereby reset the cache.

On the other hand, in a number of self-written components we have seen too large a data set loaded into the cache. The 1C-Bitrix cache stores a serialized array, and when the size of a single cached entity grows beyond several hundred kilobytes (and there was a lot of that), the overhead of reading and unserializing the cache outweighs the savings from not querying the database.

Composite Cache Issues

Composite cache is genuinely a good and convenient technology, but it is striking how often it is used incorrectly. The simplest and most common mistake is leaving the cache size limit at its default value of 100 megabytes, which is very small.

Often, during development, a new small component is added that does not support the composite mode, which makes the composite "fly off" the pages. You need to make sure it is preserved.

The composite cache itself can be served not only from files or memcached, but also as static files at the nginx level, although this is used very rarely. Read more about configuring nginx to work with the composite cache on the website of the Bitrix developer course.


Checking the client part
Images without optimization

The most common problem is the absence of image scaling, as well as the use of uncompressed images. It is very simple: the less images weigh, the faster the page loads. Developers know this well and do not make such mistakes, but clients and content managers very often upload huge, heavy images. You either need to plug in automatic services and modules, or teach users to optimize images manually.
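On the Bitrix side, scaled copies can be generated with the standard CFile::ResizeImageGet call instead of outputting the original upload; a sketch for a component template (the dimensions are placeholders):

```php
<?php
// Produce (and cache on disk) a 300x300 proportional thumbnail
$thumb = CFile::ResizeImageGet(
    $arItem["DETAIL_PICTURE"],          // file ID from the element
    ["width" => 300, "height" => 300],
    BX_RESIZE_IMAGE_PROPORTIONAL,
    true                                 // fill width/height in the result
);
?>
<img src="<?= $thumb["src"] ?>" width="<?= $thumb["width"] ?>"
     height="<?= $thumb["height"] ?>" alt="">
```

The resized copy is created once and reused, so the browser never downloads the multi-megabyte original.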

Problems with JS/CSS files

Two problems occur regularly here: connecting JS/CSS files in a way that is non-standard for Bitrix, and including JS/CSS files that are not actually used. There is a post in the developer community on this topic.

The most important point: an incorrect connection rules out automatic merging of JS/CSS files, and also rules out the automatically enforced correct loading order (CSS files first, then JS files).
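The standard way to attach assets in Bitrix, so that the platform can merge and order them, is the Asset API; the file paths below are placeholders:

```php
<?php
use Bitrix\Main\Page\Asset;

// Registered assets participate in automatic merging and ordering
Asset::getInstance()->addCss(SITE_TEMPLATE_PATH . "/css/catalog.css");
Asset::getInstance()->addJs(SITE_TEMPLATE_PATH . "/js/catalog.js");

// Inside a component template, the equivalent calls are
// $this->addExternalCss(...) and $this->addExternalJs(...)
```

Files inserted with raw script/link tags bypass this machinery and break both merging and ordering.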

Several times we have run into an excess of JS: literally every catalog element had an event handler attached by a separate script. When the "move JS to the end of the page" option was enabled, this created a heavy load on the server.

In this regard, we would like to remind developers once again about minifying scripts and, where possible, loading them asynchronously.

Static output by Apache Web server

Surprisingly, we still quite often see lightweight static files served not through a frontend such as nginx or a similar lightweight web server, but through Apache, which is also serving the dynamic pages (especially with Bitrix Env). Each such request costs extra RAM (Apache is much heavier) and CPU. We strongly recommend making sure that static files are served by nginx.
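A sketch of an nginx frontend that serves static files directly from disk and proxies everything else to Apache; the document root and the upstream address are assumptions:

```nginx
# Serve static assets straight from disk, bypassing Apache
location ~* \.(css|js|gif|jpe?g|png|svg|woff2?|ico)$ {
    root /home/bitrix/www;
    expires 30d;            # let browsers cache static files
    access_log off;
}

# Everything else goes to the Apache backend with PHP
location / {
    proxy_pass http://127.0.0.1:8888;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```

With this split, Apache workers are spent only on dynamic pages, not on every icon and stylesheet.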

Lack of compression at the web server level

Everything is simple here: static files should be served with gzip compression, and unfortunately this is periodically overlooked. To verify that a static file is served compressed, you can, for example, request it with `curl -sI -H "Accept-Encoding: gzip"` and check for the `Content-Encoding: gzip` response header.
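Enabling gzip in nginx takes a few lines in the http block; the type list below is a common baseline rather than an exhaustive one:

```nginx
# Compress text-based responses; binary formats such as images are skipped
gzip on;
gzip_comp_level 5;
gzip_min_length 1024;     # do not bother compressing tiny responses
gzip_types text/css application/javascript application/json
           image/svg+xml text/plain text/xml;
# text/html is always compressed when gzip is on
```

Higher compression levels save little extra bandwidth but cost noticeably more CPU, so mid-range values are the usual choice.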

 

Summing up the results
The importance of site loading speed keeps growing: search engines penalize slow sites in their rankings, and users vote with their wallets by leaving for nimbler competitors. We want as many good, fast sites as possible. Use the resources of your hosting and the 1C-Bitrix platform competently to build a project of a quality you would be proud to put in a portfolio. We hope our recommendations help you solve your project's performance problems. And do not forget to share the article on social networks if you found it useful.


