Wednesday, January 04, 2017

Will Safari follow Internet Explorer and die?

Recently I've invested some of my spare time in building a new mobile application.
I started building it with standard web technologies, planning to wrap it with Cordova later on for cross-platform support and publish it as a native app on the corresponding app stores.

I decided to stick to web standards as much as possible, to let the different native mobile WebViews render the app correctly. I also decided that the MVP of the application should be good enough to live inside the browser, and not to wrap anything with Cordova until I really had to.

So I started with a simple Web App (a Single Page App).
Pretty soon I found it was actually working quite well, and development time was shorter than expected. I got the MVP working with the basic functionality: viewing content, publishing new content and authentication. Then I wanted to improve it. I wanted the web app to work smoothly when there's no network (or a really bad one), to run in full screen (without the browser chrome being visible), to show notifications, to respond really fast to user interactions, and finally to be "installable" to the app drawer so it can be launched outside the browser, like any other native application.

This is where I came across ideas such as Mobile-First, Offline-First, Service Workers and the recent publicly available implementations of the Progressive Web Apps concept, which brings everything together.

In a nutshell, Progressive Web Apps let web developers build state-of-the-art mobile applications based on pure web technologies. The concepts behind PWA have been around for over two years now (which is a lifetime in our ultra-fast changing world of web and mobile) and are being standardized as part of the W3C. Thankfully they have been implemented (and are still being worked on) by the Mozilla and Chrome teams and recently, as already mentioned, became part of our reality.

So what does this prologue have to do with the post title? The answer is that while PWA has been implemented for some time in Chrome and Firefox (and partially in Opera), Safari has never published any reference or roadmap for supporting this huge change to the way we are going to consume tomorrow's mobile apps. This practically means they are NOT going to support PWA technologies in the near future, which in turn means they will not enable any of the really important features such as Web Notifications, offline support and improved responsiveness via Service Workers, improved Web Workers, installable web apps and others...

Well, we can all agree IE lost the browser war because Microsoft was slow to respond to the improvements and changes introduced by Chrome, while IE itself got really odd and non-standardized. It seems to me that Safari declining to support PWA on iOS is another serious signal that Safari is going the wrong way, becoming outdated and odd.

Safari for platforms other than iOS and Mac OS has already ceased to exist, so I wonder: are we about to see Chrome or Firefox take on the native Apple browser on its own playground, the same way it happened to Internet Explorer on Windows ten years ago?

And just to be fair, Chrome's motivation is clear: pushing stronger web technologies is something Google must do to make ChromeOS and Chrome-based devices and apps robust. PWA just happens to also run on Android and other operating systems as a secondary benefit, IMHO.

Some other related posts I found after publishing my post:
Safari is the new IE
Safari is the new IE 2: Revenge of the Linkbait

Wednesday, May 13, 2015

SAP Blogs performance

I've recently published a new post on the SCN (SAP Community Network) about the new blogging platform we're building there and how it is going to be super fast.

Read on at:

If you're interested in how we do that, read some of my previous posts here and wait for future posts, as I'll share more of the improvements we're making to build our social blogging platform based on Wordpress.

Thursday, April 02, 2015

Does HHVM really improve performance in real life?

HHVM, in one sentence, is a relatively new implementation of the PHP runtime, built by Facebook as an open source project and focused on improving PHP performance.

For about half a year now (since somewhere around late 2014), Wordpress has been fully supported by the HHVM runtime.

There are plenty of posts around the net claiming that by simply switching from the original PHP runtime to HHVM you can get pages loading 5 times faster (an 80% reduction in response times) and even up to 40 times faster (a 97.5% reduction).

As I'm seeking to improve overall response times for our new Wordpress-based platform, such improvement figures sounded like I must give it a try.

I added a new application server to our load testing environment, installed HHVM by following the installation guidelines from Building-and-installing-HHVM-on-RHEL-7, made a few tweaks and got it working with Apache via FastCGI.

With that in place I started to manually browse around the Wordpress system to get a feel for any performance improvement. Using the Chrome developer tools I noticed page response times dropped from around 400ms to 300ms for viewing different blog posts - a 25% reduction (about 1.33X faster).
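A quick note on the arithmetic, since these figures get quoted both as "X times faster" and as "% reduction". Here is the conversion as a tiny Python helper (just a sketch of the math, nothing HHVM-specific):

```python
def speedup(before_ms, after_ms):
    """Express an improvement both as a % reduction and as a speedup factor."""
    reduction_pct = (before_ms - after_ms) / before_ms * 100
    factor = before_ms / after_ms
    return round(reduction_pct, 1), round(factor, 2)

# The manual browsing measurement above: 400ms down to 300ms
print(speedup(400, 300))  # (25.0, 1.33) - a 25% reduction is only ~1.33X faster
# The "5X faster" claim really does mean an 80% reduction:
print(speedup(500, 100))  # (80.0, 5.0)
```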

After validating that the Wordpress application worked fine and resolving several issues, I decided it was time to start load testing the cluster.

At that point I had a cluster of 3 "normal" PHP app servers (PHP 5.5 with opcache enabled) and 1 HHVM-based app server, all in the same network, with the same sizing, connected to the same DB and running the same application with configurations as close as possible.

I started the load test while monitoring application response times, comparing the 3 normal PHP-based app servers with the HHVM-based one (shown as app08 below).
Here are some figures for a few Wordpress activities I've been looking into:

If you've been reading my blogs for a while, you know that my load tests basically hit the system with production-like workloads, meaning the entire setup is not under stress and doesn't suffer from any resource starvation; rather it's kept pretty idle, using no more than 20-30% CPU across the different components (app/db servers).

What is fairly easy to see here is that even with the load test running, the performance is a lot like what I saw with my manual, single-user browsing around the system.
Here is a summary:
  • Viewing a blog post improved from 475ms to 300ms (37% reduction / 1.6X times faster).
  • Viewing the homepage improved from 850ms to 600ms (30% reduction / 1.4X times faster).
  • Login improved from ~350ms to ~220ms (37% reduction / 1.6X times faster).
  • Posting a comment degraded from 500ms to 5500ms (a 1000% increase / 11X slower).
    I've decided to ignore this for now, as this was only an evaluation of HHVM.
    I'm not sure what caused this drop in performance.
Basically HHVM was tested as-is, without any other optimization or configuration changes.
I found that the version I used already had JIT enabled, with the default of JITing code after a few executions (I cannot recall exactly, but I think it was 10 or 12 runs).

Looking back at the original numbers which led me to give this a try, the result was somewhat disappointing. I've also been looking at trying HHVM precompilation, but I'm not sure I got it right, or I just didn't see any significant improvement. Maybe that's obsolete? Anyone know?

At this point we decided not to use HHVM yet, as we are so close to the initial go-live, and using HHVM would also force us to make some infrastructure changes to comply with the HHVM prerequisites - but at least we know what we are missing.

Sunday, March 22, 2015

Wordpress load testing framework

Following my last two posts, about how to create realistic load tests and about Wordpress performance, I guess the next post is right on time.

In the last post I shared that we are building a new Wordpress platform and that we continuously load test and improve its performance. In this post I'll share the resources and the concepts behind how we do that; I hope some people will find it useful and will contribute back to improve it.

About a week ago I created a public github repository. In this repo are all the resources required for load testing a Wordpress site.

It is basically focused on the features we use on our own website: a high rate of authenticated users, viewing blog posts (including AJAX calls interacting with the WP popular posts plugin for view counts and with Shariff for social figures), searching content, publishing new posts, posting comments, liking and unliking posts and comments (interacting with the Like posts and comments plugin) and consuming RSS feeds. Anyone can extend this skeleton to cover more features in their load tests, to reflect their own realistic scenario or use case.

One of the most important aspects of the JMeter script here is that it is modular. Each piece of interaction with a feature is broken into a separate JMeter script, which is then loaded by the load test script and placed in the right place in the user flow. This allows each module to be changed while staying focused on a specific feature, and allows re-use of that module in the single-user script - which is used for baselining performance or just as a functional test.

Other resources in this repository are a few drop-in plugins for Wordpress to allow:

  1. User impersonation - in case you don't want to create test users, or just because you don't want to have all your users' passwords in a plain text file.
  2. Data export - which you can access with the third JMeter script in this repo - to extract all the objects required for interaction during the performance test, such as existing users, blogs, posts, comments and tags.
The idea is that these tests should become part of your continuous integration flow and provide you with up-to-date performance figures.

One of the most popular approaches says that tests should run against an isolated "playground" - a predefined dataset which is maintained (usually automatically) to have predefined users and data for your automation to test with, and which is deleted or restored to its pre-test state right after each test finishes.
While this concept may work for functional testing, I think it is the wrong approach for load testing: you either over-populate or over-interact with a specific set of items in the application, so over time they get overloaded and skew the test results. You may clean them up with every load test, but that way you lose any natural / realistic growth of data in your load testing environment.

The right approach, in my opinion, is that your load tests should generate close-to-realistic workloads on the system; thus, they should provide at least realistic growth and distribution of data along the way, so once you hit a bottleneck you'll see it on your testing system before you hit it on the real system. This is why this framework doesn't provide any mechanism for pre-populating the target Wordpress system with dummy content. It creates content along the way, as you continuously load test the system. This framework should be agile enough to use anything you already have in the system to keep producing realistic load tests.

A final note: I hope this github organization will become home to load testing setups for other popular frameworks, such as Drupal, Django, Joomla and others.

Sunday, March 08, 2015

Wordpress Performance

4 months to go-live, and more than 6 months after we started working on the new Wordpress platform for our Social Platform, viewing a blog post takes about 400ms (median during our realistic load tests).
This is quite fast compared to the Internet overall, but somewhat slower than other popular WP-based websites such as TechCrunch or Time, where loading times are about 250ms (TTFB to be specific) - and that includes network latency for those websites, which is almost zero when I measure our new platform, so the difference is even bigger.

Looking back at the very first days (on the left side), when the project had just started - the default theme, no code changes or plugins, and almost no content in the DB - a blog post TTFB was about 150ms (again during a realistic load test). That means we have added about 250ms, a ~167% increase in response time. That's bad!

Looking at the homepage loading times, I see that we started at about 170ms and are at 650ms today - almost 4 times slower (you can see we had a major slowdown - about 2 seconds - for a while, until we resolved it around test number 145):

There are few things to mention at this point which were done or happened over that period of time:
  1. We have changed the theme.
  2. We have installed / removed countless WP plugins and upgraded a couple of WP versions, including switching from WP 3.x to 4.x.
  3. We have added a Load Balancer and additional servers.
  4. We have switched to HTTPS and changed Apache and PHP versions and configuration.
  5. We have improved the coverage of the load test with more types of activities.
  6. We have imported content into the system from our existing blogging platform.
  7. Every night our realistic load test is actually creating more and more content, as we expect it to happen in real life.
All of these may have changed - and most likely did change - our application performance and the observed response times.

What do we do to measure and compare our application performance?

  1. We use fully virtualized / cloud servers. This gives us a 1:1 relation in sizing between our load testing environment, which we call staging, and our production one.
  2. We use fully automated construction and configuration of these machines, so the staging and production systems are always 1:1 in the configuration of all infrastructure components.
  3. We have production-like content volume and distribution in our staging system.
  4. We have built a comprehensive load test framework for Wordpress, which allows us to define the realistic workload (as well as other workloads, such as a stress test) and use it to hit our most recent application builds on a regular basis (at least once a day, depending on the amount of changes that day).
So some of you may notice that we follow the Continuous Delivery motto - fail fast, fix fast. We use a CI framework to always get a valid build version onto our staging system and test it with predefined and (almost) constant workloads, to verify we will be able to use it on our production system.

There are lots of things we can do to improve performance, and I'll post updates along the way on the different improvements we have tried and rejected, or tried and implemented, on the way to the grail: sub-250ms response times.

Monday, December 22, 2014

How to create a realistic load test

For years I have come across many existing load tests, and many people trying to build new ones, and I find that they build unrealistic load tests which generate unrealistic load on the target application or System Under Test (SUT).

The problem

I come across load tests which generate either really low or really high loads on the SUT, neither of which is what the performance test was planned to do.
With too low a load on the system, you are not putting enough pressure on it, so you won't see the problems you are looking for before they become visible in production, to your real users.
With too high a load on the system, you will see problems which may or may not occur in real life. So if you are in a hurry, or need to clearly prioritize which issue is a blocker and which can be dealt with later, you'll have a problem.

An even worse type of issue I see with load tests is when they generate an unexpected load on the SUT. Seriously!
People come up with complex and wrong scenarios, mostly because the scenario is based on a 'business' view of what the load test should look like. They try to build a scenario based on business perspective and figures such as:
We have a potential of 100,000 users per day, let's build a load test simulating their behavior. 10% will create content, 20% will like content and 70% will view content.
Now the poor guy who needs to build that load test says, "OK, I will build a scenario with three types of users. One user type (10,000 users in total) logs in, views some content, creates some content and logs out. The second type (20,000 users in total) logs in, views a piece of content, likes it and logs out. The third type (70,000 users in total) logs in, views content and logs out."

Makes sense, right? Well, hell no!
What you end up with are two main problems:
  1. You have a load test script with 100,000 REAL threads (let me explain below why that's a bad idea).
  2. You have a load test which generates an unexpected load - in other words, you have no idea what kind of throughput is actually generated against the SUT. I'll explain that below too.

Real 100,000 threads for load testing

Unless you are working for Facebook, Twitter or Google, there is no way you need 100,000 real clients (threads) hitting your system at once. Do you have 100,000 requests per second (real requests for content/actions - I'm not talking about hits which include static resources)? Probably not. That means you don't need 100,000 concurrent / parallel threads to generate the required load on your system.
Not to mention that you may end up with a complex load test setup and an unnecessary number of load generators, as well as session timeout issues, since most of your threads / virtual users will wait so long that the corresponding application session may time out before their next iteration takes place.

Unexpected throughput

A summary report claiming that the system supports 100,000 users doesn't mean anything.
Say someone provided you with a load test results report, for the scenario described above, with 10,000 users which created new content, 20,000 users which liked content and 70,000 users which viewed content, over 8 hours. What does that mean? Well, not much, really.

Why not? Because you don't know what the generated throughput was. Was it generating 100 likes per second or 1 per second? How many content views did we have per second? Was the generated traffic constant, or did it produce lots of spikes where some intervals had 10 times more load than others?

Usually with such an approach you will not have good control over those figures, as you will try to mimic a real user flow, with think times between interactions which you believe reflect realistic human behavior. While your behavioral assumptions may be true or completely wrong, the fact is that this kind of realistic behavior will not create a realistic load on the system - and that's what you care about: "Will my system handle the load?"

A more scientific approach would be to build the load test to generate configurable throughput on different types of features or activities in the system.
For example you would build the following load test scenario, which is easier to monitor and measure:
100 content views per second
10 logins per second
10 create content per second
10 likes per second
1 logout per second

In total this will generate 131 requests per second (depending on the actual application you may end up with more requests, as you may need additional requests to load the content editor before you actually submit/publish, or if you have AJAX calls with every content view then you should generate those too).
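The scenario above can be written down as plain data, which also makes explicit the pacing each activity generator must keep (a sketch of the arithmetic only; in JMeter you would typically enforce the rates with a Constant Throughput Timer per activity):

```python
# Target throughput per activity (requests per second), as in the scenario above
targets = {
    "content view": 100,
    "login": 10,
    "create content": 10,
    "like": 10,
    "logout": 1,
}

# Total load the test will put on the SUT
total_rps = sum(targets.values())
print(total_rps)  # 131

# Inter-arrival time each activity generator must keep to hold its rate
pacing = {name: 1.0 / rps for name, rps in targets.items()}
print(pacing["content view"])  # 0.01 - one content view every 10ms
```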

With such approach you are generating constant and controlled throughput on your SUT. You can configure each type of activity to generate a different throughput to reflect realistic usage patterns.


In terms of the number of users - well, usually it doesn't matter, as the only thing we care about from the SUT's perspective is most likely the number of sessions, which are bound to memory. If you want 100,000 sessions alive at any given time, you can also take care of that with this approach and cover the expected memory usage. To do that, you need to generate enough logins or requests without an active session, both of which may create new sessions on the SUT.
Assuming a session timeout of 1 hour in the SUT, you should generate 100,000 session-less requests per hour, or 100,000 / 60 / 60 = 27.8 per second. This means it is enough for only 21% of the total requests to trigger a new session on the server side, and you will generate the required number of sessions (21% of 131 requests per second is 27.5, but you get the point).
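The session arithmetic above, as a short Python sketch using the figures from this post:

```python
target_sessions = 100_000    # concurrent sessions we want alive on the SUT
session_timeout_s = 60 * 60  # 1 hour session timeout
total_rps = 131              # total load from the scenario above

# To keep N sessions alive with a timeout of T seconds,
# you must create N/T new sessions per second
new_sessions_per_s = target_sessions / session_timeout_s
print(round(new_sessions_per_s, 1))  # 27.8

# Fraction of total traffic that must arrive without an active session
fraction = new_sessions_per_s / total_rps
print(round(fraction * 100))  # 21 (percent)
```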

Bottom Line

So to sum things up, building a realistic load test scenario doesn't mean you should have realistic behavior from a single end user's perspective (i.e. a realistic flow through the SUT). Rather, you should build the load test so that each thread performs the absolute minimal set of steps needed to generate the required realistic workload / throughput. For example, a user cannot like content without first logging in, so you'll need to consider that. But you don't need to create a user that does everything, or interacts with a set of features like a real user would, because then it is hard to control the generated throughput and you end up with unexpected load being generated against the SUT.

So how many threads do you actually need in your load testing tool to generate this required load? This is fairly simple. Assuming you set a response timeout of 10 seconds (after which the load testing tool considers the request a failure and allows the thread to continue), you need to guarantee enough threads to generate 131 requests per second while some requests take up to 10 seconds to finish. The calculation: in the worst case a single thread can generate one request per 10 seconds, or 6 requests per minute. We need 1310 requests per 10 seconds, so we need up to 1310 threads in the worst case to guarantee the required load, no matter how responsive or slow the SUT gets. (The formula: required RPS * maximal response time = required threads.)
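The worst-case formula above is trivial to encode (a sketch, using the 131 RPS / 10-second timeout figures from this post):

```python
import math

def worst_case_threads(required_rps, max_response_time_s):
    """Threads needed to sustain the target rate even if every request
    takes as long as the response timeout allows."""
    return math.ceil(required_rps * max_response_time_s)

print(worst_case_threads(131, 10))  # 1310
```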

1310 threads instead of 100,000 threads is much easier to work with, isn't it?

Thursday, July 24, 2014

Why all load testing tools are the same

Over the last few years I've noticed more and more new load testing services and tools, but especially cloud-based SaaS solutions, as part of the movement towards "everything as a service".

There are so many to choose from: Load Impact, BLITZ.IO, Load Storm, Load Focus, SOASTA, LoadUI, Locust and many more. Each has its pros and cons, but I'm getting really mad about the fact that they are just more of the same.

Why do I say they are all more of the same? You need to go back to the early 90's, when an Israeli company was one of the first to come out with a commercial load testing tool for the web and a few other protocols. That tool was (Win/)LoadRunner, and the company was named Mercury (later purchased by HP).
Since the early 90's the technology of load testing has not changed at all: it was, and still is, about simply sending HTTP requests and measuring the responses (I'm focusing on web/HTTP load testing for simplicity).
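To make that concrete, here is a minimal Python sketch of what every such tool does at its core - send an HTTP request, time it, record the result. It runs against a throwaway local server so the example is self-contained:

```python
import http.server
import threading
import time
import urllib.request

# Throwaway local server on a random free port, just so the sketch is runnable
server = http.server.HTTPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

# The essence of every load testing tool: send request, measure response
samples = []
for _ in range(5):
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
        status = resp.status
    samples.append((status, time.perf_counter() - start))

server.shutdown()
print(samples)  # (status code, elapsed seconds) per request
```

A real tool adds thread pools, pacing, parameter correlation and reporting on top, but the core loop is exactly this.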

Now, over time, Mercury created good stuff to make it easier to get the work done, mostly auto-correlation of parameters and predefined macros for known applications and platforms where developing the load testing scripts was complicated, due to application complexity and the massive number of HTTP parameters being sent back and forth between the browser and the server.
(I'm ignoring, for the scope of this post, all the other good stuff they've done with systems monitoring, application insight monitoring aka diagnostics, and bringing several perspectives into a single powerful analysis tool - which was, in my opinion, a technological breakthrough.)

At some point, somewhere around 2010, when AJAX was getting popular, generating load testing scripts got more complex, and one way to deal with it was to change the way load testing had been done until that point. If until then it was all about sending HTTP requests and measuring responses (with some parsing), the change was to run kind-of-real browsers in memory, without a UI: the load testing tool now starts and manages UI-less browsers (imagine a headless Selenium browser running multiple times). This makes it easy to deal with frequent changes in the application UI and API - changes to AJAX calls simply require no change to the load testing scripts, as the scripts now interact at the UI level. Sounds great, right? Well...

Basically the main downside of this technology is that while it works great for functional testing, where a single user (or maybe a few) runs from a host to test the application, it is extremely memory- and CPU-intensive to run a real browser with MANY windows. In practice, when it comes to really loading your web application with more than tens or a few hundred users, this approach is simply too expensive. Indeed memory and CPU are getting cheaper in general, and maybe in a few short years we will get there, but you still need about 100MB (gross) of memory for every virtual user, depending on the complexity of the client side (css/js). So for intranet / internal web systems it might work: a few load generator hosts with 8-16GB RAM each can generate load for about 80-160 virtual users, which might be enough to load test such internal systems.

When it comes to load testing big, Internet-facing applications, you would probably want to test with thousands of users. Based on the gross numbers above, 10,000 concurrent users would require about 1000GB of RAM. Based on Amazon pricing this would cost over $30 per hour (based on the c3.8xlarge 60GB instance, which is about $1.7 per hour - I'm rounding up to $2 as there are other expenses like network and disk usage).
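For the record, the back-of-the-envelope arithmetic behind that figure, using the post's own numbers (the exact hourly cost depends on how you round the instance price):

```python
import math

mem_per_user_gb = 0.1    # ~100MB gross per headless-browser virtual user
users = 10_000
instance_mem_gb = 60     # c3.8xlarge
instance_cost_hr = 2.0   # ~$1.7/hr rounded up for network/disk expenses

ram_needed_gb = users * mem_per_user_gb
instances = math.ceil(ram_needed_gb / instance_mem_gb)
cost_per_hour = instances * instance_cost_hr
print(ram_needed_gb, instances, cost_per_hour)
# 1000GB of RAM across 17 instances, ~$34/hour with the rounded-up price
```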

Now, you never run one load test, wrap things up and never do it again. If you are serious, you will run load tests regularly, at least once before every production release. We are talking about hours of load tests every month.
With one of my clients, we run 8 hours of load tests every day and a 24-hour load test every weekend. At a similar scale to the example above, it would cost more than $9000 per month to run these load tests with real / headless browsers (and that's assuming we use Amazon On-Demand instances and shut them down between load tests).

Even if you have such a budget for load testing infrastructure alone, you should also consider that driving a headless browser is still considered inaccurate and inconsistent. The timing mechanism is still immature, and virtual browsers may affect each other due to spikes in CPU consumption - so in my experience, running the same test can give you pretty different results.

So now I come back to why I say all load testing solutions are more of the same.
All the services and tools I've mentioned above focus on running load tests with the very first technology that Mercury came up with in the early 90's - not running real browsers, which is still immature and expensive, but scripts that define what kind of HTTP requests to hit the system with, and measuring the responses. That's all. They all provide different scripting languages or ways to control the test: some let you work with a UI and modules to create the required script, some with XML, some with a coding style in either a proprietary or a widespread scripting engine - but the bottom line is that they all ask you for the same data to get your load test running.

Go and try a few of them; you will see and understand that each solution wraps the same idea in a different UI. They all work the same, and if they all work the same, I urge you to use the ones based on JMeter. Why? Because it is the most popular load testing tool in the world, it is free and open source and, most importantly, it has the biggest set of features, which keeps growing with regular releases backed by an awesome core team of committers pushing it forward. If you go with another proprietary scripting engine, you will soon hit show-stoppers blocking you from executing a script that should interact with your application, due to missing functionality, as all the other engines are trying to keep up with JMeter.

So why am I mad about all those tools and competitors? They all try to re-invent the same wheel. Not a smarter one, not a better shape, nor a better material. Just re-creating the same old stuff that was invented about a quarter of a century ago!!

A last point: 5 years ago the idea of driving real/headless browsers was really promising, but so far no real solution is doing it successfully.
I'll write a technical post on this topic in the future to show the timing issues with this approach; currently I have in mind showing results from a Selenium browser driven by JMeter via the JMeter Plugins, but any other ideas or pointers are welcome.

Saturday, October 12, 2013

Is JMeter the most popular performance testing tool?

About a year and a half ago, in mid-2012, I gave a talk at the Israeli QA Conference, and part of the session was about the increasing popularity of JMeter (and the decreasing popularity of the old leader, HP Load Runner). Back then I said I expected that by the end of 2012 the most popular performance testing tool would be JMeter (JMeter introduction presentation for an Israeli audience).

Well, it seems I was right. It happened even faster!
Looking at job post popularity by keyword, there are more job openings now with JMeter than with Load Runner - actually, there have been since mid-2012.

(Looking back at the presentation from mid-2012, I only had figures from January 2012, which showed this was about to happen - while in fact it already had.)

As I try to be active on the JMeter users list, as well as monitoring the developers list, I can see this is not out of the blue: the maintainers are doing excellent work with important new features, bug fixes, adoption of new technologies and so on, while taking community comments and change requests seriously.

One of the best things about the JMeter users list is that it is used not only for discussions about the tool, but also as a place to discuss methodologies, best practices and how-to's regarding proper load testing, integration with continuous delivery and other conceptual topics.

Aside from the properly maintained project at Apache, there are other projects, both free open source and commercial, which complement the tool, including the most popular free open source and must-have JMeter Plugins project (fka JP@GC, which has grown crazily over the last 6 months by merging other smaller JMeter plugin projects) and the commercial Ubik Load Pack (which, I must admit, I have never used).

There are also JMeter cloud services like, (fka and

Two important notes I have with the trending of popularity:

  1. The popularity growth of JMeter seems to have been on hold over the last couple of years.
    I am unsure whether that's because there is a third major player I am not familiar with, or because mankind has found another solution where performance testing is not required. It might also be related to the overall number of job openings in the USA (which this graph is based on), which feels most likely to me. It is also known that in bad financial times the "fat" is cut, and quality is easier to cut than the developers who actually develop the software. Time will tell.
  2. The second important note is that the comparison between Load Runner and JMeter is inaccurate, as the comparison should have been with the term "loadrunner" rather than "load runner". It seems the right (or more popular) term is the non-spaced "loadrunner":
    So overall loadrunner is still more popular (especially if you add the "load runner" job openings), but the shrinking trend in Load Runner's popularity cannot be ignored.
Let's see how it looks in a few months.

Tuesday, April 23, 2013

Gmail - free Giga e-mail account!!

A long time ago I realized that a lot of stuff is stored in my Gmail account - stuff which I need - so a backup is required here...

The recent announcement of shutting down Google Reader finally pushed me to take action now and back up my Gmail account.

There are lots of guides on how to back up Gmail accounts, but the interesting thing is that when I looked for the oldest email in my account, I found that the first one is the invitation I sent to my friend. Yeah, what crazy days... Back then you couldn't simply sign up for Gmail - you had to be invited by someone you knew, and then you would get 20 invitations to invite your friends...

Shmuel Krakower has invited you to open a free Gmail account. The invitation will expire in three weeks and can only be used to set up one account. To accept this invitation and register for your account, visit . Once you create your account, Shmuel Krakower will be notified with your new address so you can stay in touch with Gmail! If you haven't already heard about Gmail, it's a new search-based webmail service that offers:
- 1,000 megabytes (one gigabyte) of free storage
- Built-in Google search that instantly finds any message you want
- Automatic arrangement of messages and related replies into "conversations"
- Text ads and related pages that are relevant to the content of your messages
Gmail is still in an early stage of development. But if you set up an account, you'll be able to keep it even after we make Gmail more widely available. We might also ask for your comments and suggestions periodically and we appreciate your help in making Gmail even better.
Thanks,
The Gmail Team
To learn more about Gmail before registering, visit:

Sunday, April 14, 2013

Chrome dev tools missing data on network tab

While analyzing page loading times I noticed the following strange timing:

No matter how you sum it up, it isn't going to reach 47.84 seconds!
What's wrong here? I don't know - this isn't the first time I've seen such strange timings... any ideas?