Part Two: Driving Performance from the Left Side of the SDLC Line

Greetings all and happy April. Black Friday is just a little over 6 months away. Have you started planning how you will ensure your website is tuned for optimal performance with the best chance to drive revenue? I can pretty much guarantee that your site revenue targets are higher this year than last year. No pressure or anything.

So, with the above in mind, welcome to part two of my three-part series on getting ready for the big event. If you recall from part one, the main purpose of this series is to discuss tasks on the left side of the magical Software Development Life Cycle line that address performance before the code base is delivered to the QA group for larger scale testing.

First, I am going to go out on a limb and assume the business has provided you with a comprehensive set of non-functional requirements (NFRs). That gives you a well-defined set of performance goals to design and code your site against on the way to performance nirvana. What if you do not have a documented set of NFRs? No need to panic; I'll also discuss working around this issue.

Unless you are a brand-new retailer, or a retailer that has never had a website before, there should be plenty of data to assist you. Every retailer I have ever worked with has a bunch of tools capturing a ton of data each day. This data covers all aspects of what is happening on the site. You just might need to do some digging to find it.

  • Step one – visit your ops team. In my experience, they have the best handle on what data collection tools are running on your site.
  • Step two – visit your counterparts on the business side. They will have the sales numbers for last year and should have the growth goals for this year.
  • Step three – take the site data from the tools and the info from the marketing team and develop a set of goals. I will discuss the specific mechanics of extracting info to establish the goals in a follow-on series.

Let's proceed with what I call the golden rule: if you are not coding for performance, you will never achieve it. Sure, you can throw hardware at a problem, but that is costly and a never-ending proposition year over year. Strong coding practices, along with a good automated code review tool (a.k.a. a static analysis tool), are the keys to success. I have put this into practice with several of my clients and it has a history of success.

There are several automated code review tools on the market, and there are some really fine open source tools. My favorite, FindBugs, is built on top of a static code analysis rules engine out of the University of Maryland that is constantly being updated. The last time I worked with a customer to deploy a static analysis tool built on FindBugs, we defined 88 rules that had an effect on performance. Each rule can be configured with a severity, so when code is run through the tool and something is reported, you can quickly identify what needs to be resolved ASAP versus what can wait. I particularly love the ability to find resources that have never been deallocated. Most tools in the static analysis class can be integrated into your build system to run automatically and generate a report after every build. Why wouldn't you want to do this?
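
To make that concrete, here is a minimal sketch of the kind of defect such a rule flags, assuming Java; the PriceLoader class and file path are hypothetical, not taken from any specific rule set:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Hypothetical illustration of a resource-leak defect a static analysis rule will flag.
    public class PriceLoader {

        // Leaky version: if readLine() throws, the reader is never closed and the
        // file handle stays allocated until the garbage collector gets around to it.
        static String firstLineLeaky(String path) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader(path));
            String line = reader.readLine();
            reader.close(); // skipped entirely when an exception is thrown above
            return line;
        }

        // Fixed version: try-with-resources guarantees deallocation on every path.
        static String firstLineSafe(String path) throws IOException {
            try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                return reader.readLine();
            }
        }
    }

Catching this in a build report costs you nothing; finding it as file-handle exhaustion under Black Friday load costs a great deal more.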

When you are doing your internal testing, you can also generate a PCAP file and examine it with your favorite PCAP analyzer. Specifically, use the PCAP file to find out how chatty the code is. The chattier the application, the more susceptible it is to latency, and it all goes downhill from there.

Now let us discuss coding. None of this is rocket science; it's just good common sense and strong personal practices. Here are some of my favorites to concentrate on:

Prioritize capabilities; implement and harden according to their value to the user:

Once again, a positive end-user experience should be the ultimate goal. When prioritizing performance optimization, remember this simple rule: 20% of an application's functionality will be accessed 80% of the time a user spends in the application.

Own your code:

Performance requires ownership: do not rely on others to ensure the performance of your code, and do not blame them when it falls short.

  1. Measure the time required to execute each unit and component assembly test, then fix the code when it is slower than the spec or the previous build (see the test sketch after this list).
  2. Think globally, act locally: assume resources will be scarce at execution time, so allocate late and de-allocate promptly.
  3. Test for all positive and negative conditions in the unit test phase: Assume failure will happen often.
  4. Leverage key capabilities of the database. Use stored procedures. Assign uniqueness in the database, don’t manage it in your code. Don’t use your code to allocate database resources.
  5. Assume all other code running follows bad practices.
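
As promised above, here is a minimal sketch of point 1, assuming JUnit 4; the CartService class and its 200 ms and 500 ms budgets are made-up stand-ins for your own code and your own spec:

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class CartServiceTest {

        // Minimal stand-in for the unit under test; in real code this is your class.
        static class CartService {
            void addItem(String sku, int qty) { /* ... */ }
            void checkout() { /* ... */ }
        }

        // JUnit 4 fails the test outright if it runs longer than the timeout (in ms),
        // so a regression against spec breaks the build just like a functional bug.
        @Test(timeout = 200)
        public void addItemStaysWithinSpec() {
            new CartService().addItem("SKU-123", 2);
        }

        // Or measure explicitly so the elapsed time can be logged and compared
        // against the previous build.
        @Test
        public void checkoutStaysWithinSpec() {
            CartService cart = new CartService();
            long start = System.nanoTime();
            cart.checkout();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            assertTrue("checkout took " + elapsedMs + " ms, spec is 500 ms", elapsedMs <= 500);
        }
    }

Treat a timing failure exactly like a functional failure: fix it before the code moves right across the line.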

Don’t attach a discrete (RAM, CPU, DISK, NETWORK) resource until absolutely required:

Tying up a resource before it is required prevents others from using that resource when they need it, and it can hurt your own performance as well.
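
A minimal sketch of the idea in Java, with a hypothetical InventoryReport class and JDBC URL: the lazy version attaches the database connection only for the moment it is actually needed, instead of holding it for the life of the object.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class InventoryReport {

        private final String jdbcUrl; // connection details only; nothing is opened here

        public InventoryReport(String jdbcUrl) {
            this.jdbcUrl = jdbcUrl;
        }

        public int unitsOnHand(String sku) throws SQLException {
            // Acquire late, release immediately: try-with-resources closes the
            // connection as soon as the query completes, freeing it for other callers.
            try (Connection conn = DriverManager.getConnection(jdbcUrl);
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT units FROM inventory WHERE sku = ?")) {
                stmt.setString(1, sku);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getInt("units") : 0;
                }
            }
        }
    }

In production you would hand this off to a connection pool, which is just a managed way of doing the same thing: the physical resource is attached only while it is doing work.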

Minimize external calls across the network: Here are some valuable tips:

  1. Bandwidth costs money! When your code is running over the network, understand the network footprint you are consuming.
  2. Understand the payload size you are sending per packet and the number of application turns you are using per transaction; balancing payload size against turns will yield excellent performance gains (a batching sketch follows this list).
  3. Whenever possible, take advantage of sending your data through multiple network ports. This is especially crucial when using the TCP protocol.
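
Here is a minimal sketch of cutting application turns, assuming Java 11's HttpClient; the pricing endpoints and batch parameter are hypothetical:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    public class PriceClient {

        private final HttpClient http = HttpClient.newHttpClient();

        // Chatty: one request-response turn per SKU. Fifty SKUs means fifty round
        // trips, so total time grows linearly with network latency.
        public void fetchPricesOneByOne(List<String> skus) throws Exception {
            for (String sku : skus) {
                HttpRequest req = HttpRequest.newBuilder(
                        URI.create("https://api.example.com/price/" + sku)).build();
                http.send(req, HttpResponse.BodyHandlers.ofString());
            }
        }

        // Batched: one turn for the whole list, paying the latency cost once.
        public String fetchPricesBatched(List<String> skus) throws Exception {
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create("https://api.example.com/prices?skus=" + String.join(",", skus)))
                    .build();
            return http.send(req, HttpResponse.BodyHandlers.ofString()).body();
        }
    }

The difference shows up immediately in the PCAP file mentioned earlier: far fewer conversations per transaction.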

These three factors alone can make the difference between a positive and a negative end-user experience when accessing your application.

Minimize writes to disk:

Not to overstate the obvious, but disk I/O encompasses the physical act of reading and writing data on a physical drive. If you are reading from or writing to a disk, the CPU must sit idle while the I/O happens. And since a disk is an electromechanical device, spindle speed plays a major role in how quickly data can be written to or retrieved from it.
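
A minimal sketch of one easy win, assuming Java; the AuditLog class, file name, and buffer size are illustrative only:

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.List;

    public class AuditLog {

        // Unbuffered: each record can end up as its own small write to the drive.
        static void writeUnbuffered(List<String> records) throws IOException {
            try (FileWriter out = new FileWriter("audit.log", true)) {
                for (String record : records) {
                    out.write(record + System.lineSeparator());
                }
            }
        }

        // Buffered: records accumulate in memory and reach the disk in large chunks,
        // so the drive (and the thread waiting on it) does far less work.
        static void writeBuffered(List<String> records) throws IOException {
            try (BufferedWriter out = new BufferedWriter(new FileWriter("audit.log", true), 64 * 1024)) {
                for (String record : records) {
                    out.write(record);
                    out.newLine();
                }
            }
        }
    }

The same principle applies to logging frameworks: write less, batch what you do write, and keep verbose logging out of the hot path.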

Consider language weight when coding for efficiency:

Choose a lighter-weight language for heavyweight jobs. When writing a repetitive algorithm, it can pay to implement it in a faster language. Examples: assembler vs. C, C vs. C++ / C#, C++ / C# vs. Java, Java vs. VB. There is also a side benefit to this practice, as it forces you to pursue active rather than passive memory management.

In closing, let's not forget page delivery. The order in which you deliver page content matters, and so does the amount of content you load on your home page. I had a client whose home page was taking up to eight seconds to load. When I reviewed the page load, I found tags being loaded along with the home page, and the page was designed to wait until everything was ready before it rendered. Deferred loading is your friend! Make your home page usable as soon as possible.

In my next blog, I will discuss the right-hand side of the line. Hope you tune in!

