Compuware Corporation (CPWR)

November 28, 2012 12:00 pm ET

Executives

Nagraj Seshadri

Alois Reitbauer

Presentation

Operator

Good day. My name is Andrea, and I will be your conference operator today. At this time, I would like to welcome everyone to the Application Performance Academy Webcast Part 1: Performance Concepts. [Operator Instructions] I would now like to turn the call over to our host, Mr. Nagraj Seshadri, Director of Product Marketing at Compuware APM. Please begin, sir.

Nagraj Seshadri

Hello, everyone. Welcome. First, a few housekeeping issues. [Operator Instructions] Today's session is being recorded, and all registrants will be provided links to the replay, as well as the deck, within a few days. We will be doing a Q&A at the end of the webcast, so please submit your questions throughout the webcast using the Q&A function on your WebEx control panel.

I'm pleased to introduce our speaker today, Alois Reitbauer. Alois Reitbauer is the Technology Strategist at Compuware. As a major contributor to dynaTrace Labs, he influences the company's future technological direction. He also helps Fortune 500 companies implement performance management successfully. He has authored the Performance Series in the German Java magazine, written articles for other online and print publications and also contributed to several books. At blog.dynatrace.com, he regularly writes on performance and architectural topics to an audience reaching up to 100,000 visitors.

So now without further ado, Alois Reitbauer. Alois, take it away.

Alois Reitbauer

Hello. Thank you, Nagraj, and welcome, everybody, also from my side. I'm very excited to start our Application Performance Academy today, where we try to bring the really basic concepts of application performance to a broad audience, and what we will start with today is performance concepts. So in this first webinar of the series, we will deal with the starting points once you enter the APM field. But even if you're an experienced performance guy who works with APM or in the APM space every day, it's sometimes good to get a recap of what we're talking about and a refresh of some of the very basic and vital concepts that we use in our work every day.

To get started, we will first look into performance theory. So don't be scared, it won't be too much theory. But a bit of theory is good, because it will help us throughout this webinar, and also in things to come, to better understand what we're actually talking about, and it lays the groundwork so that we are all on the same page here.

The first very important concept is queuing theory. I think many of you have heard of queuing theory and are probably using it as well. And I think it's a very vital concept for understanding performance and how performance in pretty much every software system actually works. So on the right-hand side, we have this picture of a very simple application, which consists of the server, the database and the network in between the two. If we look into the server from a queuing theory standpoint, we see all the typical resources we have in this server modeled as so-called queues: we have one for the server threads, one for the CPU, one for the network and one for the database connections.

So as everybody, in fact, knows, resources are limited. For the server threads here, we have a couple of those. The CPU, in this case, is just one. The same is true for the network, and for the database connections we again have a couple of those resources.

Whenever a request now hits the system, like in our case it would hit the server, we first try to access the first resource here, our server threads. As long as the resource is available, we take this resource right away. If it's not available, we have to wait in a queue, and that's where the name queuing actually comes from. Once we have our server thread, we can go into some of the other queues to get access to the CPU for doing our computation, to the network, and also to database connections and other resources.
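To make that flow concrete, here is a minimal sketch of the resource-acquisition pattern just described, with each limited resource modeled as a counting semaphore. The class name, pool sizes and sleep timings are illustrative assumptions for this example, not values mentioned in the webcast.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Minimal sketch of the queuing model described above.
// Pool sizes and timings are assumptions, chosen only for illustration.
public class QueuingModelSketch {

    // Each limited resource is modeled as a queue (here: a counting semaphore).
    static final Semaphore serverThreads = new Semaphore(4);  // a couple of worker threads
    static final Semaphore cpu           = new Semaphore(1);  // a single CPU in this example
    static final Semaphore dbConnections = new Semaphore(2);  // a small connection pool

    static void handleRequest(int id) throws InterruptedException {
        serverThreads.acquire();          // wait in the queue if no thread is free
        try {
            cpu.acquire();                // queue for the CPU to do the computation
            try {
                Thread.sleep(10);         // pretend to compute
            } finally {
                cpu.release();
            }
            dbConnections.acquire();      // queue for a database connection
            try {
                Thread.sleep(20);         // pretend to call the database over the network
            } finally {
                dbConnections.release();
            }
        } finally {
            serverThreads.release();      // free the server thread for the next request
        }
        System.out.println("request " + id + " done");
    }

    public static void main(String[] args) throws InterruptedException {
        // Fire 32 concurrent requests at the 4 worker threads to make the queuing visible.
        ExecutorService load = Executors.newFixedThreadPool(16);
        for (int i = 0; i < 32; i++) {
            final int id = i;
            load.submit(() -> {
                try {
                    handleRequest(id);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        load.shutdown();
        load.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```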

So while the model is very simple, it helps us to understand pretty much every performance-related problem we might experience in a system. Because what we typically do is consume resources, and every request entering the system tries to consume these resources as well. And the more these resources are used, the longer the wait times typically get. So whenever we see an increase in our response times, it's typically because we have to wait on certain resources or because the resources in the system are being used much more extensively. So whenever we want to find out why something is actually slowing down, queuing theory is a good model for seeing which resources we actually use. At the same time, it tells us that all the resources we use in our systems have to be monitored carefully.
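As a quick illustration of why wait times climb as a resource gets busier, here is the standard single-queue (M/M/1) response-time result from queuing theory; it is a textbook formula added for reference, not something quoted from the webcast.

```latex
% Average response time R of a single queue, given service time S and utilization U:
R = \frac{S}{1 - U}
% At U = 0.5 this gives R = 2S; at U = 0.9 it is already R = 10S,
% so response times climb sharply as a resource approaches saturation.
```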

Now let's talk about a couple of laws which already use queuing theory to derive what we would call the laws of physics of performance. We won't cover too many, just 2 of them that are really important to understand for everyday performance work.

The first one is Little's Law. What Little's Law says in its original text is that the long-term average number of customers in a stable system is equal to the long-term average effective arrival rate multiplied by the average time a customer spends in the system, which then also leads to a formula. So what this more or less means is that a system is stable when the number of new requests is not higher than the maximum load that the system can actually process.
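For reference, the formula Little's Law leads to is written out below; the numbers in the worked example are assumed for illustration and are not from the webcast.

```latex
% Little's Law: average number in the system = arrival rate x average time in the system
L = \lambda \, W
% Worked example (assumed numbers): with an arrival rate of \lambda = 50 requests/second
% and an average time in the system of W = 0.2 seconds, on average
% L = 50 \times 0.2 = 10 requests are in the system at any moment.
% If the server only has 4 threads, each busy for 0.2 seconds per request, it can serve
% at most 4 / 0.2 = 20 requests/second, so an arrival rate of 50 requests/second cannot
% be sustained and the queues grow without bound: the system is no longer stable.
```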

Read the rest of this transcript for free on seekingalpha.com