Last time I mentioned Little's Law, and I explained that it has some internal relationship with another formula. But in fact I did not have a complete picture of how LL is actually applied in today's complex system architectures, so I threw the question out on LinkedIn. What I did not expect was to get replies from many master-level veterans, some of them extremely detailed.
My question:
I'm confused about the meaning of the parameters of the formula N = X * R, in which N represents the number of concurrent users, but X does not denote the arrival rate of people; in almost all the papers I have read, it is the transaction rate. So does that mean each user has only one transaction? And does this mean the law is only applicable to small projects or studies?
All replies so far:
Consulting Member of Technical Staff, Performance Engineering at Oracle
People arrivals don't impact the system; what they do does. For example, if you load a static home page and just sit there for a couple of hours reading, the only request your system processed is one home-page request. So the rate of real requests matters, not the number of online users (at least from the point of view of Little's Law; online users may hold some resources, but that is a separate topic).
Founder/Computer Scientist, Performance Dynamics
The unstated assumption that you are missing is that the system is in steady state. In other words, the number of arrivals (A) and the number of completions (C) are the same value (on average).
Since all computer systems are stochastic, steady state is true in the long run, i.e., during a long enough measurement period (T). Then A = C, and the arrival rate (λ = A/T) on the input side of the system will be equivalent to the completion rate (X = C/T) on the output side, i.e., λ = X. Performance engineers have a habit of calling the completion rate the "throughput," for some reason.
Thus, you can write LL the way you did, viz., N = X * R or, equivalently, N = λ * R. LL says that although the number going in must equal the number coming out (in steady state), there can be another number, N, in the system that spends some time, R (the residence time), doing something in the system before departing. This is how the checkout at a grocery store works: people can be arriving and departing, but there are also people waiting, as well as having their groceries rung up. LL tells you how many people will be in the queue if you know either their arrival rate or their departure rate, and their residence time (on average).
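To make the grocery-store picture concrete, here is a minimal Python sketch of the steady-state bookkeeping (all numbers are made up for illustration):

# Little's Law, N = X * R, checked with made-up grocery-checkout numbers.
T = 3600.0            # measurement period: one hour, in seconds
arrivals = 720        # customers who arrived during T
completions = 720     # customers who departed during T (steady state: A == C)

lam = arrivals / T    # arrival rate, customers/second
X = completions / T   # completion rate ("throughput"); equals lam in steady state
R = 150.0             # mean residence time per customer, seconds (queueing + checkout)

N = X * R             # mean number of customers in the checkout area
print(f"lambda = {lam:.3f}/s, X = {X:.3f}/s, R = {R}s, N = {N:.1f} customers")
# -> N = 30.0: on average 30 people are waiting or being rung up.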
BTW, there are actually 3 ways of writing LL,
http://perfdynamics.blogspot.com/2014/07/a-little-triplet.html
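As I read that post, the triplet applies the same law to three nested "boxes" of a single server: the service facility, the waiting line, and the whole queue. A small Python sketch with illustrative numbers (the operating point is my own assumption, not from the post):

# The three forms in "A Little Triplet" (as I read the post), applied to a
# single server; the numbers are an illustrative M/M/1-like operating point.
lam = 0.8    # arrival rate, requests/second (steady state, so X = lam)
S = 1.0      # mean service time, seconds
W = 4.0      # mean waiting (queueing) time, seconds
R = W + S    # mean residence time = waiting + service

U = lam * S  # utilization: mean number in service (between 0 and 1 for one server)
L = lam * W  # mean number waiting in the buffer
Q = lam * R  # mean number in the system (waiting + in service)

print(U, L, Q)   # 0.8 3.2 4.0 -- and Q == L + U, as it must be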
Performance Test Engineer at ENNIU
Thanks for your answers.
Neil‘s "A Little Triplet" gives me a heuristic thinking on this formula. What‘s the N represent is none of business of arrival rate but is determined by R. When R denote service time, the N denote the number of transaction of people who is serving in system. And if R denote resident time which contain thinking time, queuing time and service time, the N can be read as the online users, because the transaction has not thinking time except human.
Is that make sense for the understanding of LL? or give me a practical example or approach.
Founder/Computer Scientist, Performance Dynamics
Now you've shifted the question slightly, so we have to be very clear about what we are talking about by adjusting the notation a little bit.
In my books and classes, I write the version of LL discussed above as Q = X * R or Q = λ * R, where Q means the total of all the processes/requests/transactions in the queues belonging to the system under test (SUT), for example. Then I can use N to represent the total number of "users" or load generators (GEN). You also tossed in a new term, "think time," which I write as Z. What is the relationship b/w these metrics?
N = X * R + X * Z = X * (R + Z).
Once again, LL tells the story. The total number of users (N) is composed of 2 parts. Why? Because every load test system is composed of 2 parts: the GEN component exerting the load on the system, and the SUT component exhibiting the response you are interested in measuring. In steady state, some portion of user requests are in the SUT, while the remainder are on the GEN side.
The 1st term, X * R, is simply Q, the number of requests either waiting or being serviced in the SUT. The 2nd term is not so obvious, but it's simply the number of requests not in the system. How do I know that? Because the time they spend on the GEN side is determined by the think time Z. And X * Z, according to LL, is that number (on average, in steady state).
Similarly, LL tells us that N can also be thought of in terms of the system throughput (X) times the total round-trip time (R + Z) in the test rig, i.e., time on the SUT + time on GEN.
Since LL is immutable, if the test measurements do *not* jibe with LL, that's a way you can tell that something is wrong with the test setup. Of course, all good performance engineers always check their results, especially against Little's Law. :D
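A minimal sketch of that sanity check, assuming you already have N, X, R and Z from a test run (the numbers and the 5% tolerance below are illustrative, not a standard):

# Sanity-check a load test against Little's Law: does N equal X * (R + Z)?
# All numbers below are illustrative; substitute your own measurements.
N = 200        # configured virtual users (load generators)
X = 38.5       # measured throughput, requests/second
R = 0.45       # measured mean response time, seconds
Z = 4.8        # configured think time, seconds

expected_n = X * (R + Z)
error = abs(expected_n - N) / N
print(f"X * (R + Z) = {expected_n:.1f} vs N = {N} ({error:.1%} off)")
if error > 0.05:   # the 5% tolerance is a judgment call, not a rule
    print("Measurements disagree with Little's Law: suspect the test setup.")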
Performance test lead at one of the MNCs
Based on my understanding, Little's Law states that the number of users/requests coming to (or existing in) the system is equal to the rate at which they enter the system multiplied by the time they spend in the system. This time includes response time, think time and pacing time.
C = R * T
C - number of users/requests
R - rate at which they enter
T - time (response time + think time + pacing time)
It is possible that pacing time and think time don't exist at all in some cases...
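A quick worked example in this commenter's notation (all values invented for illustration):

# The same law in this commenter's notation: C = R * T, where T folds in
# response time, think time and pacing time.
R = 10.0                                # rate at which users enter, per second
resp, think, pacing = 0.5, 4.0, 1.5     # seconds each
T = resp + think + pacing               # total time one user spends per iteration
C = R * T
print(C)   # 60.0 concurrent users sustain the 10/s rate
# Drop think and pacing time and C = 10 * 0.5 = 5: far fewer users, same rate.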
Performance Architect, Capacity/Availability Planning Architect at Cognizant Technology Solutions
Coming up with the performance model requires considering a lot of parameters:
1) User count at peak hour = X
2) Transaction distribution = Y1, Y2, Y3, ..., Yn
Pick the top few transactions so that together they come close to 100% of the traffic.
Spread X across Y1 to Yn; this will be your performance model (see the sketch below). This again will give you just one of the characteristics; you might have to try various other transaction mixes to ensure all aspects are covered.
The choice of transactions is critical to cover the business requirement and to help obtain the performance/capacity/availability/scalability view from an IT perspective.
If the system is already live and you are testing a functional modification, it's easy to get the usage and pacing from the existing production system.
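A minimal sketch of that modeling step, with a made-up transaction mix (the names and shares are hypothetical):

# Sketch of the workload-model step above: spread the peak-hour user count
# across the top transactions.
peak_users = 1000                 # X: user count at peak hour
mix = {                           # Y1..Yn: share of each top transaction
    "login": 0.30,
    "search": 0.40,
    "checkout": 0.20,
    "profile_update": 0.08,
}                                 # together ~98% of observed traffic
mix["other"] = 1.0 - sum(mix.values())   # lump the tail together

model = {txn: round(peak_users * share) for txn, share in mix.items()}
print(model)   # {'login': 300, 'search': 400, 'checkout': 200, 'profile_update': 80, 'other': 20}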
Performance Test Engineer at ENNIU
Thanks All.
Neil's answer gives me a better comprehension of LL, but I still have another question:
We usually divide systems into two types: an open system, characterized by an arrival rate, and a closed system, characterized by think time (maybe there is a 3rd type, but I don't want to talk about it here). From the above, LL can involve the arrival rate λ and can also involve the think time Z. So what type of system can we apply LL to? What is the relationship between LL and open and closed systems?
Director at SMT Data A/S
N = X * R is a nice little (no pun intended) formula that I use quite often.
In order to benefit from it, you have to know:
N is "number in system", meaning the number of units of work queuing or being serviced in the system (not number of users).
X is "arrival rate", meaning the number of units of work entering the system per time unit (and this is where the number of users and their "think time" could have an impact).
R is "response time" (service plus queuing) expressed in the same time units as X.
And as others have already explained, we're talking averages and assuming that units of work arrive at constant intervals and also end within the observation period.
Despite all the assumptions, I have more than a few times used Little's Law to show that bad performance was caused by the application's inability to reach the desired degree of parallelism (and not HW bottlenecks, as most programmers instinctively assume).
Example: in a serial application N will always be 1, so try to increase X and see what happens to R (yes, it's simple logic, but it helps you understand how the formula works).
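A tiny sketch of that exercise (the throughput values are illustrative):

# In a strictly serial application at most one unit of work is in flight,
# so N is pinned at 1 and Little's Law forces R = N / X = 1 / X.
for X in (0.5, 1.0, 2.0, 4.0):     # offered throughput, requests/second
    R = 1.0 / X                    # the only mean response time consistent with N = 1
    print(f"X = {X:4.1f}/s  ->  R = {R:.2f} s")
# Pushing X up can only squeeze R down; once R reaches the real service time,
# throughput is capped unless the application gains parallelism (N > 1).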
Neil Gunther
Founder/Computer Scientist, Performance Dynamics
LL can be applied very generally: including to both so-called "open" and "closed" queueing systems. Think of any computer system as being comprised of a set of nested boxes. You just need to specify which box you're considering, and LL will hold locally in steady state.
An open queueing system can have an arbitrary value of λ, as long as it doesn't exceed the service rate; otherwise, the waiting line will become infinitely long. The mean arrival rate, λ, is a fixed or constant value, e.g., the average number of httpGets/second.
That can't happen in a closed system b/c there can only be a maximal number of possible requests (N) that can be in the system, e.g., N load generators, so it is self-throttling. Moreover, λ is no longer an arbitrary constant but determined by N and Z.
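A minimal sketch of the closed-system case, where throughput falls out of N and Z via X = N / (R + Z) (numbers are illustrative):

# Closed system: lambda is not a free input; it is set by N and Z.
# In steady state X = N / (R + Z), the "interactive" form of Little's Law.
N = 50     # load generators: the maximum requests that can be in flight
Z = 3.0    # think time, seconds
R = 0.8    # response time delivered by the SUT, seconds

X = N / (R + Z)
print(f"self-throttled throughput: {X:.1f} requests/second")   # ~13.2/s
# An open system instead takes lambda as a fixed constant (e.g. httpGets/second),
# and the waiting line grows without bound if lambda exceeds the service rate.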
Alaister Boyd
ITSM Service, Capacity, Availability & Continuity Manager
Neil's Guerrilla Capacity Planning book is a great read if you're getting into this seriously. I have used his methods, built models, and tested them against real-life observations, and found them very close. This allowed me to work on new and developing systems using a smaller sample of generators against the system under test, to forecast well when the system would become saturated. The tests were built using Selenium controlled by Hudson, and the sample observations were put into a spreadsheet I developed based on Neil's techniques. Well, with some mental gymnastics.
Yong Fu
Performance Test Engineer at ENNIU
Thank you. Actually, I became interested in Neil's theories about two years ago, and they have given me a lot to think about in performance testing. But it seems I am still missing guidance on practice. I am planning to read his books seriously (it isn't easy for Chinese readers to get these books, you know).
Neil Gunther
Founder/Computer Scientist, Performance Dynamics
Thanks all for your endorsements: it's nice to know I didn't spend years writing books totally in vain. Unfortunately, however, books alone (even mine) can only take you so far.
As Yong Fu asks implicitly: where's the practical beef? Actually, it's there. But the starting point in my books is the data that you've already collected; that is important to you; that you know best. Clearly, I can't know what that is, a priori. For data generation/collection, you need to understand how to sling the appropriate tools, e.g., LoadRunner, JMeter, etc., for your shop. That's a given. But that's only half the story. What's the other half?
All performance data should be assessed within the context of a validation method. How else can you know when it's wrong? [I know, that never happens. (right)] The various performance models and laws in my books *are* the validation methods. This is the aspect that almost all performance engineers fail to fully appreciate [present company excepted. :) ]. The question then becomes: how do you connect your data with the appropriate validation method?
The methods I discuss in my books are completely general and therefore guaranteed to be applicable to your data. I do give examples and war stories of how I made it work for me. Of course, those are not your data. So, making the connection is the trick.
That's where my Guerrilla training classes come in. There, you get to ask questions of me directly and also tell me more about your particular circumstances so that I can figure out the connection for you. That's also a way that I learn new things. What emerges is that you don't need to understand *all* the performance modeling methods in my books, but only one or two. Once again, I can't know what they are, a priori.
The other thing you may learn is that your data is not in the right form to be validated, or the collection strategy is broken, etc. That's the most common problem b/c practitioners place far too much faith in the data collected by sophisticated (and often expensive) tools, just b/c they're sophisticated and/or expensive. Nothing could be further from the truth. Once again, how else can you know that w/o a validation method?
So, it's by virtue of this back and forth that you begin to see how all this can come together to meet your particular needs. I say this without hesitation b/c I've seen it happen a million times in my classes. Conversely, I'm sometimes astounded that something I regard as trivial turns out to be the most important thing to a particular student: http://www.perfdynamics.com/Classes/comments.html Otherwise, you can be doomed to muddle on for years.
Tom Shuttleworth
Service Delivery Manager at Proact IT UK
For myself, as an absolute amateur at such things, this - "I'm sometimes astounded that something I regard as trivial turns out to be the most important thing to a particular student" - struck a chord. A very simple thing like visualizing a computer as a queue is incredibly powerful. It gave me a structure for thinking about how I expect a system to perform. All of a sudden I could interpret iostat and defend my interpretation to people who knew far more about UNIX than I ever will. Knowing that performance doesn't scale linearly with utilization, even if it sometimes looks like it, is huge.
In my experience, remarkably few people working at the coal face of IT have any idea of this stuff, and, in the scheme of things, at the level we are talking about here, it really isn't hard.
Leonid Grinshpan, Ph.D.
Practice Manager (North America): IT Performance at Tata Consultancy Services
@Tom Shuttleworth
Queues are the major phenomenon defining application performance. In general, any hardware or software resource needed to process a transaction initiated by a user might be in short supply, and the transaction will wait in a queue. That means modeling a distributed application as a queueing network is a powerful abstraction that helps in application performance troubleshooting. Check the link http://tinyurl.com/m99enoh; it points to an article that introduces conceptual models of enterprise applications, uncovering performance-related fundamentals. The value of conceptual models for performance analysis is demonstrated with two examples: conceptual models of virtualized and non-virtualized applications.
Henry Steinhauer
Systems Engineer-ESM ITM / Capacity Planning + Performance Management at Glacier Technologies, LLC
An interesting phrase by Tom: "people working at the coal face of IT". I agree that many times the rules of thumb people have learned do not apply when you look at the system as a whole.
Also, Neil states that applications are systems within systems. Understand the flow of work and how things are interconnected. Today, with SOA and other abstractions taking place, it is often hard to see those connections unless you can trap the calls made outside the application. Each of those calls is another chance for delays in the application; each enters a different queue for service.
That is what makes this profession so interesting after so many years of working with it. It is a murder mystery to be solved: performance was killed; who done it?