The Lecture Notes Blog


Tag Archives: Capacity

Scrapping or reworking?


Should a damaged unit be dropped from the process or should it be reworked? To answer that question, note that reworking defects can turn a process step into a bottleneck that was not the bottleneck before. Reworking defects (and thus, defects themselves) can have a significant impact on the process flow and on the location of the bottleneck. The bottleneck can therefore no longer be determined by just looking at the capacity of the process steps. Instead, one has to take into account how the scrap and rework rates change the demand placed on each process step.

To figure out where the new bottleneck is, we have to assume that the process as a whole is executed in a way in which the demand is met, so that the process output matches the demand at the end of the process. The process therefore needs to start with more flow units than actually needed, so that enough flow units are left over to satisfy demand. By working the process diagram backwards and determining the new demand for each process step, we can then discover where the new bottleneck is located.

Instead of completely scrapping a flow unit, it can also be reworked, meaning that it is re-introduced to the process and worked over to get rid of its defects. This must also be taken into account when trying to figure out whether the location of the bottleneck changes, because some of the process steps will now have to process the same flow unit twice in rework, which will increase their implied utilization. The location of the bottleneck can be determined by finding the process step with the highest implied utilization.

If the demand is unknown, the bottleneck can be located through four simple steps:

(1) Assume a placeholder demand D at the end of the process (e.g. 100 flow units).
(2) Figure out the demand D_x for each process step if D is to be reached.
(3) Divide D_x by the capacity of the process step to get the implied utilization.
(4) Identify the process step with the highest implied utilization. This step is the bottleneck.
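The four steps above can be sketched in code. This is a minimal illustration, not from the lecture notes themselves: the three steps, their capacities, and their scrap yields are all invented numbers.

```python
# Step (1): assume a placeholder demand D at the end of the process.
D = 100.0  # flow units

# Hypothetical three-step process: each step has a capacity (units/hour)
# and a yield (fraction of units that survive the step, i.e. are not scrapped).
steps = [
    {"name": "A", "capacity": 150.0, "yield": 0.8},
    {"name": "B", "capacity": 115.0, "yield": 0.9},
    {"name": "C", "capacity": 110.0, "yield": 1.0},
]

# Step (2): work the process diagram backwards to find the demand D_x
# that each step must handle so that D units come out at the end.
demand = D
for step in reversed(steps):
    demand /= step["yield"]          # more units must enter to offset scrap
    step["demand"] = demand

# Step (3): implied utilization = D_x / capacity for each step.
for step in steps:
    step["implied_utilization"] = step["demand"] / step["capacity"]

# Step (4): the bottleneck is the step with the highest implied utilization.
bottleneck = max(steps, key=lambda s: s["implied_utilization"])
print(bottleneck["name"], round(bottleneck["implied_utilization"], 3))
```

Note that with these example numbers the bottleneck (step B) is not the step with the lowest capacity (step C): the scrap rates shift the load.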

These lecture notes were taken during the 2013 installment of the MOOC “An Introduction to Operations Management” taught by Prof. Dr. Christian Terwiesch of the Wharton Business School of the University of Pennsylvania at Coursera.org.

The concept of responsiveness


Responsiveness is the ability of a system or process to complete tasks within a given time frame: how quickly can a business respond to customer demands? If customers are made to wait, they are turned into inventory, potentially resulting in an unpleasant customer experience. Any customer waiting time is also an indicator of a mismatch between supply and demand.

Concepts for solving waiting time problems include increasing the capacity of the resource at the bottleneck as well as increasing process flexibility in order to ensure that capacity is available at the right time. Keep in mind, however, that waiting times are most often driven neither by the capacity nor by the flexibility of a process, but rather by variability. Variability in the process flow (e.g. customers arriving at random) can lead to unwanted waiting times even when the implied utilization is clearly below 100%. If an analysis builds solely on averages and fails to consider process variability, it can thus be wrongly concluded that there is no waiting time when, in fact, there is.

To solve this problem, new analysis methods are needed when dealing with process variability. Note that those methods are only required when a process has more capacity than demand – if demand exceeds capacity, it can be safely concluded that there will be waiting time even without looking at the process variability.
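The claim that variability alone creates waiting can be checked with a small simulation. This sketch is not from the notes: it assumes a single server with a fixed service time and customers arriving at random, with all parameters invented so that utilization is only 80%.

```python
import random

random.seed(42)

service_time = 4.0        # minutes per customer (deterministic)
mean_interarrival = 5.0   # minutes between arrivals on average
# implied utilization = service_time / mean_interarrival = 80%

t, server_free_at, total_wait, n = 0.0, 0.0, 0.0, 10_000
for _ in range(n):
    t += random.expovariate(1.0 / mean_interarrival)  # random arrival time
    wait = max(0.0, server_free_at - t)               # time spent queueing
    total_wait += wait
    server_free_at = max(server_free_at, t) + service_time

avg_wait = total_wait / n
print(f"utilization 80%, average wait {avg_wait:.1f} min")
```

Despite the server being idle 20% of the time on average, the random arrivals produce a clearly positive average waiting time – an averages-only analysis would have predicted none.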


Advantages of mixed-model strategies


Since larger batch sizes lead to (the need for) more inventory, the batch size has to be balanced and chosen with both the positive (greater capacity) and negative (larger inventories) effects in mind. A strategy in which smaller batch sizes (down to a batch size of just one flow unit) are chosen is called a mixed-model strategy. Since smaller batch sizes have a negative impact on capacity because of the set-up time, reducing the set-up time is an important enabler for running a mixed-model strategy.
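The enabling role of set-up time reduction can be made concrete with a quick comparison; all numbers here are invented for illustration. With a long set-up, a batch size of one destroys capacity; with a short set-up, it barely matters.

```python
def capacity(batch_size, setup_time, time_per_unit):
    """Units per minute for a process with a set-up between batches."""
    return batch_size / (setup_time + batch_size * time_per_unit)

time_per_unit = 1.0  # minute per flow unit (illustrative)
for setup in (10.0, 0.1):
    print(f"set-up {setup:4} min: "
          f"batch=1 -> {capacity(1, setup, time_per_unit):.2f}/min, "
          f"batch=100 -> {capacity(100, setup, time_per_unit):.2f}/min")
```

With a 10-minute set-up, a batch of one yields less than a tenth of the capacity of a batch of 100; cutting the set-up to 0.1 minutes makes the batch of one nearly as productive, which is exactly what a mixed-model strategy needs.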


The effect of set-up times


Set-up times often have a significant effect on the performance of a process and can even determine the process bottleneck. The most important definition here is that of the batch: a batch is the number of flow units that are produced between two set-ups. To calculate the capacity of a process with respect to the batch size, the following formula is needed:

capacity = (batch size) / (set-up time + batch size * time per unit)

Note that this is the capacity of the process when producing in batches. For example, if the batch size happens to be 10 flow units, this calculation accounts for the full time needed to produce one complete batch, including the set-up. This is a deviation from the previous definition of capacity, which did not take the batch size into account and was simply calculated as:

capacity = number of resources / processing time

By this basic definition, the capacity can be calculated for every station in a process. It is always m / processing time, with m being the number of resources (e.g. workers) devoted to this process step. If, for example, one worker needs 40 seconds to put together a sandwich, the capacity of this station is 1/40 per second or 1.5 sandwiches per minute. If there are two workers at the same station, the capacity increases to 2/40 per second or 3 sandwiches per minute.

Usually, the larger the batch, the more efficient the production process becomes (economies of scale). Companies with custom-made batches are therefore trying to get their customers to order large batches (sometimes even forcing them). The bigger a batch grows, the more irrelevant the set-up time becomes, with the process capacity getting closer and closer to the original definition of m / processing time. This is because with larger batch sizes, the time per unit is less and less determined by the set-up time. Set-ups thus reduce capacity, which is why companies have an incentive to aim for such large batches. However, large batches also increase inventory – with all of the negative consequences (e.g. storage costs, ageing of shelved products etc.).
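The batch capacity formula above can be evaluated for growing batch sizes to show this limit. The set-up time and time per unit below are invented example values with one resource (m = 1), so the set-up-free capacity is 1 / (time per unit) = 2 units per minute.

```python
setup_time = 5.0      # minutes per set-up (illustrative)
time_per_unit = 0.5   # minutes per flow unit (illustrative)

def batch_capacity(batch_size):
    # capacity = batch size / (set-up time + batch size * time per unit)
    return batch_size / (setup_time + batch_size * time_per_unit)

for b in (1, 10, 100, 10_000):
    print(f"batch size {b:>6}: {batch_capacity(b):.3f} units/min")
# As the batch grows, capacity approaches 1 / time_per_unit = 2 units/min,
# i.e. the original m / processing time definition without set-ups.
```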


What is productivity?


A very basic definition of productivity – the average measure of the efficiency of a production process – is simply the ratio between process output units and process input units. Labor productivity might, for example, be four output units (such as car or netbook parts) per labor hour.

If one does not focus on a specific form of process input (such as labor hours), but instead takes all kinds of inputs into account (such as materials, energy, labor etc.), we speak of the so-called multifactor productivity. Since both process inputs and process outputs are often difficult to measure and compare, financial measures such as revenue and costs can be used instead. Financial measures are also needed for comparing different input units (work time, materials) in the already mentioned multifactor scenarios.

These considerations lead to some basic mathematical expressions:

productivity = output / input
labor productivity = output / labor time
transformation efficiency = output / capacity
multifactor productivity = output (in $) / (capital + labor + materials + services + energy) (in $)
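The ratios above can be worked through with a small numeric example; every figure below is invented purely for illustration, with dollar values used so that the different input types become comparable.

```python
# Multifactor productivity: output and all inputs expressed in dollars.
output_dollars = 120_000.0
inputs_dollars = {
    "capital":   20_000.0,
    "labor":     40_000.0,
    "materials": 25_000.0,
    "services":   5_000.0,
    "energy":    10_000.0,
}
multifactor_productivity = output_dollars / sum(inputs_dollars.values())
print(f"multifactor productivity: {multifactor_productivity:.2f}")

# Labor productivity in physical units, as in the four-units-per-hour example.
units_produced, labor_hours = 200, 50
labor_productivity = units_produced / labor_hours
print(f"labor productivity: {labor_productivity:.1f} units per labor hour")
```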

The two main drivers that reduce productivity are waste and inefficiencies. Inefficiencies and waste can be seen as the distance between a company and the efficiency frontier.


Finding the bottleneck in processes with attrition loss


A process with multiple steps might have an attrition loss (of flow units) on every step. Take, for example, a process in which 300 people apply for a job opening and go through a four-step process:

Step A: 300 people apply per mail
Step B: Out of those, 100 people are invited (100/300)
Step C: Out of those, 20 people can make an internship (20/100)
Step D: One of the people doing the internship is hired (1/20)

Unlike the previously analysed processes, not every flow unit makes it to the end of the process (in the example, just one flow unit comes through). In this case, it would be misleading to just look at the overall capacity of each resource involved to determine the bottleneck. Instead, the following three steps need to be taken:

Step 1: Determine the capacity of each resource in the process (m / activity time or units per hour)
Step 2: Calculate the service demand for each of these resources considering the drop-out of units
Step 3: Divide the total workload by the available time to calculate the implied utilization

The bottleneck is the resource with the highest implied utilization, which – again – does not need to be the resource with the lowest capacity.
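The three steps can be run on the hiring example above. The demands per step come from the example; the per-step capacities are hypothetical numbers added here purely so that the implied utilizations can be computed.

```python
# name: (demand reaching the step, assumed capacity of the resource)
steps = {
    "A: screen applications": (300, 400.0),
    "B: interview":           (100, 110.0),
    "C: supervise interns":   (20,   25.0),
    "D: onboard hire":        (1,     5.0),
}

# Implied utilization = demand placed on the resource / its capacity.
implied_utilization = {name: demand / cap
                       for name, (demand, cap) in steps.items()}

# The bottleneck is the resource with the highest implied utilization.
bottleneck = max(implied_utilization, key=implied_utilization.get)
print(bottleneck, round(implied_utilization[bottleneck], 2))
```

With these assumed capacities, the interview step is the bottleneck even though the onboarding resource has by far the lowest capacity – the attrition of flow units is what shifts the load.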


Capacity, bottleneck, process capacity, flow rate and utilization


In order to perform the following calculations, processing time has to be defined as the time that is spent on a certain task (e.g. one station in a sandwich restaurant). We will also need the previously introduced definitions of flow rate and flow time.

Capacity: The capacity can be calculated for every station in a business process. It is always m / processing time, with m being the number of resources (e.g. workers) devoted to the station. If, for example, one worker needs 40 seconds to put together a sandwich, the capacity of this station is 1/40 per second or 1.5 sandwiches per minute. If there are two workers at the same station, the capacity increases to 2/40 per second or 3 sandwiches per minute.

Bottleneck: The bottleneck is defined as the process step (station) in the flow diagram with the lowest capacity (the “weakest link”). Although the bottleneck is often the process step with the longest processing time, it is important to always look at the capacities for making a judgement.

Process capacity: The process capacity is always equivalent to the capacity of the bottleneck. It is useful to express it as a comprehensible number, such as customers per hour or parts per day (instead of a hard-to-comprehend number such as 1/40 customer per second or 1/345 part per second).

Flow rate: Even though the flow rate was previously defined, the definition needs to be refined: the flow rate is the minimum of demand and process capacity. While the flow rate logically can never be higher than the capacity of the bottleneck, it can very well be lower if the demand is insufficient.

Utilization: The utilization tells us how well a resource is being used. It is calculated as flow rate divided by capacity (e.g. (1/40) / (1/25) = 62.5%). The utilization always lies between 0% and 100%.
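All five definitions can be tied together in one small sketch. The 40-second assembly station is the example from these notes; the other two station names, their processing times, and the demand rate are invented for illustration.

```python
# Processing times in seconds per station; one worker each (m = 1).
processing_times = {"order": 30.0, "assemble": 40.0, "toast": 25.0}
m = 1

# Capacity of each station = m / processing time (sandwiches per second).
capacity = {station: m / t for station, t in processing_times.items()}

# Bottleneck: the station with the lowest capacity.
bottleneck = min(capacity, key=capacity.get)
process_capacity = capacity[bottleneck]       # here 1/40 per second

# Flow rate = min(demand, process capacity); demand assumed below capacity.
demand = 1 / 50                               # one customer every 50 seconds
flow_rate = min(demand, process_capacity)

# Utilization of each station = flow rate / capacity (between 0% and 100%).
utilization = {station: flow_rate / c for station, c in capacity.items()}
print(bottleneck, round(utilization[bottleneck], 2))
```

Here demand (1/50 per second) is below the process capacity (1/40 per second), so the flow rate is demand-constrained and even the bottleneck station is only 80% utilized.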
