The Lecture Notes Blog

Statistical process control


There are two types of variation in production and service processes – common cause variation and assignable cause variation. Common cause variation is the “usual” variation, the natural randomness that is inherent in the production or service process. Assignable cause variation is variation caused by a specific change in the underlying process structure.

There is a simple way to distinguish between common cause variation and assignable cause variation: common cause variation falls within the previously mentioned control limits (upper control limit, lower control limit), while assignable cause variation falls outside them. The control limits can be set at three standard deviations in each direction from the mean. Such a band, six standard deviations wide, contains 99.7% of all cases for a normally distributed output. If a sample lies notably outside these limits, assignable cause variation is the likely reason.
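A minimal sketch of how such three-sigma limits might be computed and applied, assuming in-control baseline data and new measurements are available as plain lists (all numbers and variable names below are invented for illustration):

```python
from statistics import mean, stdev

# Hypothetical measurements taken while the process was known to be in control.
baseline = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0]

center = mean(baseline)
sigma = stdev(baseline)
ucl = center + 3 * sigma   # upper control limit
lcl = center - 3 * sigma   # lower control limit

# New observations to monitor; values are made up for illustration.
new_samples = [10.2, 9.9, 11.4, 10.0]

for i, x in enumerate(new_samples, start=1):
    status = "assignable cause likely" if (x < lcl or x > ucl) else "common cause variation"
    print(f"sample {i}: {x:5.2f} -> {status}")
```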

The purpose of so-called statistical process control is to constantly monitor the process output in order to be alerted to the occurrence of assignable cause variation almost immediately. In the famous Toyota production system, this is realized through a detect – stop – alert framework which catches defects quite quickly. This is critical because defects tend to (a) produce more defects over time and (b) cause higher monetary losses once the defective flow units have passed through the process bottleneck. Both problems provide huge incentives for detecting defects as early as possible. Techniques that can be used effectively here are fishbone diagrams and laddering. The idea behind both techniques is to keep asking “why” questions until the actual root cause(s) of the defects are identified.

These lecture notes were taken during the 2013 installment of the MOOC “An Introduction to Operations Management” taught by Prof. Dr. Christian Terwiesch of the Wharton Business School of the University of Pennsylvania at Coursera.org.

Measuring quality with the capability score


To calculate the capability score (e.g. of a machine used in production or of a supplier), only three variables are needed: The lower and the upper specification level as well as the standard deviation:

LSL = Lower Specification Level = flow unit needs to have at least XXX (measurable size)
USL = Upper Specification Level = flow unit needs to have no more than XXX (measurable size)

capability score = (USL – LSL) / (6 × standard deviation)

Because of the sixfold multiplication of the standard deviation (sigma), this is also known as the six sigma method. The capability score of a distribution can be translated into the probability of defect through the normal distribution function of standard spreadsheet software such as Excel.
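As a small sketch, the same calculation can be done directly, assuming a normally distributed process; the specification limits, mean and standard deviation below are hypothetical (in Excel, NORM.DIST gives the same probabilities):

```python
from statistics import NormalDist

# Hypothetical specification limits and process parameters.
lsl, usl = 9.5, 10.5        # lower / upper specification level
mu, sigma = 10.0, 0.1       # process mean and standard deviation

cp = (usl - lsl) / (6 * sigma)                   # capability score
dist = NormalDist(mu, sigma)
p_defect = dist.cdf(lsl) + (1 - dist.cdf(usl))   # unit falls outside the spec limits

print(f"capability score Cp = {cp:.2f}")
print(f"defect probability  = {p_defect:.6f}")
```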

These lecture notes were taken during the 2013 installment of the MOOC “An Introduction to Operations Management” taught by Prof. Dr. Christian Terwiesch of the Wharton Business School of the University of Pennsylvania at Coursera.org.

The Kanban cards concept


A Kanban system is a visual way to implement a pull system. The basis of the system is the so-called Kanban card, which is essentially a work authorization form. Work on the next set of units begins only when such a Kanban card is issued (which happens in response to demand). Since the inventory kept can never grow larger than the number of Kanban cards in circulation, the system allows users to put a hard cap on inventory.

By setting the right number of Kanban cards, one can adjust the inventory to the needs of the current demand structure. This corresponds to the idea of a pull system: rather than having everybody in the process work as hard as possible and push the flow units forward, the demand pulls the flow units through the process via the Kanban cards. Action is only taken if demand is there, not because there is idle time to fill or flow units are arriving from some other station in the process. Kanban cards thus support just-in-time production.
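A toy sketch of the inventory cap, assuming one production resource that may only start a unit when a Kanban card is free; the card count, capacity and demand pattern are invented for illustration:

```python
# Toy sketch of a Kanban cap: production is only authorized when a card is free,
# so inventory can never exceed the number of cards in circulation.
kanban_cards = 5
inventory = 0                      # finished units (each unit carries one card)
free_cards = kanban_cards          # cards not currently attached to a unit

for period, demand in enumerate([1, 0, 2, 1, 3, 0, 1], start=1):
    served = min(demand, inventory)      # demand pulls units and frees their cards
    inventory -= served
    free_cards += served
    produced = min(free_cards, 2)        # capacity: at most 2 units per period
    inventory += produced                # each produced unit binds one card
    free_cards -= produced
    assert inventory <= kanban_cards     # the cap always holds
    print(f"period {period}: served {served}, produced {produced}, inventory {inventory}")
```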

These lecture notes were taken during the 2013 installment of the MOOC “An Introduction to Operations Management” taught by Prof. Dr. Christian Terwiesch of the Wharton Business School of the University of Pennsylvania at Coursera.org.

Calculating defect costs


When trying to calculate the costs associated with producing defective parts and/or service results, it is pivotal to determine where in the process the defect arose. If an error occurs during the very first process step, little more than simple material costs are lost. If it occurs on the very last process step, the value of an entire flow unit (including profit) is forfeited. The location of the bottleneck is especially important here: defective flow units produced before the bottleneck are valued at input prices, while defective flow units produced after the bottleneck are valued at the opportunity cost of lost sales. This insight drives the placement of testing points in the process, which have to be arranged in a way that maximizes the chances of identifying and catching defective flow units before bigger losses occur.
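A minimal sketch of this valuation logic, with a hypothetical input cost and selling price; the rule applied is the one described above (input cost before the bottleneck, opportunity cost of a lost sale after it):

```python
# Hypothetical cost figures for one flow unit.
input_cost = 20.0        # material / processing cost invested up to the bottleneck
selling_price = 90.0     # value of a finished, sellable flow unit

def defect_cost(defect_before_bottleneck: bool) -> float:
    if defect_before_bottleneck:
        # Before the bottleneck there is spare capacity: only the input is lost.
        return input_cost
    # After the bottleneck, scarce bottleneck capacity has been wasted,
    # so the defect costs the opportunity cost of a lost sale.
    return selling_price

print(defect_cost(True))    # 20.0
print(defect_cost(False))   # 90.0
```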

By implementing a buffer between steps that can produce defective parts, the process flow can be protected against errors. However, building up too much inventory through such error buffering might conceal the need to improve the error rate itself.

These lecture notes were taken during the 2013 installment of the MOOC “An Introduction to Operations Management” taught by Prof. Dr. Christian Terwiesch of the Wharton Business School of the University of Pennsylvania at Coursera.org.

Scrapping or reworking?


Should a damaged unit be dropped from the process, or should it be reworked? To answer that question it has to be noted that reworking defects can turn a process step into a bottleneck that was not the bottleneck before. Reworking defects (and thus defects themselves) can have a significant impact on the process flow and on the location of the bottleneck. The bottleneck can therefore no longer be determined by just looking at the capacity of the process steps. Instead, one has to take into account how the capacity requirements change with the scrap and rework rates.

To figure out where the new bottleneck is, we have to assume that the process as a whole will be executed in a way in which demand is met, so that the process output matches the demand at the end of the process. The process therefore needs to start with more flow units than are actually needed, so that enough flow units are left over to satisfy demand. By working the process diagram backwards and determining the new demand for each process step, we can then discover where the new bottleneck will be located.

Instead of completely scrapping a flow unit, it can also be reworked, meaning that it is re-introduced into the process and worked over to get rid of the defects. This must also be taken into account when trying to figure out whether the location of the bottleneck changes, because some process steps will now have to process the same flow unit twice, which affects their implied utilization. The location of the bottleneck can be determined by finding the process step with the highest implied utilization.

If the demand is unknown, the bottleneck can be located through four simple steps:

(1) Assume an arbitrary demand D as the required flow rate (e.g. 100 flow units).
(2) Figure out the demand D_x for each process step if D is to be reached.
(3) Divide D_x by the capacity of the process step to get the implied utilization.
(4) Identify the process step with the highest implied utilization. This step is the bottleneck.
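A small sketch of these four steps for a hypothetical three-step line with scrap; the yields, capacities and the assumed demand are made up:

```python
# yields[i] = fraction of error-free units leaving step i; capacities in units per hour.
yields = [0.9, 0.8, 0.95]
capacities = [120.0, 100.0, 90.0]
D = 100.0                      # step (1): assume an arbitrary final demand

# Step (2): work backwards - each step must start enough units so that,
# after scrap, the downstream demand is still met.
required = [0.0] * len(yields)
downstream_need = D
for i in reversed(range(len(yields))):
    required[i] = downstream_need / yields[i]
    downstream_need = required[i]

# Steps (3) and (4): implied utilization and the bottleneck.
implied_utilization = [req / cap for req, cap in zip(required, capacities)]
bottleneck = max(range(len(yields)), key=lambda i: implied_utilization[i])

for i, (req, u) in enumerate(zip(required, implied_utilization), start=1):
    print(f"step {i}: must process {req:6.1f} units, implied utilization {u:.2f}")
print(f"bottleneck: step {bottleneck + 1}")
```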

These lecture notes were taken during the 2013 installment of the MOOC “An Introduction to Operations Management” taught by Prof. Dr. Christian Terwiesch of the Wharton Business School of the University of Pennsylvania at Coursera.org.

The two dimensions of quality


There are two basic dimensions of quality: Performance quality measures to what extent a product or service meets the expectations of the customer. Conformance quality measures whether processes are carried out the way they were intended to be carried out.

The root cause of quality problems is process variability. Were it not for process variability, every run through a process would result either in the optimal output or in the very same error, which would then be easy to detect. However, due to process variability, some runs through a process result in optimal outcomes while others result in different kinds of errors. With some very basic statistical probability tools, we can assess the chances of such errors and defects occurring during a process. To calculate the total error probability of an assembly line, one has to look at the error rate of each work step and calculate its yield (the percentage of error-free flow units the work step produces).

The yield of the process is defined as the percentage of error-free parts produced by the process, which of course depends on the yields of the various work steps. The total process yield is then simply the product of the individual yields:

process yield = yield 1 × yield 2 × … × yield n

It is noteworthy that even small defect probabilities can accumulate to a significant error rate if there are many steps in a process. For example, if a process workflow consists of 10 steps and every step has a low defect probability of only 1%, the chance of a completely error-free product leaving this workflow is only 0.99^10 ≈ 90.4%.
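A one-line check of this example (a minimal sketch; the step yields are the ones from the text):

```python
from math import prod

step_yields = [0.99] * 10          # ten steps, each with a 1% defect probability
process_yield = prod(step_yields)  # product of the individual yields
print(f"{process_yield:.3f}")      # ~0.904, i.e. roughly 90.4% error-free units
```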

The Swiss Cheese model explains why defects or procedural errors sometimes go unnoticed even if highly effective quality checks are in place: since every slice of Swiss cheese has some holes (defects) in it, there is a small probability that the holes will line up in a way that creates a hole through a whole stack of cheese slices. This is akin to multiple quality checks failing during the production of the same flow unit – though the chances of this happening might be low, it is bound to happen from time to time. This insight is also the main reason behind redundant checks, i.e. checking a quality attribute more than once to catch all errors that might occur. With redundancy, a process has to fail at multiple stations for the process yield to be affected.
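A tiny sketch of the effect of redundant checks, assuming independent checks that each miss a defect with some probability (the 10% miss rate is invented):

```python
p_miss_per_check = 0.10            # each independent check misses 10% of defects
for n_checks in range(1, 4):
    p_slip = p_miss_per_check ** n_checks   # defect escapes only if every check misses it
    print(f"{n_checks} check(s): defect escapes with probability {p_slip:.3f}")
```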

These lecture notes were taken during the 2013 installment of the MOOC “An Introduction to Operations Management” taught by Prof. Dr. Christian Terwiesch of the Wharton Business School of the University of Pennsylvania at Coursera.org.

Waiting behaviour of customers


The previous models all assumed that customers will wait as long as it takes to get processed. In more realistic models, customers leave the process before getting served because the waiting time gets too long. Some customers will not even enter the system if the demand is visibly too high (long waiting lines). In these cases, the outflow (completely served customers) will differ from the inflow (customer demand). But what fraction of demand will a business be able to serve?

There are four basic models of possible customer behaviour (which can additionally be mixed):

(1) All customers wait in line forever
(2) Some customers leave the line after a while
(3) Some customers do not enter if the line is too long
(4) Waiting is not possible at all (inventory = 0)

Once one knows the probability with which an incoming customer is not served, one can calculate how much business a company is missing because of waiting times. Instead of working with the rather complex formula for this probability, the Erlang Loss Table can be used. This table lists the probabilities for all combinations of m (the number of resources) and r (the ratio of the processing time p to the inter-arrival time a, i.e. r = p / a).
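Instead of looking the value up in the table, the Erlang loss probability can also be computed directly from m and r; a minimal sketch with made-up example numbers:

```python
from math import factorial

def erlang_loss_probability(m: int, r: float) -> float:
    """Probability that all m resources are busy and an arriving customer is lost
    (Erlang loss formula), with r = p / a = processing time / inter-arrival time."""
    numerator = r ** m / factorial(m)
    denominator = sum(r ** k / factorial(k) for k in range(m + 1))
    return numerator / denominator

# Hypothetical example: 3 servers, processing time 10 min, inter-arrival time 5 min.
print(f"{erlang_loss_probability(3, 10 / 5):.4f}")   # ~0.2105
```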

These lecture notes were taken during the 2013 installment of the MOOC “An Introduction to Operations Management” taught by Prof. Dr. Christian Terwiesch of the Wharton Business School of the University of Pennsylvania at Coursera.org.