caution … the usual rough drafting ahead … : ) … one option is to skip the drafting that gave rise to the first interim conclusion by clicking here, or jump to the second interim conclusion here, or go to the third and fourth and fifth conclusions here and here and here … or jump over the whole page and start with a recap on Inventory Systems II …
i wrote, “the usual rough drafting,” but since it’s a lot about TOC stuff, i could also have written, “the usual verbalization of intuition, old and new knowledge and experience” …
a few other places to jump to if you like:
. redefining activity-based costing (abc) . detective columbo toc tva decision planning chart . true toc cause-and-effect planning . roi . eva . dcf . npv . hijklmnop . eli s . decisively nailing allocation-based product costs for decisions .
Jan 7, 2011, around noon
WIP Inventory Systems as Both Problem and Opportunity
One of the key elements of a manufacturing or supply chain information system is the inventory system. The inventory information in a supply chain involves the information about inventories in all the manufacturing companies and warehouses in the chain. Each manufacturing company normally has its own separate computer system, though it may have interfaces to other manufacturing companies, suppliers, warehouses, or distributors. Within each manufacturing company’s system, the part of the inventory system that creates the most headaches for both systems designers and users is the part that helps people manage so-called “work-in-process,” or WIP, inventories. Dealing with the computer aspects of tracking, scheduling, and managing WIP inventory, and the related computer issues of “work orders” and “lot sizing,” has caused many problems that were viewed as opportunities by Japanese manufacturers, beginning in the 70s or maybe before, and by others since, to create Just In Time, Flow Manufacturing, and other manufacturing approaches. But Just In Time and the other innovations that ignored or worked around the traditional WIP management computer features aren’t applicable to all businesses, and sometimes introduce unnecessary costs or restrictions on competitive agility. This page returns to an idea I was working on in the late 90s about creating a uniform new industry standard for computerized manufacturing inventory information management systems that would work for all types of manufacturing plant and product structures without introducing unnecessary problems or wasteful zero-value steps and activities.
Experience Suggesting A New WIP Inventory Paradigm Was Needed
I’m not sure if that idea I had was right, but it seemed to have promise. It was an extension of the experiences I had working with and thinking about TOC manufacturing information systems based on Eli Goldratt’s 1990 book, The Haystack Syndrome. There were two of those. One was a research and development program that was eventually called, “The Goal System.” In the first successful implementation of that system, my friend, Bob Vornlocker, spent a lot of his time working out the issues that “work order-based” WIP tracking and management introduced to the project. The other was a commercial software system called “Resonance.” The idea was also based on my experience in modifying an existing MRP system to implement a TOC drum-buffer-rope (DBR) scheduling and shop floor control system. That was a huge success, but it involved working around and avoiding the difficulties work order-based WIP tracking introduced. Finally, the idea seemed to make sense given the nature of the class of systems that were becoming more prevalent — in both visual systems and finite scheduling contexts — called “manufacturing execution systems” (MES). Part of the MES community was espousing that “planning and scheduling systems were dead” and created simpler systems that worked fine in simpler situations. Another part of the MES community was adding various stock, fixture, and other allocation and sequencing algorithms with various parameters to create what amounted to localized, specialized, sometimes-standard, sometimes-custom, niche finite scheduling routines within their view of “execution.”
What Was That Idea Anyway?
Again, I’m not sure if the idea, as it gets thought through again here, will turn out to be right. A recent thinking-it-through discussion about orbits turned out to have a flawed premise. : ) This wasn’t a bad thing. Finding flawed premises isn’t a bad thing. It’s a good thing. It’s progress toward having un-flawed premises! : ) True. Anyway, if the idea’s not right, this will at least get into some of the issues in a way that will be interesting to some. Or if not interesting to some, at least to me.
The simple version of the idea is to just track the WIP without “work orders” in the most usual sense of the term.
Update: As I came back to this, I can see that this “simple statement of the idea” is true, but it’s not the best way to say it. It says what the idea “is not,” but not what “it is.”
Let’s try it again. The idea is to track WIP as completed process steps. I’m pretty sure that’s what BAS/TheGoalSystem did. That, as I recall, works up to a point, but I need to think it through again.
Some of this is coming back to me now. You could post the location of an item of WIP as “ready before a process step”, “being worked on in that process step”, or “completed by that process step.” BAS/Disaster/TheGoalSystem showed WIP as “completed by that process step.” BAS/Disaster/TheGoalSystem used the term, “station”, for what I’m calling a “process step.” The idea was that the material was passing from “station” to “station”, but there are several problems with using the word, “station.” It makes some people think, “work station”, as in “work center” or what BAS/TheGoalSystem called, “resource.” I’ll use “process step” as that seems unambiguous so far.
This gets into a lot of issues like “ready before a process step” doesn’t mean “located right there”; it could mean “located across the factory”. And like the MES community points out, the condition of “ready to process at this step,” if it’s to be real, needs to include whether “fixtures” and “tooling” are available. And maybe whether skilled labor is available. And necessary paperwork for design, instructions, permissions, Aerospace or Defense or Medical documentation or required conditions of some sort. In some cases, it’s not enough to just have the material itself “ready to process at this step.” I know some of this is different from “inventory”, but a new standard data architecture for inventory needs to work within the context of the other needs.
Alternative routes or purchased substitutes for providing material that’s in the condition of “ready to work on at that process step” are also issues. So are different quality results (like one might obtain in a stainless steel processing plant), and “partially processed at a step.”
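Just to make the three postable conditions concrete, here’s a minimal sketch in Python. Everything here — `WipItem`, `is_really_ready`, the part and step names — is invented for illustration, not taken from any actual system; it just shows WIP tracked as process-step status, with the MES-style point folded in that “ready” needs more than the material itself:

```python
from dataclasses import dataclass
from enum import Enum

class StepStatus(Enum):
    READY_BEFORE = "ready before this process step"
    IN_PROCESS = "being worked on in this process step"
    COMPLETED = "completed by this process step"

@dataclass
class WipItem:
    part_id: str
    process_step: str      # a unique process-step name, not a work center
    status: StepStatus
    quantity: int = 1

def is_really_ready(item: WipItem, fixtures_ok: bool, tooling_ok: bool,
                    paperwork_ok: bool) -> bool:
    # "Ready" in the MES sense: material status alone isn't enough.
    return (item.status is StepStatus.READY_BEFORE
            and fixtures_ok and tooling_ok and paperwork_ok)

handle = WipItem("HANDLE-TUBE", "bend-step-2", StepStatus.READY_BEFORE)
# Material is ready before the step, but the paperwork isn't done yet:
print(is_really_ready(handle, fixtures_ok=True, tooling_ok=True,
                      paperwork_ok=False))   # False
```

Note there’s no work order anywhere in that record; the (part, process step, status) triple is the whole inventory posting.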
These are not new issues in “modelling” a factory’s “parts.” What differences get a different “part number/name” or “model name” or “family name” vs. which just get a “color code” or “option code” has always been part of, not only getting a system ready to use, but also the design of the company. It’s easy not to realize that all these issues have always existed — and were just somehow handled via judgments, trade-offs, priorities, and compromises — because we just go to work at a company or buy from one and somebody’s already created these ideas about the physical things that flow into, through, and out of the plant.
And, in a sense, providing the ability to post, “ready to process at this step”, “in process at this step”, and “finished at this step” is like shrinking the traditional MRP multi-step work order into a single-step work order. But careful, that’s combining the (I’m pretty sure) usually combined, but potentially separate issues of “inventory tracking” and “work orders” and “shop paperwork.”
This is a bit funny. This lengthy digression came from wanting to state the idea simply. But that’s standard TOC, science, or problem solving. The right concise simple statement often can’t be created with certainty until all the relevant lines of thinking emanating from that simple statement have been explored and tested.
At this point, I’m still thinking today’s computer power is so vast — in terms of available addressable/swappable main/disk memory, cpu speed, interactive methods, and database methods — that a “perpetual schedule” that would really be a Perpetual Product Structure, Plant Structure, Inventory, Status, and Schedule Memory Object [update: this later became “Perpetual Schedule-Based Decision Support (SBDS) Data Set” memory object ] might well become the heart of a new, more coherent, and more effective data architecture for manufacturing inventory and related information systems. The traditional Parts Master File, Bill of Material File, Work Orders file, Routings file, Orders file, and other files would either stay the same or be changed, not sure yet, but, in any event, they would support the creation and maintenance of that Perpetual Schedule object [update: this later became “Perpetual Schedule-Based Decision Support (SBDS) Data Set” memory object ].
That’s what the idea in the late 1990s was. And I’m still thinking it may be right.
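A rough sketch of what that memory object might look like, as a hedged illustration only: the class name, field names, and the use of Python’s pickle to stand in for the save/restore-to-disk idea are all my assumptions here, not anyone’s actual schema.

```python
import os
import pickle
import tempfile
from dataclasses import dataclass, field

@dataclass
class PerpetualScheduleObject:
    # One in-memory object assembled from the traditional files:
    parts_master: dict = field(default_factory=dict)       # part_id -> attributes
    bills_of_material: dict = field(default_factory=dict)  # parent -> [child parts]
    routings: dict = field(default_factory=dict)           # part_id -> [process steps]
    orders: dict = field(default_factory=dict)             # order_id -> line items
    wip_status: dict = field(default_factory=dict)         # (part_id, step) -> status

    def save(self, path):
        # The whole memory object can be written to disk for backup/restore.
        with open(path, "wb") as f:
            pickle.dump(self, f)

    @staticmethod
    def load(path):
        with open(path, "rb") as f:
            return pickle.load(f)

ps = PerpetualScheduleObject()
ps.parts_master["X4300"] = {"description": "lawnmower"}
path = os.path.join(tempfile.gettempdir(), "perpetual_schedule.pkl")
ps.save(path)
restored = PerpetualScheduleObject.load(path)
print(restored.parts_master["X4300"]["description"])   # lawnmower
```

The point of the sketch is only the shape: the traditional files feed one long-lived in-memory object, which is kept updated in place and periodically saved.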
4:30 pm, jan 7. i put that time and date stamp here because there are a lot of different starting and ending points on this page. some of them ended as loose ends. this one produced something useful. that “perpetual schedule” paragraph above is a good point to have arrived at. it consolidates all the other stuff written before the “starting over after 4:30 pm” header line below in the document. just trying to make the page easier to review without big revisions. not easy to make big revisions. bad eyes. plus more interested in moving forward from this point than revising earlier stuff. plus the usual leaving the verbalization process in place in case that’s interesting.
That all means I wrote the above indented digression after whatever all this next stuff is:
As I started thinking about it again, I could see it’s like the inventory system used in systems based on Haystack Syndrome where each processing step gets a unique name … hm … possible problem … after all this buildup … good thing nobody knows this page is here … : ) … anyway, possible problem is the unique name includes order number … it has to be that for scheduling in order to sequence batches … ah, but that’s the scheduling image, not the new Parts Master File … unless for custom orders where, potentially, every step is a unique … need to think about this …
well, so much for just introducing the context of the discussion … we’re back into the nitty-gritty thick of the issue now …
The inventory system I had in mind was also like an extension to a schedule that a Haystack system or MES would have in main memory … a schedule that would become … and I’m remembering what I called it now I think in my October 1997 paper at Apics 97 about TOC and MES working together which was about this inventory issue but also about a lot more … toc principles, thinking processes, measurements, scheduling, shop control, and the nature of MES in a TOC system … anyway, “perpetual schedule” is the term I remember coining, starting to use, and starting to think from … i also, now that i think about it, remember talking to two software execs in Oct 96 about this … have to think about “perpetual schedule” again, though, to see if I’m still very impressed with it … it’s occurring to me that both all and no schedules are “perpetual” … they are all “the” schedule until they’re replaced by a new schedule and, eventually, every schedule has to be replaced with a new schedule … although, i can already, even before i post this update, hear the visual systems guys and gals shouting “NO, we DON’T need schedules at all any more.”
Well, maybe, if your manufacturing process and flow are simple enough and if you didn’t break your company’s bank account investing in making it that simple, and also depending on what you mean by “schedule” … this arena is chock full of narrow zealous folk who mean well, but get too emotionally connected to fads and themes that have a lot of validity in certain situations, but aren’t the only answer and often don’t apply as well as they think to their own situations …
yep, we’re back into the thick of it again … yeah, man …
Manufacturing Company Information Systems
Within all but the very smallest and very simplest of manufacturing companies, there’s an information system, or a cluster of related information systems, that allows the people of the company to operate the business effectively and efficiently.
Inventory Information System
At the heart of the financial and manufacturing management parts of the information system is the inventory information system. The heart of this system, as my friend and Apics colleague, Dick Ling, would remind me, is the “Parts Master File” (think of this as the master file of parts vs. the file of some guy or gal known as … a Parts Master … no such person in a manufacturing company … there are several departments and people who share use and responsibility for various parts of the company-wide Parts Master File.)
Finished Goods Inventory
Different departments and people use the inventory information system for different purposes. For example, the inventory information system tells the sales, customer service, and shipping departments how many of which kinds of products are completely ready to ship to customers. This may be a finished assembled product like a lawnmower, pump, or fan or a finished processed product like a screw or a nail. This information in the computerized inventory information system is usually called, “finished goods inventory,” or FGI. When the company’s accountants prepare their monthly, quarterly, and annual financial reports, they use this inventory information as part of reporting the value of the company’s “assets.”
Raw Materials Inventory
The same inventory information system tells the purchasing department how many of which kinds of parts and raw materials are in the warehouse ready for the manufacturing operations people to “release” to “the shop floor” to start working on in processing operations and using in assembly operations. This part of the computerized inventory information is usually called, “raw materials inventory,” or RMI, and usually (I think I’m remembering this right) includes both the “purchased parts” that are just used in assemblies and shipped without processing and the “raw materials” that are processed before assembly and shipping. The accountants also use this information when calculating the value of the company’s assets for their financial reports to upper management, shareholders, and auditors.
Work-in-Process Inventory (WIP)
There’s one other category of inventory that every manufacturing company has to keep track of called “work in process”, or WIP (pronounced “whip”). People sometimes just say, “WIP” and sometimes use the somewhat redundant term, “WIP inventories.” It’s this third category that presents the most complications for everyone concerned with inventory information management and use. That includes not only operations and accounting people, but also computer programmers creating software programs that allow input, updating, and processing of the WIP inventory information. What is “WIP inventory”? As its name — work-in-process — implies, this is the part of the inventory that has been “released” from the purchasing department’s shelves and bins to “the shop floor” for processing and assembly operations, but has not yet been completed and updated in the computer as “finished goods inventory.”
The WIP portion of a company’s inventory information management system is the subject of this page.
When I was active in the manufacturing information systems area in the late 90s, I was beginning to suggest an approach to replacing the structure of WIP inventory information management systems that I believed would be much simpler and more effective. I don’t know if the industry went in that direction in the meantime or not. If they didn’t, maybe they still should. At any rate, I thought I’d try to refresh my memory about what the issues were at the time with “work-order based WIP tracking, with work orders based on routings between bill-of-material levels” and about the simpler approach of … what should I call it? … I didn’t get as far as giving it a name … approach of … let’s call it what it is … “tracking WIP inventory as the results of individual processing steps.”
Some of the words will get in the way here. One of them will be, “work order.” It can mean different things. It can be paper printed out or just an electronic instruction. And there’s been a lot of just-in-time and other manufacturing “without work orders”, but not really. True, the MRP system’s “work orders” were not printed out and used, but even a kanban or empty bin in a two-bin system can be considered a “work order”, as can the “sales order” that gets all the “visual systems” moving. So there will be that kind of problem with words and what they mean to different people and in different contexts, but that’s routine stuff to sort out. Just a matter of figuring out what’s really been going on in reality (or what the reality could be changed to) and then working through the old and new words until the story’s clear. Same good stuff. Different day.
The Old Way of Dealing With WIP Wasn’t Wrong
Computer technology limitations gave rise to the old way of dealing with work-in-process inventory information. That’s limitations in hardware, software, and computer concepts. That includes the size and speed of addressable computer “main memory”, CPU speed, probably disk storage space, probably database management systems, and probably software and systems development methods.
A manufacturing company’s product information was organized into clusters of parts for assembly into final products like lawnmowers or assembly into sub-assemblies like the lawnmower’s motor or the lawnmower’s chassis. These clusters were called “bills of material.” Why “bills”? Not sure. Archaic term probably. It was probably used in the earliest manufacturing and just kept since it was familiar, so the computer version would seem as much as possible like what people were changing from. The other part of the company’s product information was called the “routing.” This was the sequence of operations performed on a raw material to turn it into a finished part for assembly or shipping as a spare part. An example would be a lawnmower handle. The “routing”, as the name implies, would specify the route the straight piece of tubing (a raw material) would take through the factory to be cut, bent, tweaked, ground, sanded, and otherwise worked on to become a lawnmower handle. That handle could be used in the lawnmower final assembly operation or be shipped as a spare part to a lawnmower repair shop. The “routing” would, for each processing step, contain information about which work center in the factory performed it, how much time it might take for setup of the job (setup time), and how much time it took to make each part (time per part, TPP). It might also contain some notes about processing, about special fixtures, and references to more detailed blueprints/drawings or other information. That’s for each step. The overall routing of the several steps would usually also contain some text info including references to more information, and also a place to hold information about lot sizing (how many, if not just one at a time, of this part should be made each time this part is made).
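To make the routing data concrete, here’s a hypothetical sketch of a routing for the lawnmower handle. The step names, work centers, times, and lot size are all invented numbers; the structure (per-step setup time and TPP, plus a lot size at the overall-routing level) is what matters:

```python
from dataclasses import dataclass

@dataclass
class RoutingStep:
    operation: str
    work_center: str
    setup_time_min: float      # setup time for the job at this step
    time_per_part_min: float   # TPP: time to make each part

@dataclass
class Routing:
    part_id: str
    steps: list
    lot_size: int = 1          # overall-routing lot size

handle_routing = Routing(
    part_id="HANDLE",
    steps=[
        RoutingStep("cut",   "SAW-1",    10.0, 0.5),
        RoutingStep("bend",  "BENDER-2", 15.0, 1.0),
        RoutingStep("grind", "GRIND-1",   5.0, 0.8),
    ],
    lot_size=50,
)

# Touch time for one lot through all steps (ignoring queue and move time):
lot_minutes = sum(s.setup_time_min + s.time_per_part_min * handle_routing.lot_size
                  for s in handle_routing.steps)
print(lot_minutes)   # (10+25) + (15+50) + (5+40) = 145.0
```

Notice how the lot size multiplies through every step, which is exactly why per-step vs. overall-routing lot sizing becomes a big deal.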
This lot sizing issue — at the level of the overall routing and at the level of the individual processing step — became probably the key aspect of the huge noisy big public nasty conceptual, technical, and professional wars over manufacturing management and manufacturing computer systems. Along with WIP tracking, management, and accounting, lot sizing — and whether it was different at the overall routing and individual processing step levels — was the key factor in the big public battles over issues like MRP (Manufacturing Resource Planning), JIT (Just-in-Time) and TQM (Total Quality Management) and “zero waste” and Zero Inventories and “visual systems” and “infinite vs. finite capacity scheduling” and MES (manufacturing execution systems) and — oh by the way — TOC’s schedule-based decision-support (Haystack Syndrome) architecture and strategy which included the TVA (TOC/Throughput Value Added) financial system, the drum-buffer-rope factory scheduling system, and the buffer management shop floor control system.
How could boring little details like “WIP tracking” and “work order vs. processing step lot sizing” have caused such a big highly-visible high-stakes global fuss?
I’ll tell you.
Change in Flow
Well, I’ve been writing around in different places on this page. It’s changed from an orderly exposition to a lot of little light bulbs going off in different places on the page. Rather than deal with overall flow of the page right now, I’m going to jump in at the re-starting point that seems interesting right now which is …
… That Memory Object in BAS in 1989 and 1990
The Haystack Syndrome was published in 1990. The evolving research prototype system that was used to think through the ideas in that book was first called BAS, for Business Analysis System. Though everyone was expecting a new finite scheduling system from Eli Goldratt in the late 80s, to follow on his work in the 70s with a system called OPT, what he was actually making was a manufacturing decision support system. He, as a physicist, had been working in manufacturing companies, run into obstacles related to the use of allocation-based cost accounting for making decisions, worked out a different and better decision basis (which he called, “throughput world”, and I later in my book called, “TVA Financial System, TOC or Throughput Value Added Financial System”), found very interesting relationships between the way many manufacturing decisions should be made and info that could only come from scheduling and shop floor control, and explained all of this in his book, The Haystack Syndrome: Sifting Information from the Data Ocean.
I later coined the term, “Schedule-Based Decision Support”, to describe what I also coined and described as “Haystack-compatible” systems. One main reason it was schedule-based decision-support is Eli made the point that you don’t know what the capacity of a factory is — expressed in terms of sales dollars, units of products in what mix with what due dates on the orders, with how many people/shifts working, with or without overtime — until you’ve made a schedule. I thought that sounded odd at first. I felt, well, people who work in factories know what their capacity is. But, when you think about it, he’s right. All of the figures people in the factories tell you about the capacity are due either to (1) what they remember or have records of actually having scheduled and gotten through the factory or (2) what they can project can be scheduled through the factory by making either a rough simplified or more detailed schedule that you might call a “simulation.” “Simulations” in decision-support systems are schedules made given a lot of assumptions about, well, everything, about all of the elements and parameters of planning and scheduling systems. Even a JIT visual system states its capacity by scheduling, though they’ll never admit it. When they consider adding a product line to a simplified flow, work cell, group technology process flow with visual whiteboard kanban and two-bin signalling to avoid — oh horrors! — ever using a computer, they are using the backs of their envelopes and pencils or spreadsheets to work through the scheduling of an assumed order load.
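Here’s a toy illustration of Eli’s point: the sales-dollar capacity for a period falls out of making a schedule, not out of a static figure. The order load, the hours available, and the naive first-come-first-served rule are all invented for the sketch; a real scheduler would be vastly more sophisticated, but the shape of the conclusion is the same:

```python
# Each order: (order_id, hours needed on the finite resource, revenue)
orders = [
    ("A", 30.0, 9000.0),
    ("B", 25.0, 6000.0),
    ("C", 20.0, 7000.0),
]
hours_available = 60.0

scheduled, used, revenue = [], 0.0, 0.0
for order_id, hours, dollars in orders:   # naive first-come-first-served
    if used + hours <= hours_available:
        scheduled.append(order_id)
        used += hours
        revenue += dollars

# Only after scheduling do we know this period's capacity in sales dollars:
print(scheduled, revenue)   # ['A', 'B'] 15000.0 -- order C doesn't fit
```

Change the sequencing rule, the order mix, or the hours, and the “capacity” answer changes too, which is exactly why capacity questions are really scheduling questions.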
Anyway, what that means is that, when we (the members of the Goldratt Institute’s BAS development partner program), were working with the “scheduling module” of BAS, we knew we were working with shop floor scheduling and the simulation engine for what I later called “schedule-based decision support.” hm … now wondering if i did really coin that … if it’s in haystack, no. if not, yes. not sure now.
Ok, that for context. Now to the starting point I was interested in.
The BAS scheduling module took several files of info about a factory, that came from several parts of the factory’s computer system, and combined them into a single block of data in main memory. the input files were something like … i can still see the diskettes … right, diskettes … 3 1/2 inch “floppy” but not really floppy diskettes … and the occasional pterodactyl flew by the window too … orders, resources, inventory, routings, bills of material, hours available … The BAS program converted all of these separate parts into a network that represented the scheduling situation that existed in the factory at a point in time. Because it was all in “main memory” programs ran fast. Back in 1990 — twenty years ago — limitations on the size and speed of main memory and … suffice to say it was a big advance over typical systems at the time …
and that, today, that kind of thing is taken for granted … what i mean is having LOTS of main memory that is fast and having FAST cpu is, ho hum, yes, we have that all day every day everywhere …
but that’s part of the point here … in fact, it’s the main point here … that kind of memory object can be the “perpetual schedule” [update: this later became “Perpetual Schedule-Based Decision Support (SBDS) Data Set” memory object ] … of all manufacturing computer systems that also … let’s see … getting close here … supports or is the new wip inventory system … probably IS the new WIP inventory system …
to test that, need to run the scenario of (1) the usual parts master file holding info for purchased stuff … no … somehow the parts master file gets changed too …
another point … can always save the memory object of “the perpetual schedule” [update: this later became “Perpetual Schedule-Based Decision Support (SBDS) Data Set” memory object ] to disk for backups and restores.
“the perpetual schedule” isn’t just a “schedule”, it’s the “schedule” plus the … maybe should coin a term, “perpetual product network” and “perpetual schedule” … the system will add routes/arrows, stations for new orders, and delete unused ones when orders are finished … [update: this later became “Perpetual Schedule-Based Decision Support (SBDS) Data Set” memory object ]
the “product network including wip” and the “schedule” are two different things …
still drafting …
note: going back from here and reviewing from the top led to the 4:30 perpetual product structure, plant structure, and schedule memory object expression of “perpetual schedule.” [update: this later became “Perpetual Schedule-Based Decision Support (SBDS) Data Set” memory object ]
#First interim conclusion
Starting Over After 4:30 pm (First Interim Conclusion)
All the above drafting, verbalizing, and thinking led to this first interim conclusion:
“At this point, I’m still thinking today’s computer power is so vast — in terms of available addressable/swappable main/disk memory, cpu speed, interactive methods, and database methods — that a “perpetual schedule” [update: this later became “Perpetual Schedule-Based Decision Support (SBDS) Data Set” memory object ] that would really be a Perpetual Product Structure, Plant Structure, Inventory, Status, and Schedule Memory Object might well become the heart of a new, more coherent, and more effective data architecture for manufacturing inventory and related information systems. The traditional Parts Master File, Bill of Material File, Work Orders file, Routings file, Orders file, and other files would either stay the same or be changed, not sure yet, but, in any event, they would support the creation and maintenance of that Perpetual Schedule object.
That’s what the idea from the late 1990s was. And I’m still thinking it may be right.”
Ok, so that’s good so far.
Painting a Nice Picture
I think a good way to proceed from here is to think of a Perpetual Schedule (a perpetual schedule memory object) [update: this later became “Perpetual Schedule-Based Decision Support (SBDS) Data Set” memory object ] as a painting to be created, as if the computer’s “main memory” were a canvas on which all the necessary elements are to be painted using paints/information from the various files in the computer’s database system, from various input devices, and from interactive users. It would be a good idea to do that for a single manufacturing company and then maybe do it again for the most typical types of supply chains.
Pretty Simple Factory Situation
Let’s assume we have a factory that has one customer who places one order at a time for a product that is assembled from two purchased parts. That simple enough?
Let’s make it a lawnmower manufacturing company. They buy a motor from one company and a chassis assembly that includes a handle, base, wheel, and blade from another company, assemble the two components and ship the assembled lawnmower to the customer.
We need to paint on our canvas the final assembly work center and the schedule of when it’s available to assemble lawnmowers. Let’s paint it in the center of the canvas.
On the right side, we write the customer order: Part Number (let’s call it, X4300) of the lawnmower, price, due date (Jan 10), number ordered (1), and number shipped so far (0).
On the left side we show the two purchased components: For each, a Part Number (M23 for the motor and C78 for the chassis), Purchase Prices ($25 and $20), and quantity on hand (zero for the motor and 1 for the chassis).
We don’t necessarily need the pricing information for the scheduling, but the prices are needed for other purposes.
It might be inefficient to put everything that might be needed in the main memory painting, but I’m not going to worry about that on this first pass through the scenario. I can easily envision a basic painting being formed in memory for scheduling and then, as various kinds of decisions arise, having additional info painted onto the canvas for a while to support the decisions and then erased when no longer needed. In fact, that should be an assumption in how this architecture will work. Some data will stay in the “perpetual schedule” all the time. Other data will be pulled out of the database, off of the hard disk, or from some other connected system, or from one or more people using the system, as needed.
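Here’s a small sketch of that paint-use-erase pattern, using the part numbers and prices from the scenario above. The dict-as-canvas and the function names are just illustration, and the “price database” dict stands in for whatever lives on disk:

```python
# The canvas keeps the scheduling basics all the time:
canvas = {
    "order": {"part": "X4300", "due": "Jan 10", "qty": 1, "shipped": 0},
    "components": {"M23": {"on_hand": 0}, "C78": {"on_hand": 1}},
}

# Pricing lives in the database, off the canvas, until a decision needs it:
price_database = {"M23": 25.0, "C78": 20.0}

def paint_prices(canvas, db):
    # Pull decision-support info onto the canvas for a while.
    for part, info in canvas["components"].items():
        info["price"] = db[part]

def erase_prices(canvas):
    # Erase it when the decision is done.
    for info in canvas["components"].values():
        info.pop("price", None)

paint_prices(canvas, price_database)
material_cost = sum(i["price"] for i in canvas["components"].values())
erase_prices(canvas)
print(material_cost)   # 45.0 -- and the prices are gone from the canvas again
```

Scheduling data stays resident; decision-support extras come and go.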
The order’s not due for three days and the motor, though on order, hasn’t arrived yet, so the factory manager, assembly supervisor, parts dispatcher, production planner, quality control expert, defense department inspector/auditor, and purchasing manager are playing cards, nickel-ante poker. Tough life.
In the computer database, there’s a Purchase Orders file with that one open purchase order (PO) for the one motor that has a due date of today. It should show up any time now. There’s also a record for the closed PO for the chassis that’s already been received earlier today.
Ok, we need layers in this image. So we’ll shift from a canvas … no, we can keep the canvas for the bottom layer, but then have sheets of glass over the canvas for layers. Why do we need layers? Well, we need at least two so far — one for, from left to right, the raw materials and purchasing receiving room, the final assembly work center, and the finished goods storeroom and shipping department. The other layer — the first layer of glass — is for the concept of the order for the product, the process step that creates the product that fulfills the order, the raw materials that are released into the assembly, and the three arrows that connect them into a product flow.
We may need other layers later. Not sure yet. Maybe for schedule, or for testing changing a schedule, or testing adding a new order to an existing schedule (constraints-based order promising).
Ok, I’m more sure now about some of these. We want the bottom layer (the canvas) for the physical view. First glass layer for the product structure view: materials to be released into “legs” of the product structure, arrows from materials to material release process steps, arrows between process steps, and arrows between final product process step and order line items. Second glass layer is the schedule in effect. Third glass layer is the new schedule or new other case being considered for evaluation for operational and/or financial effect.
So, there may be others, but we’ll have at least three layers all the time and a fourth when considering various kinds of decisions.
That second layer uses images of "arrows" to connect orders (actually, line items of orders) to process steps that create final products, and to connect parts that are used in the final assembly process step to that process step. "Arrows" is a concept used in the BAS/TheGoalSystem. Maybe also in "Resonance". Maybe also in Haystack Syndrome. It's a useful image and concept to use in the new architecture. (There are some issues here involving differences when common parts exist vs. custom orders. Not sure how they affect things. Been a while. I'm going to skip over those for now.)
There are three arrows for this single customer order. One from each purchased sub-assembly to the final assembly process step. Notice the arrow is to the process step, not to the final assembly resource itself. The third arrow is from the final assembly process step to the order (line item).
The relationship we’re calling, an “arrow”, is represented by a data structure — a “record”, a “row”, in a relational database table. Almost everything we’re doing involves a row in a table in a relational database. I can’t right away think of anything we’re likely to deal with in all of this that doesn’t involve “rows” (or “records”) of “data elements” in the “tables” in “databases” of relational database management systems (dbms).
The Arrow data record has a unique identifier, a “from” location (base of the arrow), a “to” location (the tip of the arrow), and a quantity of material going from the one location to the other.
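To make the Arrow record concrete, here's a quick Python sketch of an Arrow row and the four arrows for the single-order lawnmower structure. The class and field names are my own placeholders for illustration, not from BAS/TheGoalSystem or any real dbms:

```python
from dataclasses import dataclass

# A hypothetical "Arrow" row: one record per material flow from one
# location (part, process step, or order) to another.
@dataclass
class Arrow:
    arrow_id: int      # unique identifier (primary key)
    from_loc: str      # base of the arrow (part or process step)
    to_loc: str        # tip of the arrow (process step or order line item)
    qty: int           # quantity of material flowing along the arrow

# The four arrows for the simple lawnmower product structure:
arrows = [
    Arrow(1, "wheels",   "assemble", 4),
    Arrow(2, "motor",    "assemble", 1),
    Arrow(3, "chassis",  "assemble", 1),
    Arrow(4, "assemble", "order",    1),
]

# Total quantity of purchased parts flowing into final assembly:
into_assembly = sum(a.qty for a in arrows if a.to_loc == "assemble")
print(into_assembly)  # 6
```

The point is just that "almost everything we're doing" really is a row in a table: an arrow is nothing more exotic than an id, two location references, and a quantity.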
Let’s change the situation a bit to make the “arrow quantities” more meaningful. Our factory orders motors from one company, chassis from a second company, and the four wheels from third company. So the four “arrow” “records” for this product structure will now look like:
from        to          qty
wheels   —> assemble     4
motor    —> assemble     1
chassis  —> assemble     1
assemble —> order        1
What About Inventory Systems?
But the original idea was inventory systems. What's all this telling us about inventory systems so far? Not sure. But … Hmm … that's good. If my pal, Bob Vornlocker, were here, I'm sure he'd be saying something like the Parts Master File could become a lot more dynamic with maybe process steps ("stations" in BAS/GoalSystem parlance) being added or deleted as needed … or maybe use traditional work orders in ways that don't interfere or add unnecessary steps for operations … maybe, once we have the perpetual schedule in memory, the old style work orders could be used in a supporting role vs. be the main players … not sure …
It’s going to be easier to think about the inventory tracking within the perpetual schedule memory object vs. using traditional work orders if we put an I-shaped multi-step routing into the picture instead of just having material release and order fulfillments right away. So let’s add two axles to our lawnmower … actually, they have four little axles, one for each wheel, so let’s do that. We’ll assume we buy raw material in the form of steel rod already cut to length (we can add buying longer rod and cutting it in the shop later). So, inside the shop, let’s assume the purchased steel rods get processed in five different work centers each with its own process step. It doesn’t have to make sense, so let’s just use: (axle 1) flatten one end, (axle 2) rough grind, (axle 3) drill hole for cotter key, (axle 4) heat treat, (axle 5) clean, and (axle 6) apply threads. So there’s 6 process steps.
the new arrows file/table for the product and the one order becomes:
from         to          qty
steel rod —> axle 1       4
axle 1    —> axle 2       4
axle 2    —> axle 3       4
axle 3    —> axle 4       4
axle 4    —> axle 5       4
axle 5    —> axle 6       4
axle 6    —> assemble     4
wheels    —> assemble     4
motor     —> assemble     1
chassis   —> assemble     1
assemble  —> order        1
That’s an Arrows table view. More intuitive would be the product structure and flow view. For the axles, it looks like this:
steel rod -> axle 1 -> axle 2 -> axle 3 -> axle 4 -> axle 5 -> axle 6 -> assemble
Maybe I’ll break out MS Paint to draw the other parts of this. Basically, draw a circle or box to represent assemble. Have wheels, motor, axles, and chassis all come in from the left to the circle. Have an arrow going out of the “assemble” circle to the right to the order and that’s how visualize the “product structure” of an order from raw materials, through process steps, through assembly, to fulfillment of the order. Here it is:
Ok, with that "leg" of "process steps" that gets a release of 4 steel rods, processes them through 6 steps, and delivers them to assembly, we have a nice basis for thinking about "work orders," WIP, status posting of part and batch completion, lot sizing, splitting batches, overlapping batches, just in time, kanban, two-bin, and visual systems.
The way the traditional work orders worked was this. We'll use the axles leg for the example. Let's say there are 10 steel rods in the storeroom, with the 10 showing in the parts master file for that part number. To get the 4 steel rods we need issued ("released") to the shop to work on, a new data structure gets created called a "work order." The "work order" contains all six process steps for the axle. It's essentially a copy of the "routing file" for the axle "part". The way computers work for something like this is that they use what's called a "transaction," a word that has a meaning similar to, but also different from, our everyday meaning for the word. In computer-speak, a "transaction" is a related set of changes to a database that all have to be done before any of them can be made permanent. So, in this case, a change is made to the parts master file to subtract 4 units from the 10 showing in the database. Then the computer increases the number of steel rods on the new work order from its initial zero to 4. So the accounting for those 4 units of steel rod has been shifted from the Parts Master File to the Work Order File. When both of those updates have been made, the transaction is made final and permanent. At that point, the 4 steel rods are physically taken out of inventory and given to the first work center that's going to work on them, the work center that will do the process step called "axle 1", which is the hammering to flatten one end. The important thing, from an inventory view, is that those four parts are no longer "raw materials inventory (RMI)" accounted for in the Parts Master File; they are now Work in Process (WIP) inventory accounted for in the Work Orders file. That's important and has a lot of consequences for different aspects of what system programmers and users want to know and do.
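That "transaction" idea can be sketched in a few lines of Python. All the table structures and the function name here are invented for illustration; a real system would use actual dbms commit/rollback rather than in-memory dicts:

```python
# Hypothetical in-memory "tables"; a real system would wrap this in a
# DBMS transaction with commit/rollback instead of plain dicts.
parts_master = {"steel_rod": 10}   # part number -> qty on hand (RMI)
work_orders = {}                   # work order id -> qty at each step (WIP)

def release_to_work_order(part, qty, wo_id, routing_steps):
    """Move qty from raw material inventory onto a new work order.
    Both updates succeed together or not at all (the 'transaction')."""
    if parts_master.get(part, 0) < qty:
        raise ValueError("insufficient stock; transaction rolled back")
    # Step 1: subtract from the Parts Master File
    parts_master[part] -= qty
    # Step 2: create the work order (a copy of the routing) and put
    # the released quantity at its first process step
    work_orders[wo_id] = {step: 0 for step in routing_steps}
    work_orders[wo_id][routing_steps[0]] = qty
    # The accounting has now shifted from RMI to WIP.

release_to_work_order("steel_rod", 4, "WO-1001",
                      ["axle 1", "axle 2", "axle 3",
                       "axle 4", "axle 5", "axle 6"])
print(parts_master["steel_rod"])          # 6
print(work_orders["WO-1001"]["axle 1"])   # 4
```

The stock check at the top is what makes the all-or-nothing property matter: if it fails, neither table changes.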
Although, I think having the perpetual schedule memory object in memory is going to solve a lot of what used to be those consequences. Consider that discussion of “transaction.” As the work order is formed and added to the Work Orders file, and 4 units of steel rod are removed from the Parts Master and added to the first process step of the work order, the transaction can also be programmed to update the inventory position on the perpetual schedule memory object.
That seems a bit redundant but I'm going to leave it there for now. Just keep letting work orders work their usual way, except add automatic updating to the perpetual schedule in memory … that may be one way to implement it that works in a lot of cases where the re-programming or people resistance involved in turning off work orders might be too high … "turning off work orders" … that's what's done in some TOC drum-buffer-rope implementations … they must just post out of one Parts Master File record and then just post into another Parts Master record and have a procedure for whether they do that at shop release or arrival at assembly or shipping. that seems like it leaves open the possibility of error. the transaction to create a work order, with a quantity and a due date, includes adding a record to the Work Order File and decrementing the raw material quantity in the Parts Master. closing out a work order transfers the stock to the Finished Goods record in the Parts Master file.
[not sure about this … just bracket and indent it to get it out of the way for now … perpetual schedule might not be the best term after all … something should say perpetual wip and other inventory that's relevant to the current and possible revised schedules … that longer name — the "perpetual product structure, plant structure, inventory, status, and schedule memory object" — is correct, but too long, for frequent use …] another point is that work that needs to be done at a process step has two identities: one has to do with physically what will need to be done, for example: an axle not yet started needs all the process steps, or an axle that's completed process step axle 4 needs steps 5 and 6, or a finished axle is done. the traditional work orders, if status is posted along the work order, track this.
(it’s important to point out that status is often NOT posted on work orders … status posting on work orders became part of the big “non-value-added activity elimination” crusade during tqm and jit and process improvement projects, and rightly so … drum buffer rope doesn’t require status posting in non-constraint areas either …)
a process step that needs to be done to satisfy an order has an identity as being associated with a specific customer order and due date, which comes from an allocation process …
I’m getting into all the lovely variables again here … whether to status post … whether to print out paper work orders or just have them electronic … same for process step instructions … To make getting through one pass simpler, I’m going to use the relatively simple ideal case of paperless operation, bar code or even RFID status posting, or no posting … or not …
A Second Interim Conclusion
let’s go back to the lawnmower’s 4 axles moving along the 6-step routing with work orders working as traditional except that — and this is the second interim conclusion — all the work order transactions involving inventory (open, decrement raw material parts master file, status post progress along routing, close and increment finished parts in parts master file, loss, and scrap) are modified to now include also updating the in-main-memory perpetual schedule. [update: this later became “Perpetual Schedule-Based Decision Support (SBDS) Data Set” memory object ]
So Let’s Play Out One Order
So let’s play out one order.
The plant starts out empty and with no customer orders. The guys and gals in the plant are playing cards waiting for some orders to work on. Raw material inventory is: 10 steel rods for axles, zero motors, 1 chassis and 6 wheels. No finished goods inventory and no WIP. No open work orders.
Then along comes an order for 1 model X4300 lawnmower. When the sales order is entered with X4300, quantity 1, due date Jan 10, the computer (the one we're designing) paints the order on the right side of the first layer of glass over the canvas. It then checks the database for X4300 and finds one arrow to the assemble process step on the final assembly resource. The computer paints the assemble process step and the arrow on the glass. The computer then goes into the database … oops … i forgot the stock check at the order level. Back up …
When the order is entered, the computer first checks the Parts Master File for whether a finished X4300 lawnmower is already sitting in finished goods inventory. If there's one there, and there are no other orders claiming it, there's no need to build a product structure on the "first layer of glass" in main memory to cause the factory to build one. But, ok, when the computer scans to the row of the Parts Master File for finished X4300 mowers, the quantity it finds is zero. So now the computer paints the final assembly assemble process step and the arrow with quantity of 1 to the order.
Next, the computer checks the Bill of Material File for rows starting with X4300, finds rows for motor quantity 1, chassis qty 1, wheels qty 4, and axles qty 4. It creates arrows for all four part types and then checks stock for those quantities. For chassis, it creates an arrow with quantity one, needs one, finds 1 in the Parts Master File, and it "claims" or "allocates" the 1 chassis that's in stock. For wheels, it creates an arrow with quantity 4 and it "claims" or "allocates" 4 of the 6 in stock. For motor, it creates an arrow with quantity 1 and it needs one and finds zero in the Parts Master File, so it creates either an alert message to buy one or automatically creates a purchase order or EDI order, depending on the system. For axles, it checks for finished axles in the Parts Master File, finds zero, and creates an arrow with quantity 4 from the axle 6 process step to the assemble step. It then creates the arrows from raw material to axle 1 process step through axle 6 process step, all with arrow quantity 4.
In the system we’re designing, all of this checking of Parts Master File, comparing of quantity needed and available, marking stocks as “allocated” or “claimed”, creating notice of needed purchase orders or creating the purchase orders themselves, and painting the arrows, arrow quantities, and process steps — happened as a single transaction triggered by entering the sales order for the lawnmower.
Oh, and one other thing the transaction would do is turn on a flashing light and ring a bell so the plant employees would know it’s time to stop playing cards and start building lawn mowers.
The final assembly worker and supervisor check the system and see that they have the wheels and chassis they need, but the motor they need isn't yet in stock. The purchasing manager sees the "purchasing advice" given by the system and places or checks on the purchase order for the motor. The final assembly worker and supervisor also see they don't have the axles they need, so they go back to playing cards. The supervisors and workers in the work centers for the (axle 1) flatten one end, (axle 2) rough grind, (axle 3) drill hole for cotter key, (axle 4) heat treat, (axle 5) clean, and (axle 6) apply threads process steps see they have work to do to convert the steel rod into axles and get them to the final assembly.
The parts dispatcher (or, in a lean environment, the operator of the first process step) goes to the computer system and creates a work order for 4 axles, pulls the stock of 4 pieces, and takes it to the hammer work center for the first step. When the work order is created, the computer automatically, as part of the transaction, reduces the stock in the Parts Master File from 10 to 6, and updates the inventory in the perpetual schedule in main memory.
Now there’s a choice to make. Does the first work center set up for the batch of 4, do all 4, and then send 4 completed parts to the next work center? If so, that’s the slow way. Another way is for the first few work centers to set up at times that let the first part move down the line without waiting between work centers. That’s called overlapping. The batch is overlapping across multiple work stations. That dramatically decreases the amount of time it takes the first part to arrive completed at the end of the routing which is sometimes, not in this case, a great benefit. In this case, there’s only very small benefit, if any, to overlapping, but we’ll assume they still do it so they can get the batch done and get back to their card game.
They have several choices concerning status posting the progress of parts along the routing. The traditional computer software provides a transaction for “completed process step” that allows the operator to state how many were completed. The computer then, for example, decrements the 4 at the beginning of the routing to 3 and increments the stock after the first step from zero to one. As the first part moves down the routing, and IF (big if) they are doing “status posting”, the number one moves along the process steps.
The system we’re designing here would, whenever status is posted, not only change the quantities along the work order’steps, but also update the quantities on the perpetual schedule in main memory.
That can work.
What Bob Vornlocker would tell us is that making these and other additions to the transactions in the standard manufacturing database system would keep the data set the Goal System or any Haystack-compatible system would need to perform its functions always available and up-to-date in main memory.
That’s the right answer.
That makes Haystack-compatible computing, Schedule-Based Decision Support (SBDS), the standard, a standard that allows more or less elaborate planning and scheduling and shop control depending on the individual plant and product structure. Use work orders or don't. Print paper work orders or don't. Status post not at all, at selected points, or — using bar code or RFID scanners — at all useful points. "Non-schedule" everything from the shipping buffer using due dates and visual systems, or maybe schedule the occasional mid-plant constraint when demand exceeds even the JIT-inspired investment in excess (vs. protective) capacity, or not. But, whatever you decide on these issues, have the various transactions in the manufacturing system be modified and expanded to include creating and maintaining in main memory the perpetual data set required to create and maintain schedules and to facilitate decision support. [update: this later became the "Perpetual Schedule-Based Decision Support (SBDS) Data Set" memory object ]
“Perpetual Schedule-Based Decision Support (SBDS) Data Set” vs. “Perpetual Schedule”
I think that’s getting closer to the right term for that memory object we want standard manufacturing computer systems to provide tools for creating, continuously maintaining, and using.
More Conclusions (Fifth Installment)
If Lisa S, Mike B monitor, and Chief of CincyCom were here, they'd, respectively: wonder if we're making any changes to the data structure, now that we've verified that work orders and the perpetual SBDS data set memory object can peacefully coexist and that many standard transactions will be expanded to create and maintain that object; save us by getting us out of a thinking loop, proposing that what separates a part that gets a routing from one that doesn't is simply that the former starts out as one thing and ends up as another, vs. just being and staying what it is; and argue that, conceptually, i.e., along the lines of E F Codd relational database concepts, the parts aspects of in-process WIP parts at their process step stages belong in the Parts Table of a relational database along with the PPI parts and RMI parts and FGI parts (and those parts aspects should not be only, or even primarily, or even also, in other tables like Work Orders), while the processing aspects (setup times, process time, resource used, instructions, links to more instructions) can remain in other tables, and also that each process step for each in-process part can have both an in-process row and a completed row in the Parts Table. Which means the Parts Master grows to include the routing process steps, with an in-process and a completed row for each routing process step; WIP quantities are maintained in the Parts File and not the Work Orders File; status posting transactions update the Parts Master File and not the Work Orders File [hm … what's the role/use of work orders … to get batches onto the floor … ok] … and maybe more, like the data structure built and maintained in main memory by the base system; that's new. So, yes ma'am, Ms Scheinkopf, if you were here: the standard data structure for inventory management is likely to change based on these discussions.
Oh, and if Eli G. were here, he'd say, ok, CincyCom's right to say the new expanded Parts Master File, which now will include rows for in-process and completed routing processing steps, will be more like the Haystack/Sbds/BAS/GoalSystem data set memory object because it has completed parts for processing/routing steps.
Seems good so far.
We’ll see if it still looks good tomorrow. “)
The Tomorrow (Later Today) View
Well, it still looks good at 10:43. "It" being modifying the tables and transactions of the standard manufacturing database and software systems around the world as discussed above.
I was having a little trouble at first getting used to the idea that the Parts Master File would begin having so many additional rows/records due to treating each of the routing steps as a separate “part.” But I got to thinking about what CincyCom might say if he were here and it’s an unavoidable fact that a part that gets released from raw material inventory and starts to be processed at the first process step on the routing is a physically different item. Start to hammer on a piece of steel rod, for example, and it’s no longer that unhammered-on steel rod you bought from your supplier. When the hammering is done and the part meets the specifications for that process step, it is still not the unhammered-on raw material item, but it’s also not yet the finished hammered, drilled, ground, tapered, and whatever-we-do-to-it axle. It’s the part called “axle after process step 1.”
In a real system, the axle's numbers would look more like this: the steel rod raw material would have a part number like SR3455. The finished axle would have a part number like AX125. The six process steps would have numbers like: AX125-001, AX125-002, AX125-003, AX125-004, AX125-005, and AX125-006. So, to restate the above argument about it making sense that the new Parts Master File will contain rows/records for in-process "parts" at various stages of their processing: When, in routing step AX125-001, you start hammering on steel rod raw material item, SR3455, it ceases to be an unhammered-on SR3455 and becomes something else, a hammered-on SR3455. When the hammering is finished and the one end is flattened out to specifications for being an AX125-001 … ok, we need that other new part number … when it's being hammered on, but doesn't yet meet specs, it's carried in the Parts Master File as an AX125-001ip … "ip" for in-process … could have been AX125-001cbh for "currently being hammered," but better to work with the more general "ip", or just "p" … yes, just "p" … So the transaction software used when removing steel rod raw material parts from the stock bin (I don't dare say "stock room" ever since the JIT/TQM heyday … just kidding … one can say the s-word these days … my friend, Paul N educated me on these things …), and giving them to the hammering work center, creates a new row in the Parts Master File/Table for the new part, AX125p (no sense in having "p" or "in process" rows for every routing step of every part if that part's not being worked on at the moment. better to create it when it's needed and delete it when it's not, right? eliminate waste? Paul N? see, I'm on board for eliminating waste too … when it is waste, when it is non-value-adding in the sense of cause-effect toward more TVA now and in the future … but I digress?
not really … onward), increments the new row’s initial zero quantity field to 4, and then decrements the quantity field in the Parts Master File/Table for the unhammered-on SR3455 from 10 to 6. The new standard transaction also updates the “perpetual SBDS data set” object in main memory. The system’s now showing, in both the Parts Master File/Table and the pSBDSds … like that? … i’m not sure yet either … maybe “pds” for “perpetual data set” as a nickname for the “psSBDSds” abbreviation? … “pds” as “perpetual data set” that includes a “parts data set”? … anyway, the system’s showing, in both the Parts Master File/Table and the PDS in memory, that 4 … 4 what? … since we’re getting so comfortable adding temporary rows in the Parts Master File/Table …
also, may as well just call it the Parts File/Table since, without having parts on work orders in the same way or for all the same purposes, now all the parts are in the Parts Master File, so we’ll call it the Parts File/Table … and we may as well, while we’re at it, get into the 21st century on files vs. tables, and, since relational database tables can be in various files for lots of technical reasons appropriately invisible to the user, and since table with rows/records and columns/dataElements is the user point of view anyhow, we’ll just start calling the Parts Master File/Table the Parts Table. …
where was I in that sentence? …
… we may as well have our "release from raw material inventory" transaction create other little useful temporary rows that help us meet all the practical little things some of the MES help with … first create an AX125r or i row that says it's released … the real systems designers are going to have to go over all these new rows and transaction functions … the industry can't just keep using this web page for final programming specs, jeez … anyway, let's try some stuff … an AX125i or r row with quantity 4 that says the material's been issued out of stock and is intended for the AX125-001 processing step in the hammering work center … also, a new row, with yet another new suffix …
we should start over because these new part numbers should be self-alphabetizing or they’ll come out confusing on display screens and reports and we don’t want to be adding extra sort routines and relational table keys and stuff … so i think we agree globally to a convention that a “step 000”, in this case, step AX125-000, is in order that will carry various suffixes for things going on before process step AX125-001 gets started. AX125-000r or i is released or issued from stock and is now somewhere physically between the stock bin and the hammering work center. next new row, AX125-000r2 or i2 with quantity 4 is “
no. better just have 000a, b, c, d, e, f, g with a reserved for "issued/released but not yet at first work center", b, c, d, and e left there in case some local situation has issues involved in getting from stock area to work center, f is reserved for "at work center", g, h, i, j reserved for special uses i'm not anticipating, then k through z are for tooling, fixtures, paperwork, permissions, etc. So the people using the system will get used to the idea that they'll see an AX125-000a and AX125-000f every time, and that anything k and above is fixture, tool, special labor, permission, government/aerospace auditor there … right, and these "parts" … ah … ok, potential negative branch … are we violating the ef codd/tomNies principle against having things in a table that aren't like the other things in the table? … I think CincyCom, if he were here, he'd say it's ok … because "part" is not only similar vs different physical things to pick up … "part", to the philosopher, plato/socratic theory of the, what is that, of the idea … anyway, "part" is really "requirement", "necessary condition" … so they're ok, in fact, they're brilliant …
but are all of them temporary like we’ve been saying, and, if so, where do they come from? the a and f and j-z … if CincyCom were here, he’d say I’m thinking 1980s and prior 16k RAM constraints and am irrationally trying to limit the number of letters … and he’d be right …
so, starting again … phantom processing step AX125-000 gets suffixes. AX125-000-1-WhateverItIs … that’s too much, of course, but it makes the point. we want to make the suffixes self-sorting alphanumerically … long enough to be clear, but not so long that they won’t fit on display … AX125-000-1-rel for “released, but not yet at hammering work center” … AX125-000-2-reserved … these things can be established in setup programs … i’ll just assume that, at my little lawn mower company, we just need release and avail for movement, and then a fixture and a special tool … these could be in the Routings Table on a permanent basis and created temporarily in the Parts Table when needed … update: probably the fixture and tool aren’t in routing table permanently. if anywhere, in “parts” since they’re “parts,” arrowed into the process steps that require them
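The self-sorting point is just that zero-padded numeric suffixes sort correctly as plain strings; a tiny check using the part numbers improvised above:

```python
# Suffixes like "-000-1-rel" vs "-000-2-reserved": zero-padded numeric
# parts make a plain string sort match the intended physical order,
# with no custom sort routines or extra table keys.
part_rows = [
    "AX125-000-2-reserved",
    "AX125-000-1-rel",
    "AX125-102",
    "AX125-101",
]
ordered = sorted(part_rows)
print(ordered)
# The release row sorts before the reserved row, and step 101 before
# step 102, exactly as they should appear on screens and reports.
```

For longer routings the step numbers would need enough digits (e.g. -101 through -199) so that plain alphanumeric sorting keeps working.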
so how’s this all going to work? let’s start again from no orders yet, everybody playing cards again waiting for something to work on …
customer places one-line-item order again for 1 lawn mower. "enter order" transaction checks stocks, creates process steps and arrows on the first glass layer over the canvas. the purchase advice for the needed motor goes out again, release allocations are incremented 1 for chassis again, 4 for wheels again, and 4 for steel rod for axles again. knowing they don't have all the parts they need yet, the final assembly worker and supervisor go back to playing cards. again. the parts dispatcher sees the hammering work center needs the 4 steel rods to start hammering on. he goes to the stock area, pulls the 4 parts, and uses the computer to initiate a "release parts to shop floor" transaction. he does this by going to the screen for "release advice [from the computer] about what parts need to be released to the shop floor." at the top of the list is the 4 steel rods to go to the hammering work center, associated with processing steps for part AX125, associated with order number ORD-98765-001. He selects that item of release advice and presses enter. The computer now goes to work on the transaction. It decrements the quantity on hand of SR3455 from 10 to 6, decrements the "claimed/allocated" quantity in the row for SR3455 from 4 to zero, both of these as "updates" to the Parts Table. Next, the computer creates a new row in the Parts Table for part number AX125-000-01-release and increments its initial zero quantity to 4. It also creates a new row, AX125-000-02-avail (available at work center), and leaves its initial quantity at zero. Next, the computer, as part of the same "release parts to shop floor" transaction, checks the Routings Table for rows beginning with AX125. It finds 1 for the needed fixture, 1 for the needed tool, and 6 for the actual processing steps. So it creates 8 additional new rows in the Parts Table for: AX125-000-03-fixture, AX125-000-04-tool, AX125-101, AX125-102, AX125-103, AX125-104, AX125-105, and AX125-106, all with initial quantity zero.
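The whole "release parts to shop floor" transaction, as walked through above, might be sketched like this (row layouts, keys, and names are illustrative only, not final programming specs):

```python
# The Parts Table before release: raw stock plus its allocation.
parts_table = {
    "SR3455": {"on_hand": 10, "allocated": 4},
}
routings_table = {
    "AX125": ["fixture", "tool",
              "AX125-101", "AX125-102", "AX125-103",
              "AX125-104", "AX125-105", "AX125-106"],
}

def release_parts(raw_part, qty, finished_part):
    """One transaction: relieve raw stock, clear the allocation, and
    create the temporary in-process rows in the Parts Table."""
    row = parts_table[raw_part]
    row["on_hand"] -= qty
    row["allocated"] -= qty
    # New rows for the released material and work-center arrival
    parts_table[f"{finished_part}-000-01-release"] = {"on_hand": qty}
    parts_table[f"{finished_part}-000-02-avail"] = {"on_hand": 0}
    # One new zero-quantity row per Routings Table entry
    # (fixture, tool, and the six processing steps)
    for i, entry in enumerate(routings_table[finished_part], start=3):
        key = (entry if entry.startswith(finished_part)
               else f"{finished_part}-000-{i:02d}-{entry}")
        parts_table[key] = {"on_hand": 0}

release_parts("SR3455", 4, "AX125")
print(parts_table["SR3455"])                 # on_hand 6, allocated 0
print(parts_table["AX125-000-01-release"])   # on_hand 4
print(len(parts_table))                      # 11 rows now
```

That's 10 new rows from one release (the 2 movement rows plus the 8 from the routing check), all temporary, all deleted again when the work is done.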
[Update: maybe the fixture and tool are "parts" and are just in the PartsTable all the time. If my Albany MES developer friend were here, as I wonder if fixtures, tooling, etc should be in the system vs just informally "handled" by operators, he'd say our experience with having it in the MES is that now they know where the tooling is, it doesn't get misplaced or remade for no reason, etc, which would sell me on having tooling and fixture "parts" in the "parts" table and in the Bill of Material Table as yet another part required at that process step. that means the fixture/tool "parts" are already in the product structure painted on the glass at order entry time and don't need to be created in the Parts Table at main part release time.] [another update: if my texan golfing disney-bashing furniture-making pal were here, he'd say, hey, tom, about these fixtures and tools, that's a lot of what a work center is, so some fixtures will require thinking about and dealing with as "parts" with arrows to process steps and some won't, and he'd be right about that too]
The computer keeps rolling to now paint this new information into the pds, the pSBDSds, the perpetual data set, the perpetual schedule-based decision support data set object in main memory … but wait … isn’t a lot of it already there on that first layer of glass? yes, we said it was created by the “enter sales order line item” transaction. So that means the “enter order line item” transaction checked the Routings Table and Parts Table too to get the information for the stock checks, the names/numbers of the processing steps, the names/numbers of the raw material items, and to make all the arrows, right? right. sometimes limited availability of a raw material is the constraint used to create the schedule of the factory, so that got painted on the glass. sometimes lack of a fixture or tool can be the constraint. that has to be dealt with in scheduling too. so that means all our extra AX125-000-suffix rows got copied to the glass during the “enter sales order line item” transaction.
that means the only thing on the “glass layer” in main memory that needs to be updated is the SR3455 quantity on hand from 10 to 6, the allocated/claimed quantity from 4 to 0, and the AX125-000-01-release quantity from zero to 4.
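that delta-update idea can be sketched too. this is illustrative only, with invented field names: since order entry already painted the structure onto the in-memory layer, the release transaction only pushes the three field-level changes instead of rebuilding anything:

```python
# Hypothetical in-memory "glass layer"; the structure (steps, arrows) was
# already painted at order entry, so only quantities change at release time.
glass_layer = {
    "SR3455.on_hand": 10,
    "SR3455.allocated": 4,
    "AX125-000-01-release.on_hand": 0,
    # ... process steps and arrows already painted at order entry ...
}

def apply_deltas(layer, deltas):
    """Apply small field-level changes instead of repainting the layer."""
    for field, new_value in deltas.items():
        layer[field] = new_value

apply_deltas(glass_layer, {
    "SR3455.on_hand": 6,
    "SR3455.allocated": 0,
    "AX125-000-01-release.on_hand": 4,
})
```

the design point is that the expensive work (reading the Routings and Parts Tables, building the structure) happened once at order entry; subsequent transactions are cheap field updates.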
That’s it. And now the computer puts a message on the screen that the “issue raw material parts to the shop area” transaction was successful. the parts dispatcher gives himself a high five … can you picture that? … picks up the 4 steel rods and starts walking with them to the hammering work center. meanwhile, the final assembly folks are still playing cards.
when the parts dispatcher gets to the hammering work center, he finds she’s ready and waiting for him, with hammer in the air, ready to hammer away on those steel rods …
What about the work order?
Given that, what’s the new role, if any, of the traditional shop floor “work order”?
What was the traditional role?
What is it now?
So do “work orders” go away completely?
But, depending on what you have in mind when you think of the term, “work order,” the answer might be, yes.
The “process steps” painted on the first layer of glass over the canvas are, in a sense, “work orders”, but created in a different way, put in a different place, and having potentially different quantities that are subject to change from subsequent re-scheduling/re-allocation of available stocks at various levels. (That’s a big improvement over the old kind of open work orders that had to be treated as having a fixed quantity regardless of shifting circumstances, and that carried some pressure to finish them, wasting short-term capacity, even if the orders calling for them were cancelled. By the way, the new “r” suffix code for “released”, or maybe the “i” suffix code for “issued from stock to the production floor”, and the “p” suffix code for “in process at this processing step”, the new codes that create new AX125r or AX125i and AX125p “parts” rows/records in the Parts Table, also make it easier to decide not to waste capacity on finishing old-style “open work orders” that get finished just so they don’t get confused or lost or misunderstood due to having no valid part number to assign to them or to show in the inventory system. that’s a big help. not obvious, but known to the folks who know these little details, the experienced factory operations folk, and you and me.) They also carry important differences in meaning (especially the association with a customer order and its due date, or with a stock order and its due date … what? gasp? because of “stock order”? right. another s-word. but it’s ok. toc and george plossl and michael donovan and bob vollman fixed that one years ago in the minds of those willing to think in terms of validity ranges and cause-and-effect and what works, vs. not thinking and just reacting and over-doing some otherwise useful crusade).
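the suffix-code idea above can be shown as a tiny helper. this is my own sketch, not the author’s system; the function name and the dict of code meanings are invented, but the r/i/p codes and their meanings are the ones described in the paragraph:

```python
# Assumed suffix codes from the discussion above: each WIP state of a base
# part gets its own valid Parts Table part number.
SUFFIX_CODES = {
    "r": "released",
    "i": "issued from stock to the production floor",
    "p": "in process at this processing step",
}

def state_part_number(base, code):
    """Derive a distinct part number for a WIP state of a base part."""
    if code not in SUFFIX_CODES:
        raise ValueError(f"unknown suffix code: {code!r}")
    return f"{base}{code}"

# each state is a countable, valuable inventory row in its own right, so
# nobody has to force an old-style open work order to completion just to
# keep the quantity visible in the system
rows = [state_part_number("AX125", c) for c in ("r", "i", "p")]
```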
Manufacturing Software Industry Notes
Cincom Systems – arguably the inventor of the database management system
Whoever bought CA, the company that was buying up everybody else (including ASK, who was buying up everybody else for a while) for a while? Looks like nobody bought them. Amazing.
What about Thru-Put? Didn’t somebody buy them? Yep. PivotPoint. No mention of TOC roots of the company, or Resonance DBR product, in the press release by PivotPoint. Interesting.
but, a year before the PivotPoint acquisition, Thru-Put still talking TOC, DBR, and Resonance. mentions meritor, formerly rockwell automotive. somebody i know real well played a major role in creating that nice little landmark haystack-compatible system success.
this is cute. apparently, the people who bought the installed base of IBM s/34, 36, 38 Mapics software, and thereby created a new software company with a huge installed base, also bought the rights to the web address for a well-known TOC software company that was acquired in 1999. when i clicked on the link below, i saw MAPICS in big letters : ) nice move, i guess. there was a moment in the 1990s when I was going to focus a systems integration business on upgrading the entire mapics installed base to haystack-compatible … no that wasn’t it … to modified-existing-MRP drum-buffer-rope … problem was, at the point i had that excellent idea, I didn’t yet know how to do that … learned it later, in 94, at which time other good ideas beckoned … thinking again … that was a while ago … actually, i think the idea was a combination of both: get really adept in mapics on the first few projects by modifying existing mapics with whatever report-writing/coding tools worked on its various versions, and then do several mutually-reinforcing things … (1) just do more modifying of existing software to DBR and TVA-lite (ding … new term) where it was enough (for the plant and product structure), (2) implement a Haystack-compatible system where that made sense, and (3) the real personal, professional, industry-progress, and financial homerun: create the little “tools”, “translators”, and “interfaces”, and all the know-how that went along with them, for integrating MAPICS with Haystack-compatible systems, first as custom software services, then, after a few, standardize, parameterize, and shrink-wrap like CA did to build its “tools” business and Larry Ellison did to get Oracle started before the relational and sql juggernaut days.
my pal, dave the fellow cisa, helped me first learn how to spell “mapics transaction set” … i never got much further than that on mapics … anyway, here’s the link to somebody who has an interest in mapics drawing in the web hits for people looking for the old thru-put technologies company … all the links click through to the “infor” company in alpharetta, georgia. lot of new players in 10-15 years. what? duh? what’s duh?
Oh and the other company with the other sg … what was the name? … we beat them on the rockwell automotive deal … i2 technologies … have their own wiki page … no mention of opt/toc background or goldratt background related to sanjiv … thought word was he was one of the best and brightest students of goldratt from the OPT (late 70s, early 80s) era … maybe that was ken s, also of i2 … also srikanth, another big name came out of that OPT era … lots of brainpower in the opt group working on an approach that got superseded in the normal course of ongoing practical physics … lots of brainpower like “the other eli” who was there from the beginning creating the famous “opt game” education simulator and all those V, A, T, and I plant simulator/games courses we all did our initial TOC growing up on in 2- and 4-day and 2-week toc-era courses … lots of brainpower in that wild and wooly opt era … anyway, i2 seems to have done well … sanjiv called 106th or so richest man in the world? richest indian ethnic outside of india? well, fine. he may have gotten all the other deals in the world, but we got the rockwell/meritor deal in 95/96. so there. : )
and, of course, what of our old pal, The Goal System? aka BAS … aka … still can’t say it … : ) …
– bob still has it? always wondered if he’d license it or sell it vs. build an organization to install it a lot …
toc critical chain project management? ok, here’s another part of where thru-put went.
and, the original project management software based on toc. rob’s still gettin’ it done:
some other good folks:
Pretty Good Quote
“… no exceptional brain power is needed to construct a new science or expand on an existing one. What is needed is just the courage to face inconsistencies and to avoid running away from them just because ‘that’s the way it was always done’.” (Eliyahu M. Goldratt. Introduction to the Second Edition of “The Goal”)
The following link is to the paper where I found this quote today. Actually, before the link, a few comments concerning the quote and its original source.
The quote is not new to me. In my earliest TOC years (March 1989 through December 1991), I had both the 1984 first edition and 1986 second edition of The Goal. Their covers were different. The 1986 edition had both an introduction to the revised edition and the original introduction.
I first heard of the book, The Goal, in an early 1989 Apics evening Cpim certification preparation class I was visiting with my friend, Bob, who worked at the time for then computer giant, Digital Equipment Corporation, that we all just called, DEC, like deck. I was just there to see what the classes were like. Toward the end of the class, Bob, who was one of the students in the class, asked the instructor, “but aren’t we supposed to be managing our plants like it says in The Goal?” The instructor gave an intelligent answer that basically said the current course was about existing standard methods and that the ideas in The Goal were well-known, but not standard and not part of the students’ upcoming certification exams. The question and answer were interesting.
A month or so later, in early March 1989, I was in the Boston University bookstore gathering sources to prepare for approaching manufacturing clients as part of building a new systems integration consulting business. I just happened to notice the book, The Goal, because it was sitting on a pile of books that contained the one I was looking for. By the way, for you Apics and operations folks, the book I was looking for was Vollman, Berry, and Whybark, one of the two pre-eminent production and operations management textbooks, the other one being Fogarty, Blackstone, and Hoffman. The little piles of books were books that had been set aside for students who had registered in BU operations professor Tom Vollman’s courses. I thought, oh, that’s the book they were discussing at that Apics class. And, oh, that’s interesting, Tom Vollman’s requiring it in his classes. Ok, let’s buy one of those too. That’s how I got started with The Goal, Goldratt, and TOC.
Tom Vollman, by the way, was very intelligent in how he dealt, in his book and published papers, with what I soon learned was the very controversial pre-TOC opt and Goldratt. He didn’t promote, dismiss, or ignore the issues. He analyzed them. I would later learn that many others were more emotional and less logical in their approaches to viewing and commenting about OPT, TOC, and Goldratt-related matters.
I’m remembering this because of the quote above and because it reminded me of the covers and introductions of the two earliest editions of The Goal. There have been, I think, definitely one, maybe two more by now to incorporate the additional learning between 1996 and today.
I read that copy of The Goal from cover to cover that same afternoon in my office, and the rest, as they say, is, you know …
There are several other comments and things to notice on covers of the 1984 and 1986 editions, and in the introductions to both editions. I felt they were so effective in creating the right starting attitude for thinking that, in my own book in 1998, I suggested that people new to TOC should read the covers and intros carefully — and more than once. I should have also said (maybe I did) that people new to TOC should also slow down a bit and spend a little time thinking about the cover comments and intros.
What were the points? There were only a few. One of the comments was essentially about the progress of any science being about increasing validity of concepts vs. “truth.” One comment I’m referring to, I’m pretty sure it was on the cover of the 1984 edition, was something like, “[the main characters] develop skill in finding out what makes the world around them tick.” It was removed when the 1986 cover was made more flashy and professional in appearance. Too bad. I liked having both editions to get a sense for the difference two years made. Not as big a difference as the third edition, though. The second edition added an epilogue, a revised introduction, and changed cover art and comment. The third edition (was it 1994?) blended the epilogue’s info and several more years of evolution of thinking into the main story itself. So the main parts of the 1984 and 1986 books were identical, while the main part of the book in the third edition changed in maybe a few dozen selective ways interspersed throughout, while maintaining the extremely popular and increasingly internationally well-known basic storyline about Alex Rogo and Julie learning, Jonah as mentor, and the Unico factory’s turnaround.
Anyway, about the link to that paper. The paper I got the above familiar old fav quote from today is not a pro-TOC paper. It uses the quote from The Goal to create irony as it claims and attempts to use Goldratt’s own expressed standards as the basis for demolishing the perception of validity of TOC. It’s an example of the attacks on the validity of TOC that were common in the late 1980s when I first got started with TOC. I arrived at the paper from a google search because it has reference to my search words, “The Goal System”, a TOC computer software product, formerly known as BAS, and also formerly once named by the colorful Goldratt as “Disaster”, which I’ve been searching on to find out whatever happened to it. Attacks on TOC don’t surprise or bother me. Discussion and controversy are useful. But the timing (2004) and the author surprised me a bit. Also, given the author’s background and experience, the, um, let’s see, how to say this, the, um, well, the “logic” of the argument surprised me. For various reasons, here’s the link:
The paper demonstrates pretty strong general writing skill, somewhat stronger rhetorical flourish, and not terrible (but not terribly strong either) academic referencing and citing skill. Those things are all ok. The TOC community acknowledges the role and effect of general and specialized stylistic aspects of things, but never at the expense of the soundness, the validity, of the thoughts, the thinking, itself.
The real problem with the paper is not how it expresses the thoughts, but the thoughts themselves. The thinking process is so bad, and the thoughts it produces are so off-base, that it would almost be … what’s the word … maybe graceless of me … or how else to say it … something like, it should be beneath me or … here it is … I should be bigger than to get into the details of ripping the paper and its arguments apart when it’s so obviously an example of somebody unsuccessfully trying to play at a level that’s way over his head. The TOC community, consistent with Goldratt’s own leadership, never discourages people from trying to play at levels that are way over their heads. In fact, quite the contrary. TOC concepts and tools are designed, in large part, to enable individuals and organizations to do things they never thought they were able to do before. So I applaud the author of the paper for — and here comes the inevitable American sports analogy — stepping up to the plate to take a big swing at the baseball. But the idea is to keep the work grounded in the facts, the reality, of the area of life in question. Goldratt’s line for that was, “leg on the ground,” i.e., ok to be a hurricane and have head in the clouds, but hurricanes, or maybe it was tornadoes, yes, it was tornadoes, tornadoes go round and round and round with great noise and energy, but they don’t have effect unless they’re touching the ground. The Categories of Legitimate Reservations (CLR) part of TOC enables people to do that as I’ll demonstrate in a moment using the paper as an example.
So, as always, range of validity. I’m thinking that picking this paper apart a little bit can be a useful case study of how a well-intentioned person, who is reaching to make a contribution to what’s right, and trying to correct something he honestly believes is wrong, and doing that by trying to be a player in the published literature of a knowledge area — maybe as part of his job in the military, maybe as part of getting a masters degree, not sure exactly what’s the context of this and his other papers — can get things so completely screwed up it’s difficult to know where to begin to correct it. And discussing a few aspects of it can simultaneously (1) present what TOC is from a new and different angle, (2) demo the use of the Categories of Legitimate Reservations (CLR) part of the TOC thinking processes, and (3) maybe show how in general people who are working at elevating their skills into being serious players in published literature of knowledge areas can avoid making glaring mistakes in writing academic position papers.
So, let’s take a little stroll into the paper and see what we can learn.
For people who have known me and my intuitive implicit (vs. formalistic explicit-rules-driven writing-and-revising-logic-trees) style of using Toc thinking processes, this little analysis is going to be kind of funny. Because I’m going to be citing the TOC Categories of Legitimate Reservations (CLR) and I never learned them. : ) I can almost hear Eli, Dale, and Tracy laughing … When I heard the rules of CLR presented or when I read them, I thought, yep, they’re the right rules and, if people as individuals, in pairs, and in groups use them, 95% or maybe 100% of the time wasted in problem identification and problem solving will be eliminated. And I think I’ve seen them expressed in slightly different wordings, neither of which I remember. In fact, I’m pretty sure I mentioned this in my book. I referred readers to Lisa, Bill Dettmer, or AGI for their versions of the CLR. So, for this exercise, I’m going to just make up my own words for it and say, if you want better CLR words, get Lisa Scheinkopf’s book or take a Goldratt Institute or, I guess these days, an Apics course. Right. Get the Apics thinking processes course. Oh, also Goldratt’s book, It’s Not Luck, if I recall correctly, has like the first one or two CLR rules in it. The usual teaser to give a little in the inexpensive book and save the rest for the courses.
So, CLR items, as I recall them are … clarity … does it exist? … that last one might be why there are several versions of the CLR … i remember that as one of the questions, but i have to think about how to interpret and use it … insufficient cause … false cause … maybe the “oxygen” idea, but i’m not sure if that’s on the official list …
The reason I don’t know these CLR rules is I never use them. At the same time, I use them every waking minute of my days. Oh, that’s great, right? Well, what I mean is, I don’t use the concepts and expressions of the categories of legitimate reservations, but I use the phenomena in experience they refer to constantly. Best example is the “clarity” reservation. Getting clear on the meanings of statements, by meaning I mean the experience in a person and not just more words from a dictionary, is much of the progress that is needed to solve a problem.
Same for all the rest of the CLR rules/questions/criteria and for almost all the formal rules of the 5 logic tree processes and diagrams (current, future, evap cloud, prereq, transition logic trees). What? If I don’t use the rules, why do I think they’re so valuable that everybody should know about and learn them? Because learning (becoming aware of) them confirms and increases speed and effectiveness if you already have them, or gives you the skills if you didn’t already have them. And, in either case, once you learn and use the explicit formal rules a few times, you can always use the explicit stuff for more complex things, or things you’re stuck on, or in groups, but mainly, once you learn them and use them a little, they are always working all the time in the sub-conscious. Like learning fractions in grade school. At first, you drill on the concepts of “one-half”, “one third”, “4 fifths”, etc with pictures, examples, etc, and then, for the rest of your life, you just know what it looks like in your mind and how to apply it to stuff whenever somebody says or writes or you think 1/3 or 56% etc.
What? Are we ever going to get to this paper? Not sure. : )
Ok, first sentence: “The so-called “Theory of Constraints” (henceforth “TOC”) as articulated and explained in Goldratt’s books, is neither a theory nor a correct methodology.”
Is TOC really a “theory”?
I’m familiar with and ok with the “not a theory” idea. The author of this paper is not alone in using this argument against Goldratt’s work or other thinkers. (Stephen Krashen is another example in another field of a thinker who produced, over a period of years, a steadily-evolving ever-more-comprehensive body of concepts that gave more and more and better and better descriptive, explanatory, and predictive power, but got vilified by nominally educated, but actually only semi-educated, lesser minds for calling his work a “theory” and other parts “hypothesis” when they didn’t meet somebody’s narrow tests for the strict definitions of those words.) There are competing theories about what constitutes a valid “theory,” or a valid “science,” or a valid “philosophy.” These debates don’t matter because the issue isn’t “theory” vs “hypothesis” vs. “claim” vs etc etc etc, the issue is experience vs concept vs more or less descriptive and explanatory and predictive power. The stupid debates about whether the Theory of Constraints is really a “theory” vs. whether it provides descriptive, explanatory, and predictive power just show that the people debating it don’t really understand the role of ideas and words in science, physics, philosophy, and life. But, though that’s a dumb comment by the author of the paper, obviously reflecting that he heard somebody once say that “a theory must be this or that”, it’s not a big deal. So what if TOC is really a theory or not. It’s a set of concepts, principles, expressions, words, whatever — chosen to refer to experience, phenomena, in life. Ok? So let’s call it, instead of Theory of Constraints, The Bagful of Words About Constraints (TBoWaC). Like that better? If you do, then let’s call it that. I don’t care what we call it. What matters is whether the words reflect and create conditions in experience and life more or less effectively than other words reflecting other conditions in experience and life.
Is TOC a correct methodology?
So the first half of the sentence, while dumb, is no big deal. The second half, however, is more serious: “… not a correct methodology.” If that’s the case, there’s a problem. Let’s think about it.
What about CLR?
Oh, and I’m reminded I was going to make explicit use of CLR and I just went into one of my other intuitive implicit verbalization modes again.
Well, there’s part of the point. Though I believe the experience states underlying and referred to by the CLR principles are essential to sound thinking, I didn’t take the trouble to force CLR onto the discussion of that first half of that first sentence and a lot of old (to me) points got made in old and (to me) new ways, and some new (to me) insights and angles and explanatory techniques got flashed into my experience. But that’s how a group use of CLR goes, by the way. One part of one statement/entity gets discussed by the group until it gets distilled down into something strong. Sometimes it takes just a moment to clarify or revise. Sometimes it can take weeks (like sending somebody in a company to find out if a product has been reliable, or getting market share figures) or years (“is the north pole really melting” in a global warming analysis). But it’s CLR working … clarity (is the statement clear) … does it exist (is the statement true as stated, or true stated differently, or true at all?)
… insufficient cause (yes, that causes that, but that alone can’t be causing it) … false cause (yes, that’s present, but it’s not the cause … like the old joke, cottage cheese causes fat people because fat people are often seen eating cottage cheese) … there are some other CLR items … oh, we’re not just talking about sound logic … what the CLR are doing is creating a social and cultural force in a problem solving environment … it’s Categories of Legitimate Reservations … as in, “i have a reservation, as in a problem with, or a hesitation about accepting, that statement” … so “we never did it that way before” is not a legitimate reservation … it’s an illegitimate, not legitimate, reservation … if I think of some more, I’ll mention them … maybe can get Lisa’s or Agi’s version by googling later … for the moment, I like continuing to make the point that, while I think learning, practicing, and mastering the formal rules is a good thing to do at least once, the formal rules should serve their effective use, vs. having the effective use get screwed up by worrying about dotting the i’s and crossing the t’s of the rules … it’s a subtle difference, a fine line, but very important to, let’s call it, full fluency in use of the TOC suite of concepts and tools … and i’m curious now about what the “official” CLR lists of rules are … : )
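the CLR categories sketched above (my own wording, remember, not an official list from Lisa’s book or AGI) can be pictured as a tiny data model: each objection to a logic-tree entity has to be tagged with a legitimate category, and “we never did it that way before” gets rejected at the door:

```python
from enum import Enum

class Reservation(Enum):
    """Illustrative CLR categories, in my own (unofficial) wording."""
    CLARITY = "is the statement clear?"
    EXISTENCE = "does the stated condition actually exist in reality?"
    INSUFFICIENT_CAUSE = "that cause alone cannot produce that effect"
    FALSE_CAUSE = "correlated, but not actually the cause"

def raise_reservation(entity, category):
    """Only a legitimate category may be raised against an entity."""
    if not isinstance(category, Reservation):
        # "we never did it that way before" lands here
        raise ValueError("not a legitimate reservation")
    return f"{category.name} reservation on: {entity}"

msg = raise_reservation("TOC is not a correct methodology", Reservation.CLARITY)
```

that’s the social-force point in miniature: the structure doesn’t do the thinking, it just forces every hesitation into a form the group can actually work on.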
So far the paper has made two statements, “TOC is not a theory” and “TOC is not a correct methodology.” How to apply CLR to the first one? Let’s see … clarity … does it exist? … insufficient/false cause don’t apply … so clarity and existence are the two reservations to consider … I don’t know exactly which to use. Here’s that point I ran into back in the 90s with TOC thinking processes and I wrote about it in chapter 5 in my book under a heading something like “but don’t hold your best people back with over focus on the thinking process rules” … sometimes, spending a LOT of time on one entity is very valuable … many problems get completely solved by simply working on the premise statement and finding the “problem” went away when the statement was clarified … can’t think of an example right away, but this happens a lot in life … applying CLR to “TOC is not a theory” … could be clarity in the sense of clarifying, like i did in the discussion, issues around the idea or concept of “theory” … could be existence reservation in that it’s not a true statement … maybe i need to invent a new category like “dumb ass issue” or “issue doesn’t matter” … or “what does it matter” … or a category called, “where’s he going with that statement” or the “so what?” category … let’s say we accept it’s not, from somebody’s view, a “real” “theory” … so what? does that mean don’t use it even though it works? … i think where he’s going with it, based on scanning his paper a bit, is he’s going to get into “insufficient cause” logic to say something like “since it’s not a real theory”, then “goldratt shouldn’t be treated with respect in academia” and “they should stop teaching his stuff to students in schools”.
if i stopped word-streaming this verbalization and shifted to making a nice concise logic diagram (i have and do make those, by the way, to express the essence of an argument … haven’t done it here much) of the author’s thinking, i think there would be a cause-effect arrow out of “not a theory” up into “goldratt over acclaimed” and “goldratt’s stuff over taught” as UDEs or undesirable effect logic tree entities. i want to emphasize i’m not sort of sarcastically indirectly saying CLR and other thinking process rules, and other rules in life actually, are bad and wrong. i guess this is clear already. if bruce jenner-kardashian were here, he’d tell us that, in all ten of his decathlon sports, he agrees with bill d: there are initial rules (things to focus on to get the basic skill into the ballpark), and then, with the basics on auto, a next stage of rules/guidelines to focus on, and, along the way, some of the things to focus on conflict with others, because fluent consistent championship performance is the blending of many factors which, taken separately, appear to be opposites, but come together in complementary different ways for different situations. So, if that’s how it would be for olympic decathlons, why wouldn’t it be that way for the equally exciting and dynamic world of thinking processes and logic trees, right? right.
here are some people who already understand these things. jim, charlene, bob, terry, ted, and their pals …
and if the lovely cher were here, she’d say, “yeah, yeah, yeah … i understand all of that. this TOC stuff is easy.” ♥
Applying CLR to “TOC is not a correct methodology”
Well, I could just be off to the races again with another few paragraphs of discussing this, but what if I tried to hold back and use CLR? Ok … grit my teeth … take a deep breath … this may hurt a little … “is not a correct methodology” … ok, i can use clarity … one thing I’m realizing is that my streaming verbalization and using CLR are not either/or. my streaming verbalization gets intuition and experience and new combinations extracted from Self, which can then be reflected on and converted into concise CLR reservations to take into a meeting. One can think and research, etc, a lot on a subject and then, in a meeting, just present a concise position or positioning statement (they’re two related, but different, things) to the group. The CLR can be used in solo logic tree work, but their real value is in making meetings effective. If you have any experience with meetings, you know SO much time and energy can be wasted talking past each other, with only parts of sentences heard by one person, or different meanings for words among three persons, or half the room moving on assuming what’s been said is right while the other half isn’t paying attention anymore because they don’t agree with two critical points that were made ten minutes ago. The logic tree process and CLR cut through all that BS. Now if people work together well without explicit CLR, fine. But sometimes compatible groups “work” because everyone lets one person dominate, even if it’s sweetly, vs. everybody thinking about the steps in the logic.
Applying CLR to “TOC is not a correct methodology” … clarity can work on “what’s the meaning of ‘correct’ ” and “correct methodology” … but what’s the category for “in whose opinion” or “by what criteria” or “correct for what purpose?” … those could both be clarity reservations in support of getting clear about the meaning of “correct” and “correct methodology” … but there’s another here … TOC isn’t primarily a “methodology” … it has some methodology and is used to create methodologies all over the place, but, the best way to say what it is, is more like viewpoint, attitude, philosophy, approach, perspective … something like that … experience? … it’s often said that it’s a philosophy, but i’m just revisiting the “what is it” to see if I get anything new … what is a philosophy exactly? … again, in whose definition … what does the word say … phil is i think love and sophy is like knowledge? … oh, and it’s physics, it’s science … in my book i said it was an expression of physics, of science, of the process of physics, of science … a process is a methodology … but that’s not the level of thinking the author of the paper is working at when he uses the word, “methodology” … he’s not working at philosophy vs worldview vs science vs physics, he’s working, i’m pretty sure, at the level of organizational design and maybe from a fairly narrow technical Operations Analysis simulation and modelling standpoint … he’s characterizing what he sees as the impact of TOC in his area of expertise as the totality of what TOC is overall, which is incorrect … by the way, choosing among alternative words like philosophy vs physics vs worldview is really about guessing what words will have what effect in most people, or well-informed people, or people with big vocabularies … guessing what meanings/experiences the various words trigger in other people and whether those are the reactions/meanings/experiences we’d like them to have about our topic …
So the CLR clarity reservation could be used on the statement, “TOC is not a correct methodology.” One could use the “oh yes it is” reservation, but that one’s not in standard CLR. 🙂 That’s also how time starts to get wasted. that statement is really, “TOC is a correct methodology,” and we’re not agreeing yet on whether we should be evaluating TOC overall or just some of its parts as “methodology,” and not clear yet on what criteria or process to use for assessing it as “correct.” So letting a single Clarity reservation give rise to a lot of useful discussion that will lead to a clear statement would be a good thing. So, seeing this statement written, I declare my clarity reservation by simply stating, “Clarity on entity one,” or “I have a clarity reservation about entity one.” Ah, what a lovely jargonese manner of speech the CLR can lead to.
What about an existence reservation? Does it exist? as applied to, “TOC is not a correct methodology.” I’m wondering about this idea of “does it exist?” as a test for logic tree entities/statements … is it the right way to word that? … how else could eli have worded it? … imagine looking at a logic tree with a dozen or so entities/statements connected by cause-effect arrows … could say, “does it happen that way” … or “does that really matter, is it relevant to this situation” … although “does it exist?” seems a bit odd for some reason, i’m, so far, not finding a better question/criterion to use … how about “is it true?” … “is it true that the condition described by those words exists in the reality of the situation we’re analyzing?” … “or is the actual condition something else?” … i guess “does it exist?” is the best I can see at the moment … and that’s different from clarity … clarity is, are we together on what condition/phenomenon in reality is being referred to with these words? … existence then is, now that we’re together on what condition/phenomenon in reality the words are referring to, is it true that it’s in our situation? then cause issues are different too … it’s making sense … seeming complete … not seeing better alternatives or need for them …
i think it’s time to google clr and see what the official version is these days … first one from dr youngman in new zealand. he mentions dr. eric noreen’s version of CLR. he’s right. i remember those now. he also references lisa’s and bill’s books. and, right, it’s not just “existence,” it’s “entity existence,” “causal existence,” and “predicted effect existence.”
we could say, instead of “existence,” “does that happen?”, “does that cause that?”, “will that happen?” … i still don’t find “does it exist?” quite intuitive, it still sounds a little odd to me, but i can’t think of an alternative …
So the Categories of Legitimate Reservations (CLR) are an important part of the ground rules to be used when reviewing logic tree diagrams. The idea is that, if your problem with, your reservation about, or your hesitation to accept some part of the logic tree doesn’t fall into one of the categories of legitimate reservations, then it’s not legitimate in the context of creating a rigorous expression of the logic of a situation. It may be “legitimate” to consider for other reasons, like dealing with the people issues in implementation, but it’s not part of the process of discovering or creating valid logic tree expressions of current or potential future reality. The reservation, “but, we can’t do it that way, because we never did it that way before and joe said last week it could never work,” is not clarity, existence, insufficient cause, false cause, or tautology, so it has no place in a review to see if a current reality logic tree reflects the nature of the current reality, or if a future tree logic diagram expresses the logic of a possible new reality. That reservation would be, however, relevant and helpful when re-expressed as negative branches in the future tree to be resolved with additional ideas/injections and/or as obstacles to be overcome with intermediate objectives in pre-requisite logic trees and actions in transition logic trees.
Hm … i just noticed dr youngman has not insufficient and false cause, but insufficient and additional cause needed. hm … what’s the difference between insufficient and additional cause needed? and isn’t false cause kind of fundamental? I’m not encouraging getting too hung up in the details here. The existence of the CLR dramatically improves the effectiveness of individual and group thinking if they weren’t naturally thinking like that anyway … having agreement among people that CLR are the ground rules keeps a group moving efficiently … their existence makes everybody more aware of their own and others’ objections which cuts out a LOT of nonsense … to get too worried about the details of the categories isn’t necessary and doesn’t really add to the HUGE benefit having them roughed out gives …
wonder if there’s a way to know the official goldratt institute version … dr youngman’s were pretty good, but insufficient and additional cause but no false cause thing spooked me a little bit …
oh, wait … causality existence is the same thing as what i’m calling “false cause” … and i see the insufficient vs additional now too … sufficient is enough to get the effect started like snow happens … then additional is the difference between causes of seeing any snow at all vs causes of 12 inches of snow …
and here’s my pal, john caspari’s take on CLR
looks like dr youngman and john caspari both are seeing 7 basic categories: clarity, 3 existence (entity, cause, and predicted effect), insufficient and need-additional cause, and tautology. with john also mentioning a few others that people discuss as possibly good additions to the list.
the discussion of the “house on fire” reservation is cute. that one became TOC slang after my time. i was around for oxygen, banana, disaster, maybe more. : ) apparently, “house on fire” reservation has come to mean possible cause-effect reversal because of the oft-used phrase, “where there’s smoke, there’s fire.” kind of like “cottage cheese causes fat people.”
Let’s see … IF there’s fire, THEN smoke happens, and IF smoke happens, THEN i can see it, and IF i can see smoke, THEN i know that there’s fire … so IF there’s smoke, THEN i know that there’s fire … and we’ve now entered a logic tree calculus … naaah … I’ve been to this level before about a dozen times in the early 90s and again in the mid-90s and that’s why, like Ben Franklin who “liked his speckled ax best,” I just went flying forward knowing CLR existed, appreciating its value, recommending to everyone in sight that they learn them and the logic trees and TOC, all the while using the implicit experiential reality and process underlying the rules all the time in thinking about things everywhere in life, but not paying much attention to the nitty gritty details of the rules.
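That smoke/fire riff is just the logic of implication: the forward IF-THEN holds in every case, while the reversed claim fails whenever smoke has some other cause. A tiny truth-table sketch makes it concrete (the “other source” variable is my hypothetical stand-in for any non-fire smoke cause, like fog machines or burnt toast):

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Enumerate all worlds over fire and one other possible smoke source.
# Assume smoke appears whenever fire OR the other source is present.
counterexamples = []
for fire, other_source in product([False, True], repeat=2):
    smoke = fire or other_source
    assert implies(fire, smoke)        # "IF fire THEN smoke" survives every world
    if not implies(smoke, fire):       # the reversed claim can fail...
        counterexamples.append((fire, other_source, smoke))

# ...precisely in the world where smoke comes from the other source alone:
print(counterexamples)  # [(False, True, True)]
```

Which is exactly the “house on fire” (possible cause-effect reversal) reservation: the tree said smoke causes fire when reality only licenses the other direction.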
Flip-Flopping around to get to Seven (7) CLR
I love the “house on fire” thing. that’s great. adding that to the official 7 makes 8 CLR. clarity, 3 existence (entity, predicted effect, and false cause), insufficient and additional cause, and tautology, and house on fire (possible reversal of cause-effect).
Actually, I don’t, at this moment, find predicted effect all that different from entity existence and cause existence/false cause, so I’ll drop that one. And I think the official “cause existence” and my “false cause” and my new fav, “house on fire,” are all “cause existence.” That reduces the number to 6. So, as usual, I have to re-arrange things my own way … : ) … among other things, rearranging things in the way it makes sense to me makes it more effortless to remember them … we remember the shapes of ideas with words attached to the shapes more easily than just the actual words …
I’m going with clarity, entity existence, 3 causal reservations — cause existence (including false cause and the house-on-fire as “possible cause-effect reversal”), insufficient cause, and “additional cause needed” — and tautology. Six.
So, finally, after all these years, I now have my own personal standard set of six CLR. Great. : ) You can use them too if you want to. And Lisa and Bill can write about them in their next books. Great.
But … uh oh … on this pass … i’m now seeing the predicted effect reservation is qualitatively different from entity existence because one is, “does it exist?” and the other is, “will it exist?” So I’m arriving at what I think are the official AGI 7 CLR. Let’s see what the naturally-mnemonic order is … clarity (is it clear?), entity existence (does it exist?), predicted effect (will it exist?), 3 causal — existence of cause (including false cause and house-on-fire possible cause-effect reversal), insufficient cause, and additional-cause-needed … actually, still not clear on that one …, and tautology.
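For what it’s worth, that seven-category list can be captured as a tiny data structure, each category paired with its guiding question (the question wordings here are this page’s paraphrases, not official AGI language):

```python
from enum import Enum

class CLR(Enum):
    """The seven Categories of Legitimate Reservations, as arrived at above."""
    CLARITY = "is it clear what condition in reality these words refer to?"
    ENTITY_EXISTENCE = "does that condition actually exist in our situation?"
    PREDICTED_EFFECT_EXISTENCE = "will that effect actually come to exist?"
    CAUSE_EXISTENCE = "does that really cause that? (incl. false cause and cause-effect reversal)"
    CAUSE_INSUFFICIENCY = "is that cause, alone, enough to produce the effect?"
    ADDITIONAL_CAUSE = "is a separate, additional cause also producing this effect?"
    TAUTOLOGY = "are the cause and effect just restating each other?"

def is_legitimate(reservation):
    """A reservation is 'legitimate' for logic-tree review only if it falls
    into one of the seven categories; 'we never did it that way' does not."""
    return isinstance(reservation, CLR)

print(len(CLR))                                        # 7
print(is_legitimate(CLR.CLARITY))                      # True
print(is_legitimate("joe said it could never work"))   # False
```

The point of the `is_legitimate` check is the ground-rule idea from earlier: objections outside the seven categories get parked (as negative branches or obstacles) rather than debated during tree review.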
Here’s somebody doing web as Pinnacle … oh it’s my buddy, mark … with a pretty good TOC glossary page. He leaves out the “tautology” reservation, but includes my new fav, the “house on fire” reservation:
Update a day or so later: i added links above to lisa’s 1999 thinking process book on amazon. here too. oh, i usually send buyers to apics.org. well, look at/inside the book on amazon and then go buy it at apics online or via their 800 number, will ya? i really like it when apics makes money on toc. anyway, i was reading lisa’s book a little. it’s real good. reads well. good diagrams. she organizes the five standard logic tree processes into two categories that i don’t understand really, but i guess she knows what she’s doing there. she calls the cloud and the pre req tree “necessary condition” tools and crt, frt, and tt “sufficient cause” tools. ok. i guess so. don’t know if that’s from an agi (goldratt institute, avraham y goldratt institute, named after eli goldratt’s father) source or lisa adding value. all looks like cause and effect to me. would have to check direct unconceptualized experience to see if they “feel” “seem” “tend” “are” actually different/same in some way … not sure i’m in the mood right now … ok, evaporating cloud is clearly necessary condition. she’s right about that one anyway. read some of dettmer’s 2008 book too. good too. different than lisa. lisa’s book is for a wider audience. dettmer is a deming mind and his book is for people like him. i would want both.
not quite random thought … the “reinvent it for yourself” principle is important to sustaining natural interest and momentum … converts “not invented here” emotion into “invented here” emotion … eli’s “emotion of the inventor” principle …
also, i’m remembering that a little bit of thinking tools dogma, as we say a bit ironically in the american english idiom, goes a long way … if you start to feel suffocated by the debates over what is and isn’t this or that in the thinking processes world, just walk away from it for a while … then go back for a bit … we don’t take vitamins all day every day … we take them once a day … we don’t get shots every day … we get them once a year or once every few years … stay with enjoying learning them long enough to get the basic idea … start using them and developing your own style of use … get the three books — lisa, dettmer, it’s not luck — and put ’em on your bookshelf … wave your hands over the books every once in a while … open them if you like too, but only waving your hands over them is required process … oh, right, all the thinking tools examples and advice in my book’s ch 1 and ch 6 are very good inoculations/vaccinations to prevent hurting yourself by letting thinking tools experts and their rules cause you to hate thinking tools … enjoy …
and that wasn’t about lisa and dettmer, by the way … like my chapters 1 and 6, they’re part of the inoculation/solution … all the people who might be the problem are good people too … people are just people … but i think it’s the nature of the activity too … a little goes a long way may be the explanatory principle that gives rise to balance and satisfaction … like, in another domain, a joke somebody used to tell about fertilizer … maybe thinking tools discussions are like fertilizer … a little here and a little there and everything grows … too much fertilizer at once and it stinks? … and we all SO love analytical and administrative procedures anyway, right? … right … can’t get enough … but, by definition, procedure constrains, with idea of focusing effort, being efficient, and avoiding errors … when are such constraining rules confining and when are they liberating? … interesting … tb,/ … if callahan were here, he’d say, ah, the rulesmeisters … don’t we just love ’em …
Oh, Right. We Were Dismantling that Paper …
In all the excitement about re-thinking CLR all over again (right, I’m easily amused), I pretty much forgot about that paper I found when looking for information about The Goal System software product (aka BAS, aka Disaster).
Ok, so now our memories are refreshed about CLR. I guess we can go back to that first sentence of that paper.
The two statements were: “The Theory of Constraints is not a theory” and “TOC is not a correct methodology.” We’ve already discussed the first statement. For the second … right, this is where we left off … of the seven newly-dusted-off CLR … registering a clarity reservation is clearly the right step to take … clarity clearly applies … what? right. “clarity clearly” applies … lovely … anyway, existence maybe too … but what i think i’m seeing is, if there’s a clarity reservation, no need to wonder about entity existence until the clarity question’s clarified.
A Few Quick Notes
without getting heavily analytical, and going off in a lot of interesting and useful digressive directions, as we did with the first sentence of the paper, a few points I remember from a quick scan read of the abstract and intro:
– step 4 of TOC’s 5-step system improvement process has nothing to do with balance of capacity or anything else. the step is “elevate the constraint” which means break the constraint, remove the constraint. why eli chose the word, “elevate,” i don’t know. but it not only applies to bottlenecks in physical constraint situations, but also to policy/thinking constraints where there’s even more clearly no relationship to the “balanced capacity organization” issue.
– the article makes the comment that “Goldratt forbids balance.” Goldratt’s theories don’t “forbid” anything. they demonstrate things using cause and effect. when it comes to balanced capacity plants, he demonstrates that the effects of statistical fluctuations in process times and of disruptions (Murphy’s Law stuff like breakdowns, scrap, etc) are cumulative in their effects on bottlenecks so, in order for the bottleneck’s capacity to be best used, the non-bottlenecks have to have some additional capacity compared to the bottleneck to “catch back up” to the flow rate set by the bottleneck’s schedule. nobody’s “forbidding” anything. the reality, in terms of cause and effect, is verbalized, demonstrated, and tested in the dice game and in the goal, and people can do what they want.
– that discussion was expressed in the bottleneck-oriented language of the post-OPT and pre-haystack syndrome era where, I think, the fixed-length buffer drum-buffer-rope TOC scheduling and shop floor control changed from being based always on a “bottleneck” to being based on a “primary physical constraint.” important difference. somewhere after OPT, and after 1984 The Goal, and maybe after 1986 The Goal 2nd edition and The Race (I forget whether the 86 Goal and Race used “bottleneck” or “constraint”), and 1990 Haystack Syndrome (that definitely made the change from “bottleneck” to “primary constraint”), Eli realized that he hadn’t always seen “bottlenecks” in plants, so he shifted the terminology to “constraint” as in “physical constraint” and “primary constraint” and “secondary constraints” with “primary constraint” replacing “bottleneck” as the “pacing resource” or “drum” since many factories are heavily-loaded, but don’t have true “bottleneck” resources. Plants without bottlenecks still need a “drum” or “pacing resource” to set realistic customer and stock order due dates and then use them with proper “ropes” “buffers” “leadtime offsets” to synchronize the material releases which, in flow scheduling style, synchronize the work of the non-constraints in the rest of the factory. part of the reason I point these things out is the paper that tries to say “TOC is flawed” doesn’t make it clear which era of TOC it’s criticizing. It seems to set up a straw man — actually, it more looks like a failure to understand TOC logistics solution evolution — of parts of OPT, parts of 84/86 Goal/Race era, and parts of post-1990 Haystack Syndrome era to create a picture of TOC that not only positions TOC incorrectly as only its factory management solutions, but also fails to accurately characterize the factory management portions of TOC.
– so where are we? one, it doesn’t matter if TOC is really a “theory”; what matters is that it’s a set of concepts that point to experiential states and states of things that work for creating results. two, we have to know what “correct” means before we can decide if TOC is “not a correct methodology.” three, step 4 of the 5 step system improvement process has nothing to do with the much-thrashed-over “balanced capacity plant [as in equal capacity through the entire process]” fallacy and mistake. four, TOC isn’t just the 5-step process or just the OPT era principles or the drum-buffer-rope era suite of procedures, it’s a philosophy, perspective, or worldview and processes of physics applied to any area of life, including manufacturing and other organizations. fifth, TOC and Goldratt don’t forbid anything; they develop concepts that provide greater descriptive, explanatory, and predictive power — in other words, that work for creating desired results. so those things have been said. next, sixth, Goldratt and TOC never … wait … let’s do this … here’s the quote from the paper …
“An example of the type of damage that was caused by the critics’ silence is that academics often teach inferior methods such as Drum-Buffer-Rope (DBR) although even Goldratt himself stopped using it for scheduling”
Goldratt himself never stopped using drum-buffer-rope for scheduling.
DBR is used in its fixed-length-buffer single-constraint pacing resource mode in any plant whose product and plant structure makes it the right choice. That’s been the case since the mid-80s. It continued to be the case in 1989 and 1990 when The Haystack Syndrome was published and the BAS/Disaster/GoalSystem software was developed and had first successful implementation in April 1991 at ITT AC Pump with Bob Vornlocker, Gary Smith, and the guys and gals in Cincy. It continued to be the case when an EG&G division in New York modified an existing system to do DBR in 1994 and when Rockwell/Meritor started using Thru-Put Technologies’ haystack-compatible “Resonance” in the 1995-96 timeframe. there would be nothing to change that between then and now, which means Goldratt and anybody who understood TOC was still using DBR when this paper was written in 2004, and I’m certain it’s still the case in 2011.
And it’s not only because a large segment of plants has the plant and product structure that allows “manual” “back of the envelope” DBR, and another large segment allows fixed-buffer-length DBR using modifications to existing manufacturing computer systems, and there is some software that supports simple fixed-buffer-length dbr.
It’s also because Haystack Syndrome-based systems also do drum-buffer-rope.
In fact, if a Haystack-compatible system, during its Subordination step in the scheduling process, finds no … what’s the term … red-lane peaks, yes, red-lane peaks are a part of it … if it finds no red-lane peaks between the single constraint used as pacing resource and shipping, and if it finds no other time periods in the other non-constraint areas where there’s not enough capacity in the schedule … if those things are true … then the schedule the Haystack-compatible system produces is a simple 1984/86 Goal/Race drum-buffer-rope schedule. The Haystack-compatible system only produces a schedule different from simple fixed-length buffers (shipping, assembly, constraint buffers) if peaks in non-constraint areas have to be resolved with “dynamic buffering”, lengthening the buffers, which means releasing material a little bit earlier for just one batch at a time. or if non-constraint peaking makes declaring a second or third “secondary constraint” useful. or if steps have to be taken to resolve a “first day load” situation.
But, to make that complicated-sounding stuff simple again, keep in mind that Haystack-compatible scheduling systems start out as simple fixed-length-buffer single-constraint pacing-resource scheduling and stay that way unless they run into a capacity shortfall in some time period on some non-constraint resource. because they can establish an intelligent drum schedule that deals realistically with the constraint’s capacity and current demand, allow thoughtful adjustments to due dates, then check all the non-constraints’ load vs capacity within time windows, and allow one-batch-at-a-time adjustments for non-constraint load/capacity shortfalls … because they let the scheduler do those things, they allow initial overall shop leadtimes and WIP to be much smaller at first and then increased only one batch at a time, or overtime or more hours to be added a little at a time.
In other words, those complicated-sounding things like “peaks in non-constraints areas”, “red lane peaks”, “first day load”, and “dynamic buffering” aren’t really that complicated when you play out the logic of what they’re starting from and what they’re doing one step at a time.
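To make the one-batch-at-a-time point concrete, here’s a toy sketch (the buffer length, daily capacity, drum dates, and the one-day increment are all my illustrative assumptions, not Haystack’s actual algorithm): start from plain fixed-length-buffer DBR release dates, then lengthen one batch’s buffer at a time only when some day’s release load overloads a non-constraint.

```python
from collections import defaultdict

CONSTRAINT_BUFFER = 5   # days of protection in front of the drum (assumed)
DAILY_CAPACITY = 2      # batches/day the gating non-constraint can start (assumed)

def schedule(drum_starts):
    """Fixed-length-buffer DBR first: release day = drum start - buffer.
    Then a crude 'subordination' check: if any day's release load exceeds
    the non-constraint's capacity, dynamically buffer, i.e. pull ONE
    batch's release a day earlier, and re-check, one batch at a time."""
    releases = {b: start - CONSTRAINT_BUFFER for b, start in drum_starts.items()}
    while True:
        load = defaultdict(list)
        for batch, day in releases.items():
            load[day].append(batch)
        peak_days = [d for d, bs in load.items() if len(bs) > DAILY_CAPACITY]
        if not peak_days:
            return releases               # plain fixed-length-buffer DBR sufficed
        batch = sorted(load[min(peak_days)])[0]
        releases[batch] -= 1              # lengthen just this batch's buffer

drum = {"A": 10, "B": 10, "C": 10, "D": 12}   # hypothetical drum start days
print(schedule(drum))   # {'A': 4, 'B': 5, 'C': 5, 'D': 7}
```

Note how the output stays the simple fixed-buffer answer for every batch except the one that actually needed dynamic buffering, which is exactly the “start simple, add leadtime/WIP only where needed” behavior described above.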
Think of TOC Scheduling and Production Solutions in 3 Eras:
First Era: The OPT Era. Ok, now forget that one. It was valid in the mid-70s to mid-80s in some plant structure, product structure, and other manufacturing company circumstances. How do we know? Because some companies made astonishingly large improvements in on-time delivery, wip and leadtime reduction, and profits.
The OPT ideas were still valid later, as I’m pretty sure the manufacturing finite scheduling part of the huge financial success of the i2 technologies company demonstrates (vs. the supply chain, underlying peoplesoft/oracle/jdedwards/etc database, and consulting and custom systems integration programming and other implementation professional services parts). i heard very recently that the huge business success and astronomical founder and other shareholder wealth created by i2 came, in large part, probably mostly, from extending the concept from OPT-like pre-selected multiple-bottleneck finite scheduling within the factory (vs. Haystack’s sequentially-selected constraints) to systems and services for entire supply chains. But, more importantly for explaining the huge company revenues and cash flows and profits, i2 also became not just a provider of a finite scheduling factory module and a supply chain management module, but a full-service systems integrator. It sold and provided, and got a slice of, the much larger sales and profit numbers coming from the huge underlying peoplesoft / oracle manufacturing / jd edwards / etc manufacturing, planning, control, and financial database systems, plus consulting and implementation and especially those low-risk, high-margin, high-value-to-the-customer education services, using the finite scheduling package, with its comparatively tiny revenue and profit contribution, as just one of the many reasons — maybe a crucial tipping-point reason, but just one of the reasons — a large manufacturing company should choose them to overhaul its information infrastructure.
That’s very good business sense, and it’s serving customers at a point in time with part of what’s perceived and understood at that point in time by the market as commercially-available and effective, but it’s different from being a physicist in search of the unavoidably correct (i.e., “correct” meaning most valid given relevant system definition, simplest comprehensive solution, ockham’s razor, my “minimum number of simplest concepts with maximum descriptive, explanatory, and predictive power,” or my “ya’ just keep seeing, in addition to the original desired effects, more and more and more unintended beneficial effects that elegantly fit into what is obviously some sort of intrinsically right natural pattern” … any of those criteria for evaluating competing concepts, concept systems, theories, hypotheses, whatever … 🙂 ) comprehensive natural long-term answer that should become understood and programmed and sold and used as the best and natural right answer. It’s the difference between R&D and “sales right now.” The Haystack concept was actually ready in 1990: the “R” research was completed, sufficient “D” happened as early as April 1991, and the other market/industry perception/education/preparation progressed in the rest of the 90s. Meanwhile, Goldratt was working out other aspects of overall company management that would be needed or helpful or newly possible with Haystack systems — thinking processes for implementation and especially identifying and clearing policy constraints, marketing, management skills, project management for new product intro and other projects.
So, on the software and systems side, in the 90s, we, the TOC community, didn’t quite get the big software companies and integrators persuaded to finish their part of the “D” in “R&D”, didn’t quite persuade them yet that the Haystack solution was intrinsically ready, done, and right, with customers ready to demand it (we were working that part too, different ones of us in our own ways, not coordinated by Eli, but coordinated by knowing what Eli was trying to do and seeing stuff we could do to support it) … the usual chicken-or-egg marketing/investment threshold in 1990. but I was making progress with Cincom, with CA’s David Cahn, and a little bit with a pal in EDS. ,/ js&j. Anybody who knew me at the time knew that part of my stated agenda in the 90s was persuading all finite scheduling outfits to migrate their solutions toward Haystack (i even jovially lobbied my pal, Ken Sharma, that i2 should do it. what’s funny is i was doing that while competing with him for rockwell/meritor from a position with thru-put and resonance. actually, i didn’t remember that right. the scene i was remembering was different people, but, if i had had the chance to lobby ken at that time, i’d’ve done it for sure : ) and all manufacturing database providers to modify their systems to support Haystack, and all MES providers to build up from their execution systems base to add Haystack functions.
,/ albanyMESguy … In fact, one of the things I tried to do with my copy of Oracle, this must have been 90-91, was to build a simpler new inventory and data system to support BAS/Disaster/GoalSystem to replace other data systems for simpler plants … but I got hung up in various lack-of-knowhow loops because, at that point, I was still learning about traditional manufacturing systems in general, TOC production in general, Haystack in particular, working out issues I later called the TVA financial management system, learning how Eli Goldratt was extending effect-cause-effect and evaporating cloud procedures into the full “roadmap” of TOC thinking processes, not to mention learning relational database concepts and normalization etc etc in general, and Oracle’s way of doing it in particular, all at the same time. Oh, and while I had SQL, some of the stuff I would need required procedural programming languages vs. just query language and, while I thought the answer might be 4GL/CASE tools and was studying them too, those things all need supplementing with procedural language skills (back then, C, later Java, .NET C#). But all those steep learning curves in 89 and the early 90s aside, it was clear even that early that a simpler way with the data system, especially with the wip, would work. … Probably now enough of the right people on the buy, sell, and advice sides of the commercial part of the game agree about enough things that this happens now. ,/ ssi2ks le tn s.king).
Much of the OPT approach is still valid. But it’s not as valid (in the sense of an increasingly-simple and increasingly-comprehensive natural solution that deals with all relevant issues elegantly vs. only succeeding in more narrowly-controlled situation and expectation niches) as the next two TOC stages — simple DBR and dynamic buffering DBR. For historical purposes, and for purposes of knowing how the current TOC manufacturing solution arose, know about OPT. Otherwise, forget OPT. It was Goldratt’s first rough draft. He learned a LOT about everything going on in manufacturing in that era, created some solutions in scheduling and around scheduling right away that are still right (like TVA/throughput accounting and scale of importance, like global vs local priority, making batch splitting and overlapping ok and encouraged vs. taboo, lot sizing issues, work order issues, and, probably most importantly, saw the need to develop an expression of thinking processes robust enough to deal with all the policy constraints that would, in any modern manufacturing company, block the understanding and ability to implement the simple, natural, and effective shop solutions he had worked out by the end of the OPT era). During the OPT era, he created a series of increasingly-valid, but not yet comprehensive, factory solutions. by the end of the OPT era, he had arrived at the suite of concepts we now call simple fixed-length-buffer DBR and had to walk away from the several drafts of OPT concepts — all of which worked somewhere, had a range of circumstances over which they were valid — that had been developed and tried before that. so much for stage I, OPT. interesting for perspective, but, when it comes to what to implement today, forget it.
Second Era: The TOC Simple Drum-Buffer-Rope Era. Simple back-of-the-envelope and moderately-simple modified-existing-system 85/86 Goal/Race fixed-length-buffer single-constraint DBR that gets the same great results as OPT, but with a simpler solution that’s more comprehensive and has fewer side effects. simple DBR synchronizes the factory with a “drum”, supports on-time delivery, cuts shop leadtimes and wip, and lets quality and other improvements be focused by watching what happens in the “holes” in the buffers in buffer management.
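The “holes in the buffers” idea can be sketched as the familiar three-zone buffer management rule (the equal-thirds zone cut points here are a common convention I’m assuming for illustration, not a mandate): a batch that should already be sitting in the buffer but isn’t is a “hole,” and the less buffer time remaining, the stronger the reaction.

```python
def buffer_zone(time_remaining, buffer_length):
    """Classify a missing batch (a 'hole') by how much of its protective
    buffer time is left, using assumed equal-thirds zone boundaries."""
    frac = time_remaining / buffer_length
    if frac > 2 / 3:
        return "green: ignore, normal variation"
    if frac > 1 / 3:
        return "yellow: locate the batch, plan an expedite"
    return "red: expedite now"

# e.g. a 9-day shipping buffer, a batch not yet arrived with 2 days left:
print(buffer_zone(2, 9))  # red: expedite now
```

Recurring red-zone holes traced back to the same work center are exactly the signal that tells you where to focus quality and other improvement efforts.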
Third Era: The TOC Haystack Syndrome Era. DBR with dynamic buffering does the same thing as basic drum-buffer-rope, but lets it be done for more complex scheduling situations AND lets the scheduler start with less leadtime/wip (shorter time buffers) and less resource staffing and overtime, and only add the leadtime/wip and staff/OT hours or offload or move to alternate resource, etc as needed, AND see these needs BEFORE they happen, not have them in their face with no time to react.
Goldratt never walked away from DBR, so that’s about the sixth reason so far that the entire paper might appear to be useless because it’s built on an almost complete lack of understanding of what TOC is and even of what the operations/manufacturing/logistics part of TOC had, by 1990, become (focusing only on the factory part here, for the moment, leaving aside the supply chain and critical chain project aspects). But the paper isn’t useless; it created the opportunity for these points to be made. The man had the testosterone to get out there with an ambitious analysis and bold alternative solution with the intention of fixing something he thought was wrong. That’s good. When he improves his understanding of TOC and then re-applies the same ambition and horsepower, more good things will come of it. If it makes anybody feel any better, I very seriously doubt if the author is the only operations, manufacturing, software, and manufacturing systems expert or professor or professional or consultant or executive who didn’t, until now, understand these things about TOC, CLR, DBR, or Haystack stuff. Bet a buck most did not.
Can almost hear my pal saying, gee, i’m glad we gave that guy a free jonah course … : )
Ok, that was thru jan 10, 2011 early am …
Later That Day …
caution: the usual … drafting, drafting, drafting … what started out as a few afterword tidbits turned into essays with the usual interesting digressive branches … will work on straightening it out a little if have time … in the meantime, some useful and interesting points are coming up ahead here …
Haystack Architecture: 3 Modules and a Database
“What If” Module
Buffer Management Module
Scheduling Module
Database System (with transactions supporting the three modules)
3 Modes of Haystack Scheduling and Flow Management
Flow Scheduling – No physical capacity constraint. Order due dates and very short shop leadtimes are used to schedule material releases that flow, self-scheduled, through the factory. This is usually done without a Haystack computer system simply by over-investing in capacity and then (a) in a simple shop, calculating material release dates on the back of an envelope or, (b) in a more complex shop, trying to keep some control over the time distribution of order due dates and then tinkering with the leadtime offset values in the manufacturing database system’s bill of materials table/file that are used to calculate material release dates. But a Haystack system will let you manage your very expensive over-invested-in-excess-capacity low-ROI factory too, without the awkward work-arounds in the manufacturing database system’s bill of material table/file and with a lot more common-sense flexibility in dealing with forecast orders, order promising, schedule changes, etc.
Drum-Buffer-Rope (aka Simple DBR. aka Single-Constraint Drum-Buffer-Rope) – This is what everybody knows and loves as “drum-buffer-rope” scheduling and “buffer management” shop floor control as in the 1984/86 The Goal and 1986 The Race. Like “flow scheduling” mode, simple single-constraint drum-buffer-rope is also usually done with backs of envelopes in very simple plants and with modest modification work-arounds to manufacturing systems in more complex plants, but can be done with a Haystack system too.
Dynamic Buffering – Or Dynamic Buffering DBR. Everybody knows Haystack systems do this. It’s the other two that are big news to most people. It’s a very big deal that a Haystack system lets a company move quickly and easily among these three modes. Think agility. Think lean as in “right sizing.” Think ROI.
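The back-of-the-envelope release-date arithmetic behind the flow-scheduling mode above can be sketched in a few lines (a minimal illustration only; the function and parameter names are mine, not from any Haystack product):

```python
from datetime import date, timedelta

def flow_release_date(due_date, shipping_buffer_days):
    """Flow mode sketch: the shipping time buffer alone times the
    material release; work then flows, self-scheduled, to shipping."""
    return due_date - timedelta(days=shipping_buffer_days)

# An order due Feb 21 with a 10-day shipping time buffer
# releases material on Feb 11.
release = flow_release_date(date(2011, 2, 21), shipping_buffer_days=10)
```

The same subtraction is what the bill-of-material leadtime-offset work-arounds approximate; a Haystack-style system just does it per order, against the shipping time buffer, without touching the bill of materials.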
Another Take on the Three Haystack Modes
Haystack Production “Modes,” not Haystack “modules,” and not TOC DBR scheduling “phases,” and not Haystack “applications,” and not TOC scheduling “eras”
Haystack “Modules” are “What If”, Buffer Management, and Scheduling — all working with a database.
TOC DBR and Haystack Scheduling “Phases” are “Identify”, “Exploit”, and “Subordinate”
Haystack “Applications” are Scheduler, Purchasing, Order Promising, Material Release, Manufacturing Engineer, Product Engineer, Quality Engineer, President/General Manager, Controller, Marketing, Sales, and probably more.
TOC Scheduling “Eras” are the OPT era (mid-70s to 1986, bottlenecks-based), the TOC Simple Drum-Buffer-Rope era (1986-1990, constraints-based), and the TOC Haystack Syndrome Dynamic Buffering DBR Era (1990-present, includes prior DBR which includes simple shipping time buffer flow manufacturing)
Haystack Production “Modes” are “Flow Scheduling (shipping time buffer provides timing of shop releases)”, “Drum-Buffer-Rope”, and “Dynamic Buffering DBR.”
These modes don’t have to be turned on and off with a switch or anything like that. They are just there, ready to help you respond effectively to your company’s situation, which has been created by decisions about investing in machines, hiring production workers, and committing to sales order due dates, and by any production-affecting events (such as late supplier deliveries, scrap, quality or setup or run-time surprises on prototypes or early runs of new products being introduced into the main factory flow, and machine downtime).
In other words, another interesting tidbit to know about the circa 1990 Haystack-generation TOC operations systems concept …
[ i wrote “circa 1990” but the solution is as unavoidably correct today as it was then … why? because a physicist, in his physicist’s mind, erased all the existing policy and systems stuff from the picture in his mind, and “saw” the simple and natural possibility for how capacities, demands, product structures, pricing, and materials flows could ebb, flow, and change under varying circumstances … and then created a system architecture and processes to reflect that … that’s why, over 20 years later, it’s still the unavoidable right answer for the systems industry and its manufacturing customers … as long as ROI still exists as the most important of all the many many other measures of a manufacturing enterprise, the Haystack solution will be the place the systems industry needs to get to in order to get it right … ]
… is it allows for three major modes of use. I’m not referring here to the 3 modules plus database discussed in Haystack — the “what if”, “buffer management”, and scheduling plus database. I’m talking about three scheduling and shop management modes — (1) dynamic buffering DBR (for reasons of ROI, creating one or more sequentially-selected constraints and using variable-length time buffers), (2) simple (one-constraint) DBR, and (3) flow (letting order due dates “schedule” the shop … at the expense, by the way, of increased capital investment in plant and equipment over-capacity, increased operating expense for increased workforce size over-capacity, and consequent lower ROI, as the cost for feeling good about having simple visual no-constraints happy happy joy joy flow manufacturing … if you insist on wanting an entire factory in flow, with no physical constraints at all — and I’ve had my ear bent by manufacturing consultants and manufacturing company managers and even a few execs illogically trying to persuade me that avoiding having constraints was the right approach …
update here jan 13: i haven’t read over all of what’s on this page. my eyes aren’t so good these days and proofing can be a bit of a challenge, but I did happen to return to this passage and notice this error. there, of course, may be others on the page. what? errors, me? smart ass … anyway, as to where I broke into this sentence …
ok, saying that folks were, “illogically trying to persuade me that avoiding having constraints was the right approach,” as if it didn’t matter what reality the speaker and listener had in experience associated with the words, and, as if for any of those meanings, there’s no range of validity on the statement, is a mistake. I could elaborate, but, for the moment, I’ll just leave it there.
Ok, I’m back. This might get a little bit long. I’m remembering when I thought this through back in the 90s. Where to enter this discussion? … I know where it’s going and remember most of the main points, but where to enter and how to proceed reasonably clearly and efficiently?
Let’s try starting this way. Consider a factory with a lot of equipment and work centers in it. The equipment is there 24 hours per day and 7 days per week, right? That sounds like a lot of capacity. It is. Potentially. I’ve known one plant that operated that way, a stainless steel plant in Ohio. However, if no people are scheduled to operate the equipment, the effective capacity is what? Answer: If no people go to work to operate those machines, the capacity is zero, right?
If we hire, train, and pay enough people to run the equipment for one 8-hour “day shift” each day, Monday through Friday, there’s only 40 hours of effective capacity despite the fact that the equipment is in the building 24/7. Obvious, right? That’s the way TOC works. Start with the obvious and stay with it all the way to the sublime. Oh, that’s new. That’s nice. : )
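To pin down that staffing arithmetic (a trivial sketch, using the numbers from the one-day-shift example above):

```python
def effective_weekly_hours(shifts_per_day, hours_per_shift, days_per_week):
    """Effective capacity is staffed hours, not wall-clock machine hours:
    equipment in the building 24/7 contributes zero without people."""
    return shifts_per_day * hours_per_shift * days_per_week

one_day_shift = effective_weekly_hours(1, 8, 5)     # 40 staffed hours
around_the_clock = effective_weekly_hours(3, 8, 7)  # 168, the full 24/7
```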
I’m remembering now one more thing … the things Eli used to say, and Larry Shoe used to talk about in terms of “preferring to have ‘the constraint’ …” somewhere … was that in the market, market constraint? or in the plant, physical constraint? Well, one of the things about doing TOC correctly is to avoid just automatically accepting what Goldratt or any other TOC expert says as a “best practice” … The appeal of Goldratt and TOC is that you don’t have to take anything on faith. You come to expect that, when Eli says something, if it’s not clear right away, you can think about it and figure out why he’s saying what he’s saying. So we stay with the thinking until we see why the suggestion or principle or procedure is what it is … and be on careful watch for when it does and doesn’t apply … ranges of validity … so I won’t try to remember whether it was “prefer constraint in the market” or “prefer the constraint in the plant” … I’ll just, as usual, wait for the answer to become obvious, along with validity ranges, in the stream of verbalizing, of systematic application of common sense, which is one way I like to describe TOC …
I started this discursive little essay with a few images/points in mind, none of which I’ve gotten to yet. Having made the point about the equipment available 24/7, but not really effective production capacity until staffed with production workers, I’m not sure I’m liking that entry point after all.
But the points I had before starting were: a michigan pump factory that deliberately keeps only M-F day shift and a little swing shift for maintenance and daytime overload, a texas cabinet factory that I think does a similar thing with 2 shifts M-F and a little spillover to back shift or saturday shift, a conceptual picture of higher ROI from single-constraint or dynamic buffering DBR (as compared to “over-investing” in enough capacity to do flow manufacturing without a physical constraint), and the image of — oh here’s where the fully-staffed/unstaffed equipment came from — the image of flow vs. the two types of DBR at various numbers of hours, shifts, and days of staffing on the equipment and the effects on equipment lifetime and ROI. In other words, the last point says the meaning of “bottleneck” (capacity completely filled, no gaps) is different for the plant staffed 24/7 (where they can’t just add a shift or some hours to get some more capacity … that resource is FULL) and for the plant staffed for 1 shift (it’s a bottleneck “sort of”, an “artificial bottleneck,” because it can instantly stop being a bottleneck and become a heavily-loaded resource just by adding a few hours or a weekday shift or a weekend shift).
One important thing to know, that most people don’t think about right away but is obvious once you do think about it, is that, unless a resource is completely overloaded with work 24/7, whether that resource is a “bottleneck,” heavily-loaded “primary constraint,” moderately-loaded “secondary constraint,” or a reasonably lightly-loaded “non-constraint” has everything to do with decisions made about the number of hours people are assigned by management to come to work and operate the various work centers. (Leaving aside cross-trained labor moving from work center to work center, which happens, but we won’t complicate things further with that for the moment.)
Whether any resources, or which resources, are “real 24/7” or “artificial (based on less than 24/7)” “bottlenecks” or “constraints” — also has everything to do with what level of orders in what product mix with what distribution of due dates are accepted by the factory.
That means a company can, in effect, choose whether to have an internal physical constraint, or which resource becomes the physical constraint, by adjusting the level of scheduled work hours up or down, changing the level of orders accepted to work on, altering the mix of products in the accepted orders, or influencing the distribution of the due dates of orders accepted. This is a very important point. Factories don’t just have to react to what’s happening with the way the factory has always been staffed. Management can take a fresh look, decide whether to hire and staff at just 40 hours, or 80 hours, or 24/7 (168 hours), and make marketing, sales, and order acceptance plans that alter the production mix that needs to be handled. They can decide “where to place the internal physical constraint” on a strategic basis vs. “identify what and where the physical constraint is” every day. The investments made in the machines matter too, of course. For simplicity, I’ve been assuming the investment in machines has been established.
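One way to see the “bottleneck vs. artificial bottleneck” point in code: classify each resource by the ratio of its load to its staffed hours, and watch the classification change when a shift is added. The thresholds below are illustrative assumptions of mine, not TOC doctrine:

```python
def classify(load_hours, staffed_hours):
    """Illustrative thresholds only: the same load can be a bottleneck
    at one staffing level and a non-constraint at another."""
    ratio = load_hours / staffed_hours
    if ratio >= 1.0:
        return "bottleneck"
    if ratio >= 0.9:
        return "primary constraint"
    if ratio >= 0.75:
        return "secondary constraint"
    return "non-constraint"

# 42 hours of load on a work center staffed 40 h/week is a bottleneck;
# add a second shift (80 h/week) and it instantly stops being one.
one_shift = classify(42, 40)
two_shifts = classify(42, 80)
```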
Here’s another issue that’s part of this. It will surprise a lot of people who haven’t lived and/or thought through transitions from poorly-organized manufacturing flows in factories with A-shaped product structures (factories with final assemblies) into TOC-style synchronous manufacturing. Even if the capacity situation doesn’t create a “bottleneck” or even a heavily-loaded “primary constraint,” the work in the shop has to be synchronized in the sense that build-able combinations of parts need to flow through the factory to final assembly. Some say just use sales order due dates … but … um … but a lot of things … which are relevant to this? … here’s a place … sometimes … people in the factory find it natural to use some particular work center or group of similar parallel work centers (like all the final assemblies) to be the “drum” pacing resource … The main point is a plant may well do drum-buffer-rope even if it doesn’t have a “true” 24/7-based “bottleneck”, and even if it doesn’t have an “artificial” “bottleneck” created by only scheduling a work center 40 hours per week, and even if it doesn’t even have a heavily-loaded “primary constraint” on either a “true” 24/7 basis or “artificial” scheduled labor basis … It might do drum-buffer-rope and just pick a non-bottleneck non-constraint resource to be the “drum” to facilitate synchronization of order due dates with shop material release dates, and both pacing resource and final assembly batch sequences all of which together synchronizes the work throughout the factory.
What’s that mean for the original issue?
What was the original issue? : )
Ok, the original issue was whether it’s illogical for management to deliberately avoid having a physical constraint. Answer: Yes and no. Another answer: Maybe. Another answer: It all depends. Isn’t TOC great? All these great questions and answers!
A lot depends who owns the manufacturing company and what financial and non-financial results/effects they want to see happen from that ownership.
One place to start is this: A company doesn’t want to be market-constrained for the level of capacity associated with the level of staffing that they want to think of as their “permanent work force.” There are a lot of good reasons for this. It’s expensive in several ways to bring a new person into the company, train them, live through their initial low productivity and mistakes, lose all that when it doesn’t work out and the new employee leaves or is fired, then do it all again and hope it works this time. Compare that with a long-term high-productivity employee who fits in well, shows up every day on time, works reasonably hard, and knows the company well enough to participate in customer relationships, product improvements, and process improvements. Also, the local community respects the companies that provide stable good jobs vs. those that hire and fire often for small dips in sales. And losing a talented employee who knows the company takes your company’s proprietary information about products, manufacturing processes, prices, costs, and customers to your existing or new competitor. So you don’t want to have difficulty keeping sales and TVA cash flows high enough to keep your long-term stable work force steadily employed. Which means you don’t want to have a “market constraint”, an inability to get enough sufficiently-TVA-profitable sales, for the level of staffing that is your “permanent work force.”
So let’s start there. Let’s start with a level of Operating Expense (OE) associated with that level of permanent work force, say the michigan pump company’s M-F day shift. That’s the salary and benefits of all those people plus the other OE to keep the factory going (heat, lights, management salary and benefits, supplies, phones, copy machine, all the usual stuff), plus funds for future replacement machines, additional capacity, owner profit expectations, and loan payments. So we know that level of OE. We need a level of sales of the right mix of products to get the TVA to pay for that OE. Not having a “market constraint” means not failing to have adequate sales to meet that OE, and not struggling to make it either. Not having a “market constraint” means being positioned in enough markets with products at the right volumes and prices to generate at least the required OE. There are TOC concepts and approaches to ensure this is the company’s condition — not “market constrained.”
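The “not market-constrained” condition described above reduces to a one-line check (a sketch; TVA here is price minus truly-variable cost, per the usage on this page, and all the numbers are invented):

```python
def market_constrained(orders, operating_expense):
    """True when the booked orders' TVA cannot cover OE."""
    tva = sum(o["price"] - o["truly_variable_cost"] for o in orders)
    return tva < operating_expense

orders = [
    {"price": 900.0, "truly_variable_cost": 350.0},   # TVA 550
    {"price": 1200.0, "truly_variable_cost": 500.0},  # TVA 700
]
# TVA of 1250 against an OE of 1000: not market-constrained.
ok = not market_constrained(orders, operating_expense=1000.0)
```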
Good so far.
Now a company can be in a “no market constraint” condition and also a “no internal physical constraint” condition, if that level of “permanent work force” can make the amount of product that meets the needed OE and still have capacity — time — left over to play cards … like the folks at our lawn mower company. Flow manufacturing works in an environment of no physical constraints, or if all the processing steps have been combined into one cell. But, if the easy flow comes from a lot of spare capacity, nobody’s going to leave things that way, right? Sales will sell more and grow the production volume until some work center in the factory has to be treated as a “constraint” and handled in that special way TOC people are familiar with. And drum-buffer-rope starts.
So, if people tell me, they want to avoid internal physical constraints in the sense of not filling the plant to 24/7 so they always have some overflow capacity they access via overtime or new hiring, to prevent customers from going to competitors for emergency or big short-term orders, I say, ok, that’s logical.
If they said they want to avoid an internal physical constraint in the sense of, for a given level of labor, leaving excess capacity in place, I say, ok, that’s unusual, but maybe you have your reasons, things you want employees to do in their spare time at work.
So, at the end of the day and discussion, I think most companies will want to be in a “non market constrained” condition for their preferred level of staffing, to create their preferred and planned level of effective capacity, and, given that level of effective capacity, tilt sales volume and/or mix to the point where excess-capacity-everywhere flow manufacturing changes to simple drum-buffer-rope and then, if they want, continue on to dynamic buffering at that same level of staffing, because why not get the additional TVA cashFlowMoney from that same level of permanent work force staffing?
whew! long haul, but that’s the right handful of perspectives to have in order to get one’s “arms around” the major issues in this question of, “should we avoid, or deliberately take action to get, an internal physical constraint?”
and as to market constraint? seems always a good idea to not be in that condition. be always making sure we’re not market-constrained and always taking proactive action to make sure one doesn’t happen in our company’s future. ideally. not saying any of these things is easy. but having a coherent and correct way of thinking about the right important things helps individuals and teams gather force more effectively and avoid doing ineffective things.
oh, you know where my comment about people “bending my ear” came from? … some of my non-TOC manufacturing friends did me the favor early in my TOC career at Apics of giving me the counter-arguments to the TOC conclusions … a standard response one would hear from the MRP I, MRP II, and especially CRP (Capacity Requirements Planning) crowd was that, once they do their Master Planning of the due dates in the Master Schedule, run the little mrp (materials requirements planning), and use those planned order dates as the basis for running CRP, the CRP results will tell them how much labor to add to avoid having constraints. Suffice to say, that was once a pretty good way to manage reasonably well, and was better than having no systematic way of managing, but it’s very clearly a thing of the past. It’s the standard MRP/CRP way of virtually ensuring over-investment in capacity for a given level of sales, probably at a lesser profit generation rate from the sales (lower TVA margins). So there’s the JIT and Flow ways of virtually ensuring over-capacity and lower ROI, and the MRP/CRP way, as compared to not having a market constraint and building volume and smart mix to either a simple DBR or dynamic buffering DBR production state.
… a Haystack-compatible system will help you do it … but it will also let you work smoothly out of it into smarter operation and higher ROI … remember, as my pal, larry zycon would say, having no constraints always means lower ROI … now that might be ok sometimes, range of validity et al., ie, for certain circumstances, but, most of the time, it’s better either to have not over-invested in capacity for a particular sales volume of product, or to kick sales and marketing in the butt to fill up the plant some more, to reach a nice point of having one or a few constraints of your choosing, choosing via capacity and order-promising decisions, to give both a nice steady high TVA/cashFlow — which investing in over-capacity will still give you if you ignore or don’t know about capital costs and depreciation “non cash” accounting items — and ROI … only being in a situation with some constraints gives you both … once again, though, this is TOC, range of validity, using a practice or even a “best practice” only when it makes sense, not just because it rose in somebody’s eyes, under certain circumstances, to the level of “best practice” … so, generally speaking, getting to where zycon, valmont, itt ac pump, kent moore cabinets, and lots of other toc companies got, through a nice series of transitions, each with “not too much capacity” and “not too little capacity” for the cash-generating product volume, with one or a few well-managed constraints, giving nice sales, reported profits, tvaCashFlow, and ROI at each step … that’s what most often makes sense …
[ subject: the blackmer/dover caveat about maximizing short-term ROI … another case study in range of validity …
… having said all that, as always, there’s a range of validity on what I just said while pounding on the table … it’s what I’ll call the blackmer/dover (and i’m pretty sure also the zycon, kent moore cabinets, itt ac pump, and rotron caveat) … which is basically that a manufacturing operation isn’t always seeking maximum short-term ROI from the physical plant and equipment assets in its factory … like anything else, it’s not obvious until you see it, and, once you see it, it’s then obvious all the time … : )
… the first time I thought about it was during a lunch conversation I had in grand rapids back in 1990 with my friend, tom s … my other pals, maury and the other tom, were there too … four of us … a stone’s throw from amway’s headquarters …
anyway, that day at lunch, I was listening to the management team, getting the picture about how the company thought about itself, where it had been, what it was trying to do … and I was, as always, in my TOC “whatever else is going on, watch for opportunities for the client to grow the TVA” mode …
[ I hadn’t invented the term, TVA yet … that came in late 97 or early 98, so I was actually thinking in Goldratt’s term, “throughput”, meaning “financial throughput” not “units of physical throughput”, which is why I invented the term TVA because it was a pain-in-the-neck like right now to ensure people know it’s cash-flow-like throughput, but Goldratt chose the term to make a manufacturing company seem, at the global level, to be a money factory, which helped solve the local vs global measurement problems, and how do i get out of this interesting, but everlasting, loop … ] [ ok, there … whew ]
… outside of the friendly social chat, most of the conversation about the business was between tom s and myself … the other three guys and I talked all day everyday … this was the opportunity to listen to how the chief thought about and spoke about things … from time to time, I asked the usual manufacturing consultant questions … like how many shifts do you run? … tom said they worked 1 shift monday through friday, with maintenance on swing shift and also sometimes some production on swing shift to handle what was needed to keep day shift on schedule … i said, great … what i was thinking was, great opportunity for a big TOC win here … lots of physical plant capacity to handle the growth in sales that would come from getting leadtimes down and on-time delivery stats up using TOC … at that point, i had a lot of experience in general, and a lot of knowledge of haystack scheduling, but not much experience in manufacturing … as a TOC guy, in the attitude we then thought of as “throughput world”, which I later called, “TVA world”, which meant, among other things, “achieving ROI objectives now and in the future using scale of importance with growing TVA/cash above other measures”, i was thinking, ok, cool … with only 1 shift running, there’s 2 shifts and weekends of capacity to fill with new sales when we get the leadtimes down and on-time delivery numbers up. cool. 
so i tested that idea, but tom said something like … well, we run just one shift because, if we run into problems on day shift, or if our customers need us to respond immediately and effectively to short-term demand … our customers are reps and distributors who don’t know for sure when they’re going to suddenly close a deal for 100 pumps for 100 city firetrucks … or 200 gasoline trucks … and our corporate parent likes being able to predict how much sales, profits, cash, and assets they’ll need to show wall street from our operation, so, when we get the leadtimes down and make our on-time delivery better, we’ll probably just stay pretty much where we are, the same size of company in terms of overall sales and profits and cash flow and permanent work force, maybe keep growing it a few percentage points each year, but we’ll be serving our customers even better with shorter leadtimes and better on-time performance, and the better synchronization will make operations smoother and better for quality and worker quality of work life, which will make our current strong position with our customers and employees even better … he also said, now we are, however, very willing to do something like the idea in the “P and Q” exercise and shift mix a little bit every once in a while and give profits a nice little boost when we need or want to (ie, in many of the standard TOC books and courses that show how getting sales and marketing to shift the mix of products sold can change how much TVA/cash/profit gets generated by the factory)
that’s what tom said …
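The “P and Q” mix-shifting tom mentioned comes down to ranking products by TVA per constraint minute rather than TVA per unit. A sketch in the spirit of the classic exercise (invented numbers, not the exercise’s exact figures):

```python
def rank_by_tva_per_constraint_minute(products):
    """TOC mix rule: prefer the product that earns the most TVA per
    minute of the constraint's time, not the most TVA per unit."""
    return sorted(
        products,
        key=lambda p: p["tva_per_unit"] / p["constraint_minutes_per_unit"],
        reverse=True,
    )

products = [
    {"name": "P", "tva_per_unit": 45.0, "constraint_minutes_per_unit": 15.0},
    {"name": "Q", "tva_per_unit": 60.0, "constraint_minutes_per_unit": 30.0},
]
best = rank_by_tva_per_constraint_minute(products)[0]["name"]
# P earns 3.0 per constraint minute vs Q's 2.0, so P ranks first
# even though Q has the higher TVA per unit.
```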
… so sometimes, the role of a company manufacturing division in the overall corporation is to be a good steward of a fine product line, with ever-stronger competitive position and brand loyalty with satisfied customers at good win-win value-based (vs cost-based) prices, and solid relationships with skilled long-term employees, that delivers a reliable steady level of sales and profits and cash flow and assets up through the financial accounting procedures to the parent company’s income statement and balance sheet …
another issue we didn’t discuss at the time, but tom had probably thought of before was that running those same machines 3 shifts per day instead of 1 shift per day 5 days per week (15 vs 5 shifts per week) and then adding 6 weekend shifts (now 21 vs 5 shifts per week) would wear the machines out over 4 times as quickly as they were wearing out in their existing schedule.
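Tom’s wear-out point in round numbers (a sketch, assuming wear tracks staffed running hours):

```python
def wear_ratio(shifts_per_week_a, shifts_per_week_b):
    """How much faster schedule A wears the machines than schedule B,
    assuming wear is proportional to staffed running hours."""
    return shifts_per_week_a / shifts_per_week_b

# 24/7 (21 shifts/week) vs one day shift M-F (5 shifts/week):
# a bit over 4 times the wear rate.
ratio = wear_ratio(21, 5)
```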
incidentally, for the stock market analysts, CFOs, fans of my pals at Stern-Stewart of EVA fame, and accountants out there, the TVA financial management system deals with the “depreciation” non-cash accounting item by … let’s see … better slow down here … i have to think this through again … the essence is cause-and-effect financials … “what it actually is” financials … for decision making … vs all the accounting fictions that are either useful or required for reasons other than making decisions … and vs separate required accounting rules for separate required accounting reports … “depreciation” is a fiction required in required reports, but not required or helpful in decisions (except if the way it shows up in required reports has some effect that has to be considered and managed; a good example is the effect of the depreciation fiction on taxes, that’s real) … projecting wear-out rates and replacement dates for machines is not fiction and is useful for decisions … considering the purchase, lease, fund from TVA or fund with debt or equity is not fiction and is useful for decisions … just show all this on a five-year projected TVA-I-OE chart like the one in the detective columbo bit in chapter 6 of my book …
he probably also had thought about the fact that it’s a lot easier to hire, train, manage, and retain a “day crew” of factory workers and supervisors and staff than to find them plus workers, supervisors, and staff for the 4pm-midnite “swing shift” and the midnite-8am “back shift” and weekends. Another factory I was working with at the time in Ohio did work a 24/7 schedule like that, which was part of why my first thought, when hearing tom s ran just 1 shift, was that he might be interested in growing like wildfire. reflecting on it, i’m thinking that it’s probably the very highly capital intensive factories like the stainless steel plant in ohio that find it makes sense to staff around the clock 7 days a week, and they are most likely the exception. as i’m thinking about it, most factories I’m familiar with had substantial capital plant, but weren’t, like the stainless steel plant or an oil refinery, “capital intensive”, and most of those non-capital-intensive factories worked “day shift plus a little” schedules.
so what does that mean for how most companies would use haystack software? and what does that mean for constraints? capacity? excess capacity? ROI?
A Haystack system will let you use any of these three operations scheduling and management modes — flow, simple DBR, or dynamic buffering DBR. It’s the only type of system that will let companies do any of these and change easily (very easily, with virtually no major changes seen by the people on the shop floor) from one to the other as strategy, demand, economic conditions, labor availability, and such change. To make a Haystack system do simple due-date-driven flow scheduling and flow execution, all you have to do is over-invest in capacity in the forms of both plant & equipment and staffing, show that to the system in the available capacity parameters, and the system will dutifully detect no physical constraint and just give your shop people material release dates that let parts flow right on down through fabrication and assembly operations to shipping. No problem. But, from that over-capacity position, if you strategically decide to introduce a new product line and keep the old ones, or develop a new market for the existing products, and use the same factory, the Haystack system will support you there by, in planning mode, showing you which resources will become constraints; in fact, the planning system will have helped you decide which products to emphasize in the new markets, by noting the differential capacity consumption of the products, letting you select which resource would become the constraint; so you know months ahead of time what the single-constraint DBR situation in your factory will become, with control over batch sequence on the constraint and continued flow scheduling in the non-constraint areas. So, unlike with non-TOC “flow manufacturing”, you added that new product line or market without investing in new plant/equipment or increasing staff.
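The easy movement among the three modes described above needn’t be a switch; in a sketch, the mode simply falls out of how many resources the capacity parameters and load turn into constraints (the names and the 90% threshold are my illustrative assumptions):

```python
def scheduling_mode(load_by_resource, staffed_hours_by_resource,
                    constraint_threshold=0.9):
    """Flow when nothing is near capacity, simple DBR when one
    constraint emerges, dynamic buffering DBR when several do."""
    constraints = [
        r for r, load in load_by_resource.items()
        if load / staffed_hours_by_resource[r] >= constraint_threshold
    ]
    if not constraints:
        return "flow"
    if len(constraints) == 1:
        return "simple DBR"
    return "dynamic buffering DBR"

# The press is at 95% of its staffed hours, the lathe at 75%:
# one constraint, so the shop runs in simple DBR mode.
mode = scheduling_mode({"lathe": 30, "press": 38},
                       {"lathe": 40, "press": 40})
```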
If you were adding the line or market in the non-TOC flow manufacturing mindset, you’d have built a new focused factory with over-investment in capacity similar to the first one and, after a while, somebody in the CFO’s office, or on wall street, would begin to notice your much-ballyhooed “simple” “visual” “flow” “focused factory” “cellular” way of always doing production wasn’t using capital as efficiently as competitors. And, not only that, the prices you need to keep getting ROI on that over-investment in capacity would be a bit of a strain if competitors are doing TOC-style constraints-based flow manufacturing and meeting your performance to customers at lower prices. Ouch.
And then, as you decide to tinker with marketing efforts, the tinkering guided by TOC Haystack sbds planning, and get more and more out of the existing plant/equipment/staff capacity, you move from the simple DBR scheduling to dynamic buffering.
And through all three modes, what’s happening is the planning and scheduling is getting a little more sophisticated — but manageable, and matching the realities people are considering outside the computer anyway, because the computers prior to Haystack haven’t been matching what they’re doing anyway. What’s really important is that in all three modes — flow, simple DBR, dynamic buffering DBR — as ROI goes up, the simplicity on the shop floor stays pretty much the same. It’s (1) all flow in flow mode; it’s (2) give a specific batch schedule to one constraint work center and flow everywhere else in simple TOC DBR; and it’s (3) give schedules to one or two or maybe a few or even several similar constraint work centers (like to each of several product-focused final assembly cells), but not even close to most of the work centers, with everybody else doing flow, in dynamic buffering TOC DBR. It’s a huge misconception that Haystack is overly complicated. It’s not. It never becomes complicated on the shop floor. It becomes a little more complicated in the scheduling/simulation/planning as you go from TOC flow to TOC simple DBR to TOC dynamic buffering DBR, but that is just a reflection of what’s true and needs to be considered anyway. And ROI goes up up up as you go from anybody’s flow to TOC flow to TOC simple DBR to TOC dynamic buffering DBR. That’s enough reason for all commercial manufacturing database companies to make the modifications to their transaction sets that will make all this easier to implement.
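To make the three modes concrete, here is a minimal sketch, in Python, of how a planning routine might choose between flow scheduling and simple DBR from the same capacity parameters. Everything here (the Order shape, the buffer lengths in days, the function names) is invented for illustration, not taken from any actual Haystack-compatible product.

```python
# Hypothetical sketch: flow mode vs. simple DBR from one set of
# capacity parameters. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    due_day: int    # shipping due date, in plan days
    load: dict      # resource -> hours this order needs

def detect_constraints(orders, capacity_hours):
    """Return resources whose total planned load exceeds available capacity."""
    load = {}
    for o in orders:
        for res, hrs in o.load.items():
            load[res] = load.get(res, 0.0) + hrs
    return [res for res, hrs in load.items() if hrs > capacity_hours.get(res, 0.0)]

def plan(orders, capacity_hours, shipping_buffer_days=5, constraint_buffer_days=3):
    """Flow mode if no resource is overloaded; otherwise simple DBR:
    sequence orders on the one constraint and rope releases back from it."""
    constraints = detect_constraints(orders, capacity_hours)
    if not constraints:
        # Flow mode: release date = due date minus shipping time buffer.
        return {o.order_id: o.due_day - shipping_buffer_days for o in orders}
    drum = constraints[0]  # simple DBR: one declared constraint
    releases = {}
    for o in sorted(orders, key=lambda o: o.due_day):
        drum_day = o.due_day - shipping_buffer_days
        # Rope: release early enough to feed the drum through its time buffer.
        if drum in o.load:
            releases[o.order_id] = drum_day - constraint_buffer_days
        else:
            releases[o.order_id] = o.due_day - shipping_buffer_days
    return releases
```

In flow mode every order just gets a release date backed off from its due date; declaring the drum changes only the planning math, while the shop floor still mostly sees release dates, which is the point above about shop-floor simplicity staying the same across modes.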
… not only allowed for multiple-sequentially-selected-constraints and variable-length-batch-specific releaseTimeOffset/timeBuffer/wip/shopLeadtime scheduling (usually just referred to as “dynamic buffering”) to support day-to-day operations (usually just referred to as “drum-buffer-rope” scheduling and “buffer management” shop floor control) and also forward-looking decision support (usually just referred to by me as “schedule-based decision support, sbds”), which leads, compared to everything else — i.e., to simple TOC DBR, to “flow manufacturing”, to “just in time” — to superior flexibility and more consistent ROI (we’ll prove that later … this sentence is already getting too loaded up with related true things, but, in case I forget to do it, it’s because it allows companies to get all the flexibility and lean “right sizing” and shop floor simplicity while — unlike most of the others — keeping a lid on the I part of ROI, the “plant and equipment” line item in financial statements shop floor people don’t see or think about, but CFOs, CEOs, boards of directors, and company shareholders do see … i wrote about this in some of my papers presented in various Apics forums in the mid-to-late 90s, but the logic and conclusion derive unavoidably from the principles in Goldratt’s 1990 The Haystack Syndrome … why less I, as in ROI, from even simple TOC DBR, and especially from the less simple — but still only as complex as the real common-sense situation is anyway, and, of crucial importance but not widely understood, still simple in terms of what shows up on the shop floor — TOC Haystack, compared to most non-TOC “lean”, “flow”, “visual”, “jit”, and other approaches? because the non-TOC ones get that “simplicity” by over-investing in excess vs.
protective capacity … the non-TOC approaches don’t even make a distinction between protective and excess capacity … they just focus on shrinking leadtimes, wip, and current operating cost at the expense of capital cost … often at the expense of capital cost, not always, it depends on product structure, plant structure, and timing of mixes of orders … but we’re looking for an approach, and a system architecture that supports the approach, that can adjust itself to all the various scenarios … ) …
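As a hedged illustration of what “dynamic buffering” means mechanically (variable, batch-specific release offsets rather than one fixed shop leadtime), here is a tiny sketch using the camelCase terms from the text; the constraint names and day counts are made up.

```python
# Invented illustration of batch-specific releaseTimeOffset: each batch's
# offset is built only from the time buffers of the constraints it
# actually passes through, instead of one blanket shop leadtime.

def release_time_offset(route, time_buffers, shipping_buffer):
    """Sum the time buffers only for the constraints this batch visits."""
    return shipping_buffer + sum(time_buffers[c] for c in route if c in time_buffers)

time_buffers = {"heatTreat": 4, "finalAssembly": 2}   # days, per constraint

# Batch 1 crosses both constraints; batch 2 crosses none:
offset1 = release_time_offset(["saw", "heatTreat", "finalAssembly"], time_buffers, 3)
offset2 = release_time_offset(["saw", "paint"], time_buffers, 3)
# Batches that never touch a constraint get a short offset, so their
# wip and shop leadtime shrink without any extra capacity investment.
```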
incidentally, this “dynamic buffering” mode of Haystack systems is the mode/use that (1) most people incorrectly think is the only thing that Haystack-compatible systems can do and (2) most people incorrectly think is too complicated to understand and use. It’s actually something very elegant it can do to adapt to a useful level of complexity (people have gone overboard looking for simplicity in manufacturing … there’s always a best balance of simplicity and some complexity) — when it’s useful or needed — and the thing none of the other approaches or systems can do. But it’s not the only thing Haystack systems can do. They can also do simple DBR (with drum scheduling of one constraint) and even simpler “flow manufacturing” (letting the order due dates and releaseTimeOffset/shippingTimeBuffers “flow schedule” the factory with no declared “physical constraint”). And, if there’s no physical constraint in that factory that’s getting those short leadtimes and low wips from “flow scheduling”, what do we know about that factory? Answer: IT’S OVER-INVESTED IN CAPITAL PLANT CAPACITY ASSETS compared to getting pretty much the same low wip and low shop leadtimes from simple DBR, or an even lower wip/leadtime/asset investment from dynamic buffering DBR. I wrote this in some of my Apics papers. I don’t have them handy, but it doesn’t matter, it’s written here. I’m wondering if I wrote it that way in my book. Maybe. Probably. The book diskettes went to the publisher in late 97. I was still accepting invitations to speak and writing a different paper covering different aspects of this for each talk.
Well, I think the proof about ROI that was going to come later just verbalized itself out sooner. That’s fine.
A lot of these points about the effect of TOC and Haystack on ROI, and of TOC with other approaches, were things a few people understood. Some were publishing parts of it. I was trying to create symposia of other people putting parts of their experience, symbolized, into the public domain. I was publishing things in part to cover things not covered and in part to try to get all the important pieces covered in the public domain within Apics.
One could infer these things as unavoidable conclusions from a proper reading of Haystack Syndrome or from reading that and the several papers I wrote, presented, and published in Apics during 1994 through 97 or maybe still in 98, yes also 98, entitled things like, “TOC and Lean Manufacturing”, “TOC and Agile Manufacturing“, “TOC and Process Improvement”, “TOC and Re-Engineering”, “TOC and Activity Based Costing” …
[ … trying to remember … it was at the Apics Aerospace and Defense SIG conference in the 95 – 98 timeframe, probably 97 … I had worked out the way for TOC folk and activity-based costers to work together … not sure if I worked it out completely myself … or whether seeds were planted by goldratt, fox … i know they were saying TOC throughput world versus ABC as part of the necessary attack on allocation-based product costing … there may have been, probably was, discussion in that debate (in Cambridge Mass, bob and eli vs. robin somebody and another leading ABC guy) from the TOC side about knowing everything needed from knowing the effects on operating expense … can’t imagine the debate went anywhere without it … but there were insults and bad feelings arising that day … also avery dennison’s steve buchwald was doing something smart with toc and abc … but, whether i’d heard it before or not, i didn’t understand it well enough to be completely certain and persuasive in insisting on it until i derived it for myself one day and then took it into that presentation at the A&D SIG, the one Tom Nies spoke at and i covered for Ptak and did her presentation for her and met CincyComGal and DanThePaperMan … It was, and still may be, a huge point that needs to be made, though it maybe seems subtle and not a big deal because of the words …
my agenda was working out ways for the various company improvement conceptual crusading camps to work together well with TOC without giving up any of the things TOC was just plain right about, and showing that doing the other things with TOC instead of without TOC gave huge benefits …
In this case, for the A&D SIG paper and presentation, for the benefit of the activity-based costing (ABC) community and our shared client, the manufacturing industry worldwide, I had re-defined “activity based costing” to mean “activity cost analysis” and “activity analysis” and further re-defined all three to mean “cause and effect analysis of the unallocated overhead and direct labor operating expense line items for base case and incremental decision scenarios (thereby completely eliminating allocation-based product costing for decisions).”
This is very important, and I think still not widely-understood outside the TOC community.
As i think about it, I must have worked this out by 89 or 90 for a little presentation I made to new cisa information associates of agi that was the first time i gave the detective columbo bit and a left-to-right-arrow timelined T-I-OE slide. The typical corporate time-on-horizontal-axis diagrams for rows/accounts/ledgerLineItems money flows weren’t being used. I started to use them for the three rows, T I OE, of “throughput world”. there was the vertical “scale of importance” diagram with T over I over OE, but not with horizontal multi-period money account lines. and there were single product unit throughput calculations in p and q exercises. but no timelines, and the timelines and the associated multi-year planning scenario discussion are helpful, i’ll say essential, in persuading the accounting, controller, CFO, CEO audience that needs to be persuaded of the TOC accounting issues.
I’m realizing that I’m right about my point that horizontal visuals are needed for the CFO/CEO/board/stockAnalyst view, but that, when I’m writing of it, I’m combining (1) the right-going arrows of discounted-cash-flow (DCF)/netPresentValue (NPV) analysis — that also have the up and down arrows at points in time showing cash outflows and inflows — and (2) the left-to-right spreadsheet/table displays of the numbers. In fact, I remember now, I went back and forth between the chart/table horizontal-time format and the dcf/npv arrows horizontal-time format … I decided, for my book, for my definitive Detective Columbo case, instead of the diagram of DCF/NPV little vertical arrows on long horizontal arrows, to use a tabular display. Let me get that graphic. Hold the phone here … : ) …
This Detective Columbo chart is the “chart/table” way of visualizing the multi-year financial effects of various combinations of decisions considered, as opposed to the “dcf/npv-like arrows for timeline and cash flows” diagram. (Didn’t my book designer, Sarah Nicely Fortener, do a wonderful job setting up that chart and page!) The dcf/npv “timeline and cash flows arrows view” is still very handy as kind of a quick icon to, with a few quick dry-erase marker pen strokes on an overhead projector transparency, represent the multi-year table of numbers, or to get into dcf/npv discussions if somebody wants to. But, to use a single format to decisively nail the point with either the everyday garden variety activity-based allocation-based product coster, or the best of the world-class players in corporate finance or activity-based costing or activity analysis or activity-based management, it’s part or all of the above table that’s best. It lets the chart’s labels and numbers do most of the talking right away when you’re there, but also later when the person you’ve persuaded thinks, “Wait. How did that go? Naaah, that can’t be right.” It is right. To get to the same level of detail and ironclad case with the arrows, you have to spend a lot of time first telling some people what the arrows are, and then adding some numbers, and then you still haven’t yet started to make the real case about cause-and-effect on the OE line items being enough and making allocation-based product costs obsolete for decisions. There are a LOT of points made, questions pre-answered, and objections/obstacles anticipated and pre-cleared/overcome in the way that Detective Columbo chart’s rows and columns are set up and labelled.
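Here is a toy numeric version of that chart/table idea: a base case vs. an incremental decision scenario, with named OE line items moved only by stated cause and effect and no allocation step anywhere. All names and numbers are invented, and the real Detective Columbo chart in the book has more rows (including the EVA-related ones).

```python
# Toy multi-year "chart/table" view: TVA, OE line items by name, and Net
# Profit for a base case vs. an incremental decision. Every figure is
# invented; the point is that each OE line item moves only by cause and
# effect, with no product-cost allocation anywhere.

years = [1, 2, 3]
base = {
    "TVA":               [1000, 1050, 1100],
    "engineering staff": [200, 200, 200],
    "order entry staff": [80, 80, 80],
    "heat & lights":     [40, 40, 40],
}
# Incremental scenario: a new product line adds TVA and one more engineer
# from year 2 on; nothing else changes, so nothing else is touched.
incr = {k: list(v) for k, v in base.items()}
incr["TVA"] = [1000, 1250, 1400]
incr["engineering staff"] = [200, 260, 260]

def net_profit(case):
    oe_items = [k for k in case if k != "TVA"]
    return [case["TVA"][i] - sum(case[k][i] for k in oe_items)
            for i in range(len(years))]

base_np = net_profit(base)                           # [680, 730, 780]
incr_np = net_profit(incr)                           # [680, 870, 1020]
delta = [i - b for b, i in zip(base_np, incr_np)]    # [0, 140, 240]
```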
The arrows are great for just creating the overall idea and image in mind, or when with TOC-friendly folk, or with folks who don’t really understand or like accounting, finance, and planning, but, if it’s a group that still thinks it’s anti-TOC, and if they’re world-class experts who know what they’re doing, Detective Columbo’s chart and schtick (the little act/skit, in chapter 6 of my book, that creates an ironclad case and then asks the experts to say why it’s not everything that’s needed) is what you want to use to nail the point decisively without the conversation blowing up emotionally due to misunderstandings. Another way i’ve done that table, for the non-CFO level, is to show less of the EVA-related line items at the bottom and more specific OE line items that make the point more visual — operating expense line items over 1, 2, 3, 4, 5 years for things like custom engineering staff (who get used up on each new order, so are a part of capacity and can become a constraint, even a drum, but don’t need to have costs allocated for allocation-based product costs), order entry staff (ditto), production staff, marketing expense, legal and HR staff expense, heat, lights, etc — just to show ALL the overhead and production labor money flows can be dealt with, at whatever level of detail is desired, on a cause-and-effect basis, for any given scenario — and, when that’s done, there’s zero benefit, and substantial risk of bad decisions and, ironically, unnecessary non-value-added activity, in activity-based allocations to product cost: at the front end of the accounting cycle to argue over what the allocation rates should be and to deal with mix changes, and at the back end to unravel variance analyses.
Also, need to be careful with the capital charge thing on my Detective Columbo chart above. It’s the EVA version of the hurdle rate from dcf/npv analyses. True TOC planning and decisions are about the facts of what one might do, what effects that’s likely to have, and whether you want those effects on the business situation, or other effects. All the other measures — even ROI, but also NPV, EVA, and such — take a lesser priority. This is a subtle, but important point: If the management believes a particular move is essential to the survival of the company (on the downside) or to definitely seizing some competitive/structural position in its industry (on the upside), and the TVA-I-OE picture says they can get there, that it’s possible financially — it won’t really matter if the ROI dips a little or even a lot below target for a while, or if EVA and the debt- and equity- and risk-weighted Ibbotson-Sinquefield capital charge or hurdle rate or DCF or NPV or HIJKLMNOP dips a little and says “no go” … the essential value of the TOC Haystack SBDS TVA-I-OE way of displaying operational and financial cause-and-effect, and the reason it is superior to all the rest, is it provides the right data, with the right levels of aggregation, and the conceptual/philosophical leadership to get the right operational and financial facts, and act on those facts, regardless of whether the ABC, EVA, DCF, NPV, or even ROI specialists and their usually useful guidelines are saying otherwise.
The simple little horizontal and vertical arrows all MBAs use when learning about discounted cash flow (DCF) and net present value (NPV) analysis, and all corporate finance people think from, look something like this (I’ll discuss the differences after the image):
The timeline and cash flow arrows diagrams that MBA students use in their DCF/NPV exercises, and that CFOs think from all the time, have the up and down arrows I show in three lines combined into a single investment project timeline. In their diagrams, as with mine, the expenditures for investments and expenses are shown as down arrows under the line, and income from the project is shown as up arrows on top of the timeline. Also, they don’t have TOC’s TVA-I-OE labels on the left. The label on their diagrams is just dollars.
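For readers who want the arrows tied to numbers: the diagram corresponds to the standard NPV calculation, sketched below with an assumed 10% hurdle rate and invented cash flows.

```python
# Standard NPV arithmetic behind the arrows diagram: down arrows are
# negative cash flows, up arrows positive, each discounted by its year.
# The 10% rate and the cash flows are invented for illustration.

def npv(rate, cash_flows):
    """cash_flows[t] is the net cash flow at end of year t (t=0 is now)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: invest 500 (down arrow); years 1-4: net inflows (up arrows).
flows = [-500, 150, 200, 250, 250]
value = npv(0.10, flows)
# value comes out positive here, so a pure DCF/NPV screen would say "go";
# as argued above, that guideline stays subordinate to the TVA-I-OE facts.
```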
As I was making that timeline arrows graphic, I was remembering that I was talking to Drew G about writing a book that worked the TOC TVA financial thing at the CFO/EVA/Stern-Stewart level. Start there and let the discussion ripple down through all the rest of TOC and manufacturing and supply chain. I got busy with other things and, if I’m not mistaken, it still needs to be done. Dream Team: Eric Noreen again, Institute of Management Accountants (IMA) again, Shel at the IMA, Kenichi Igarashi (formerly of NEC corporation, who can also write or help write the book we need for Haystack), Stern-Stewart, Robin Cooper, Robert Kaplan, John Caspari, Charlene Spode, Larry Shoemaker, Thomas Mannmeusel, definitely Eli Schragenheim (now that I think about it, he may already have covered a lot of this in his books, post-OPT and post-agi — all of the substance of it, with results tables and charts of OE line items being financially affected by decisions and sometimes becoming physical constraints … i’m thinking that’s enough for persuading toc-friendly CFO-level minds, but maybe not having the full dcf/npv/eva/hurdleRate ibbotson-sinquefield stuff … there’s a lot of difference — in language, sequence, and style — between how to express it to that kind of rarified and often a bit elitist not-oriented-towards-getting-hands-sullied-or-dirty-in-gasp!-operations wallStreet/boardOfDirector/CFO community that sternStewart (who, i have the impression, is pretty ops friendly for that crowd) serves with EVA ideas and services, vs Apics members and their operations-friendly bosses, to reel in the toc-hostile finance community … he created the famous OPT game, and, post-agi, the famous TOC simulators used in the standard TOC executive decision-making and jonah and other education courses … the ones bob and eli goldratt and donn and dale and avraham and harriett let me attend for free to get me jump-started in TOC in the second half of 1989 … management simulators), of course
Avraham Mordoch (my mentor and pal who was development program president for the Goldratt Institute’s Business Analysis System (BAS) that was later renamed the Goal System), maybe Charlie from Rotron would be willing, Dick cfo from egg, collaborate with Tom Nies’ and Larry Ellison’s and similar folks, with Bob and Eli available for advising, to do the really pretty straightforward thinking-it-through work to get the right tables/charts for this and that and the other obvious recurring company view/decision, and the correct arrows diagrams for this and that and the other view/decision, and just keep doing it until it’s done, understood by people, and available as standard stuff in all the world’s manufacturing planning and control (MP&C) computer systems.
Update: One of the things this book by the Dream Team about the TOC TVA Financial Management System needs to do is something I’ve done for myself back in the 90s, but didn’t build into the version of the Detective Columbo chart above, which is address the issue of machines wearing out and being replaced. The story told about the 5-year plan needs to include a discussion of how replacement machines will be funded — from internally-generated TVA funds, or from divisional debt, or corporate capital infusion? I’m avoiding using the term, “depreciation,” because it is not a real cash flow fact in the cause-effect picture. For existing machines, if they’re leased, the lease payments are probably in OE already … anyway, the trick is to avoid getting confused by accounting fictions, to stick to the cash flow facts, and to have the view for a particular decision thought through so it has the relevant facts. Not having an accounting background and not having an MBA might help in not getting confused about what’s an actual cash flow and what’s an accounting phantom.
There’s a world of difference in what we need to do and see for corporate, GAAP, and tax reporting and what we need to do and see for making decisions. It’s the attempt to use the same numbers, levels of aggregations, views, and procedures for both reporting and decisions that causes much of the confusion and problems.
Even the I in ROI in standard TOC discussions becomes a problem (purchased old bargain machine, or market/replacement price, or maybe not a single purchase price but an ongoing lease with buyout) if you’re dealing with reports based on past investments and operations, vs. the easier forward-looking decisions about incremental I vs. incremental TVA and incremental OE (and, therefore, incremental Net Profit and resultant ROI). There are many different views, base cases, and incremental cases and factors to deal with.
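A minimal sketch of that forward-looking incremental view, with hypothetical numbers; only the deltas caused by the decision enter the calculation:

```python
# Hypothetical incremental-decision arithmetic: only the changes caused
# by the decision (delta TVA, delta OE, delta I) enter the calculation.

delta_tva = 300_000   # added throughput-value-added per year
delta_oe  = 120_000   # added operating expense per year (e.g., one more shift)
delta_i   = 450_000   # added investment (e.g., the bargain machine)

delta_np = delta_tva - delta_oe        # incremental net profit per year
incremental_roi = delta_np / delta_i   # incremental NP / incremental I
```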
It will be useful in this book to remind the stockAnalyst/CFO folks who deal every day with fictional “cash flows” generated from GAAP external reporting procedures (I believe used internally as well, maybe even for EVA) … the EBITDA fiction, which is a very good thing for the stock analyst to use to approximate cash flow from external reports, is not what we want to use internally for decisions … at most, we internal people might want to be aware of how our decisions will roll up through reporting procedures and look at higher levels and externally, but we want to deal with cash flow facts. EBITDA is Earnings Before Interest, Taxes, Depreciation, and Amortization, and is the closest an external analyst can get to actual cash flow using only externally reported data. Since reported “earnings” has subtracted both non-cash accounting fictions like depreciation and amortization and real cash flows of interest and taxes, EBITDA is an external person’s equivalent of an internal person’s TOCThroughputTrueValueAdded-OperatingExpense=NetProfit.
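To make the EBITDA point concrete, here is the add-back arithmetic with invented figures:

```python
# Worked example of the paragraph's point: EBITDA reconstructs an
# approximate cash operating result from external GAAP reports by adding
# back what "earnings" subtracted, paralleling the internal
# TVA - OE = Net Profit view. All figures are invented.

net_income   = 400
interest     = 50    # real cash flow, added back anyway for EBITDA
taxes        = 150   # real cash flow, added back anyway for EBITDA
depreciation = 120   # non-cash accounting entry
amortization = 30    # non-cash accounting entry

ebitda = net_income + interest + taxes + depreciation + amortization
# the outsider's closest proxy, from reported data only, for the
# insider's cash-fact view of operating performance
```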
And, in my book, I forgot to throw one more rock and give one more compliment to EVA. The rock to throw is that the EVA people don’t get on the “let’s get rid of allocation-based product costs at division levels” bandwagon. Stern-Stewart needs to get with the program, man. (They’re good guys and gals. They will.) The compliment is that I was in a company that implemented EVA, and it went in smoothly, was well understood, and was a big improvement in coherence in a huge multi-group multi-division corporation.
So, let’s go, Dream Team. We need this TOC for CFOs book. We have Eric Noreen’s TOC book for management accountants. But we need to bring the viewpoint up a level or four from the factory management accounting view and work it from there (factory to division to group to corporate to external analyst, that’s four levels).
But, back to this notion of how and when the suite of ideas came together — Detective Columbo, cause-effect on OE line items, redefining ABC, and such — I did some of the key parts basically 4 main times (with a lot of more informal uses in between) – (1) at that 1990 Goldratt Institute CISA meeting (with some, but not total confidence … that was a friendly audience pre-sold on TOC and I think mostly not familiar with CFO/CEO mind/view), (2) Apics A&D SIG probably 1997 (by this time, I was sure and was ready for accountants, controllers, CFOs, Stern-Stewart and their EVA world, and anybody who showed up), (3) my book, written 1997, published 1998, and (4) article in Apics Magazine, submitted via email in Feb from spring school vacation break at DeVos’ Peter Island, published in April, 1998.
… “TOC and MES (Manufacturing Execution Systems).” Dick Ling handled the important “TOC and Sales and Operations” paper. I also did papers and presentations on “TOC and MRP II (Manufacturing Resource Planning)”, “TOC and Just In Time”, and “TOC and TQM”, but, unlike those others, these had been done several times before by others (like Jim Cox, John Blackstone, Dettmer). I just did them again to create a pretty complete set of statements, in roughly the same format, about why TOC could be used with, “over”, or “under” the other prevailing management and improvement paradigms that people thought were “either/or” situations with TOC — with specific good reasons, cause and effect, to expect even better results than using the other good ideas without TOC. Each time I was invited to speak somewhere, instead of hauling out a standard speech and doing it again for the umpteenth time, I wrote a new paper that corrected one of the many misconceptions about TOC vs other approaches I knew were in people’s minds because I was talking with and listening to people. It made each presentation a little less polished than if I were doing the same speech over and over again, but, more importantly, it was creating the case in the published professional technical literature for why TOC should be used everywhere, even when people were also madly in love with other approaches.
Another little tidbit is that it was physically possible to develop and implement Haystack Syndrome-based systems even back then, in 1990 and 1991, despite computing power being substantially more expensive and networking far less easy in the industry at the time, and mainly despite an almost total lack of transaction support from commercial manufacturing database systems software products. That it was physically possible was demonstrated by the herculean successful efforts of Bob Vornlocker and his pals at ITT AC Pump in April 1991. If it was possible to do it in 1990 and 1991, it would be easy to do in 2011 — IF the commercial database suppliers make what are really pretty modest changes to their standard manufacturing-related database transaction procedures and programs.
Never Did Find What Happened to BAS/GoalSystem
Last I heard, it was here, but no visible sign of it today. I suppose it will show up again sometime.
Another Day, Another Several Afterthoughts
jan 11, 2011
Also went back and added all that discussion above about the “TOC and Activity Based Costing (ABC)” paper.
What About The Traditional “Work Order”?, Part II
Earlier on the page, we asked “what about [traditional] work orders?” And “what was their role before now when, clearly, all manufacturing planning and control computer systems suppliers and all custom-programming systems integration houses and all in-house customs programmers are sprinting to adjust their systems to reflect the still-not-widely-implemented characteristics explicitly described or unavoidably implied by the 1990 TOC book, The Haystack Syndrome?” and “what is their role now?” and “do work orders go away completely?”
we only answered that last question. the generic idea of an “order” isn’t going away in any of its obvious meanings as “sales order”, “purchase order”, or, more to the current point, as an “order” for “work” to be done. for people new to manufacturing systems and to manufacturing operations, it won’t be difficult to just get their first impression and understanding of “work order” in the new way. no problem. for people who do have experience, shaking off all the pictures, associations, explicit meanings, and connotations will be a little tougher.
Computer Terminology First Borrowed and Then Meaning Changes
as i was thinking about it the other day, and it recurs to me now, the “work order” situation is a good example of TOC’s “step 5 loop.” the “work order,” in the sense it’s been used for at least the … how long? … it’s 2011, IBM was getting big growth from manufacturing computer systems in I think the 60s, so 70s, 80s, 90s, plus 10 more — so for at least 40 years … and the computer industry probably picked up the term, “work order”, from pre-computer factory administration — computer programs usually do use the terms people were using before the function got computerized. a spreadsheet program could have been called a “table program” or “chart program” or “rows and columns”, but the spreadsheet inventors knew only the bankers and businesspeople who used “spreadsheets” could afford computers, had them, and had a need that could be satisfied in a homerun-hitting way with an “electronic spreadsheet” … hence — i forget the name of the first one — Lotus 1-2-3 dominated, then Microsoft copycatted with Excel and, today, most of us don’t know where the idea of “spreadsheet” came from … when we think the word, “spreadsheet”, we think Excel’s or Lotus’ little rows and columns.
Keep or Toss “Work Order” as a Term?
That’s how it’s going to be now with the idea and the shifting realities that were and will be referred to by the term, “work orders.” Unless, because of the potential confusion during the transition, some other word, like “thing” or “rabbit” or “grapefruit” or “task order” or “process step order” or just “task” … no, not just “task” … there’s a difference between the “order” for a “task” and the “task” itself … actually, comparing “task order” to “work order”, i’m immediately pretty persuaded that “work” is not as clear as “task”, “work” can mean “many tasks” as “work order” does now … let’s leave it there for now … looking at the physical reality of manufacturing, there are several “order”-like phenomena that need terms to refer to them … the one we currently call, the “sales order line item,” the one we call, the “purchase order line item,” the ones we’ve called earlier in this page, “orders for process steps” (BAS/GoalSystem called them, “stations” but I think that’s a good term, “task order” is better), and … let’s see … that may be it … what that’s leaving out is the current “work orders” at assembly and sub-assembly bill of material levels and the “work orders” containing the “routing steps” along the “routings.”
“Task” and “Task Order” Get the Nod
Ok, that discussion did several things, one of which is to show people new to “work orders” roughly what they are and roughly where they have fit within factory environments during the past 40 years or so. It also made a decision: to use “task” and “task order”, instead of “process step” or “routing step” or “station”, for individual processing steps at work stations and for orders for individual processing steps.
TOC’s Step 5: Breakthroughs That Become Constraints
Before wondering how long traditional “work orders” had been around, I was starting to say that the case of the traditional “work order” is a nice example of an idea in TOC, expressed in step 5 of the system improvement process: a breakthrough solution, a brilliant, genius idea from a prior round of system improvement, can be, and often is, the main constraint blocking the next brilliant, genius breakthrough solution in the next cycle or generation. Step 5 says, “go back to step 1, which is identify the system’s constraint, but, caution, don’t let inertia become the new primary constraint.” Here’s Goldratt brilliantly extending traditional physics itself by asserting, correctly, that the concept from the physics of motion called “inertia” also applies to human behavior.
Anyway, that’s a good perspective to have in thinking about traditional “work orders,” because they really do a lot of things in the systems of the last 40 years. The manufacturing concepts and systems designers did brilliant work in using the work order to deal with the underlying manufacturing reality and with the cost and other accounting rules people all viewed (incorrectly) at the time as necessary and correct. (They were necessary, but not correct in how they were proceduralized. Goldratt published a paper, I think in the early 1980s, entitled, “Cost Accounting: Public Enemy Number One of Productivity,” not because cost accounting, in general, is bad. Cost accounting is inevitable and necessary; after all, even the TOC equation, TVA/throughput = net sales price minus material and other totally variable costs, involves a form of cost accounting, ie, valuing the materials and any other units-driven costs on some sort of first-cost FIFO, last-cost LIFO, average, or projected-future basis. The problem was that the allocation-based product costing procedure, and certain inventory valuation/costing procedures within the worlds of cost accounting and “management control accounting,” were creating a wide range of illogical forces. Many of those forces have been cleared up in the last 30 years, like the global vs. local measurement priority; others have not yet been completely cleared up, and some problems have simply been replaced with other problems, like replacing direct-labor-based allocation-based product costing with activity-based allocation-based product costing instead of getting rid of allocation-based product costing entirely and replacing it with cause-and-effect analysis of operating expense line items in specific incremental decision scenarios.)
Anyway, it can get a little complicated, but the point here is: manufacturing grew, starting in the 1800s and on into the 1900s, in the industrial age, with certain assumptions and techniques the manufacturing managers thought were necessary and helpful to success. They worked well enough to grow big companies all over the world. But as the 1950s, then the 60s, and then, faster and faster, the 70s and 80s came along, various people in companies and in universities figured out that many of the things that had always been done didn’t need to be done, or should be done differently. They put those insights to use, gained competitive advantage, and put their competitors out of business; people noticed, people asked how and why, the ones who knew tried to keep secrets, but word gets around and people figure things out too, and that’s the way lots of things have changed within manufacturing companies. In the 80s and 90s, instead of things mostly being done one company at a time,
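Since the TVA/throughput equation above is easy to state but easy to mis-apply, here is a minimal sketch of it in code. All the numbers, and the function name, are hypothetical illustrations of my own, not from any TOC text; the only point is the structure of the formula, and the fact that the material figure itself already involves a costing choice (FIFO, LIFO, average, projected future):

```python
# Minimal sketch of the TOC throughput (TVA) calculation discussed above.
# All figures and names are hypothetical; the structure is the point:
#   throughput = net sales price - totally variable (units-driven) costs.

def throughput_per_unit(net_sales_price, totally_variable_costs):
    """TVA/throughput for one unit: price minus truly unit-driven costs."""
    return net_sales_price - sum(totally_variable_costs)

# Example: an axle sold at 40.00, with the material cost valued three ways.
fifo_material, lifo_material, avg_material = 11.50, 13.25, 12.40

for label, material in [("FIFO", fifo_material),
                        ("LIFO", lifo_material),
                        ("average", avg_material)]:
    # 0.75 here stands in for some other units-driven cost, e.g. freight.
    t = throughput_per_unit(40.00, [material, 0.75])
    print(f"{label}: throughput per unit = {t:.2f}")
```

Notice that the throughput number shifts with the material valuation method even though nothing physical changed, which is part of why the complaint above is aimed at particular costing procedures rather than at cost accounting as such.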
Actually, let’s take this to APICS. In the 1950s, APICS was formed because some companies had figured out how to manage inventory and production better than others, and the people who formed APICS correctly felt it would be good for the country to have ALL US manufacturing companies doing things that better way. That was before factory computing. So APICS had been there for years, figuring out and communicating the concepts and procedures of manufacturing, in the time just prior to when it all got put into a computer in the 1960s, and maybe the 50s, not sure, a while ago anyway, by IBM and others, like Cincom, whose founder, Tom Nies, came from IBM.
Anyway, to make that interesting story short for the moment: though I think the traditional “work order” is still one of the big obstacles to getting manufacturing computer systems in line with the intrinsic natural opportunity that exists in manufacturing, I also think of the traditional “work order,” and the “MRP” and integrated manufacturing planning and control systems it’s a part of, as pretty brilliant solutions at a point in time. You’ll see part of this if I ever get out of creating the context/perspective and get around to describing the details of how the work order fits into the overall cycles of planning, scheduling, tracking, and accounting, both the good/necessary/helpful kind (like recording “actuals”) and the bad kind (like contributing to overemphasizing local measures and probably, I’ll have to think it through again to be clear about it, it’s been a while and I only took it so far before, contributing to dysfunctional inventory valuation and allocation-based product costing). There’s no question in my mind that traditional work orders became a problem, but I’m pretty sure that, if I get through this re-analysis and re-verbalization of the role they played in the systems of the 60s, they’ll look pretty smart. The only way they might not is this: could the people who created the standard, even with computer tech in the state it was in at the time, have selected a different architecture or process? Ie, if a Goldratt-type physicist had been working at IBM at the time, or at the Oliver Wight firm, and “saw” the inherent natural way it could be done, would a different system have come out in the 60s? Or would computer tech limitations, mandatory (even if incorrect) accounting, and rock-solid expert (even if incorrect) manufacturing “best practice” have made it impossible? (I started to list two ways they might not look smart, but they collapse into that one. : )
How Many People Think About “Work Orders”
To understand the main functional and psychological issues involved in moving away from traditional “work orders,” it’s most efficient, I think, to start by saying that, for many, maybe most, people involved in the change, “work order” means the “paperwork”, the “shop paperwork”, that they’re used to seeing all the time in the factory: reading it, using it, passing it along, and talking about it.
That’s one thing that “work order” means and is.
Work Order as Computer Stuff
Earlier on this page, we’ve been discussing a “work order” as a collection of computer system data elements stored in the rows and columns of the system’s Work Orders Table, which contains information about the part numbers and quantities of parts released from the stockroom, and about the process steps in making things like our lawnmower axle example.
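As a purely illustrative sketch (the class and field names are my invention, not any particular MRP system’s schema), one row of such a Work Orders Table, together with its process steps, might look like this:

```python
# Hypothetical sketch of what one row of a "Work Orders Table" might hold,
# based on the description above: a part number, a quantity released from
# the stockroom, and the process/routing steps. Names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class RoutingStep:
    step_no: int       # sequence of the operation (10, 20, 30, ...)
    work_center: str   # where the operation is performed
    description: str   # e.g. "cut to length", "turn ends"

@dataclass
class WorkOrder:
    order_no: str              # identifier printed on the shop paperwork
    part_no: str               # the part being made (e.g. a lawnmower axle)
    qty_released: int          # quantity issued from the stockroom
    steps: list = field(default_factory=list)  # the process steps

wo = WorkOrder("WO-1001", "AXLE-20", 4)
wo.steps.append(RoutingStep(10, "SAW-1", "cut bar stock to length"))
wo.steps.append(RoutingStep(20, "LATHE-2", "turn and face ends"))
print(wo.order_no, wo.part_no, wo.qty_released, len(wo.steps))
```

In a real system these would be rows in relational tables keyed by the work order number, with the steps normally copied from the part’s routing; the dataclass version is just the smallest way to show the shape of the data.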
Work Order as Paperwork
But again, simply stated, most people who have worked in a factory, when they hear or think “work order,” think of the paperwork that travels along with the parts through the factory, the kind that prompts remarks like, “this? oh, this is the shop paperwork for those axles we need to make today; it goes with those 4 parts in that bin over there at my work center; i was just reviewing the special notes and process instructions before beginning to set up the job to hammer on them.” It can be a single piece of paper, or a little stack of 2, 3, 5 or more sheets of computer printout paper. Sometimes it’s computer printout paper attached to a sheet of stiffer, cardboard-like paper. Often the paper is placed into an envelope. Sometimes it’s a big manila-like envelope with spaces on the front to write in and one of those little twisty things to close it. More often, it’s a see-through plastic envelope or document holder/protector that holds the paper. The paper itself can be just one sheet, but it can also be more, depending on how the individuals who most recently organized or reorganized that factory set up the way information, paperwork, and parts flow around in it. In aerospace, defense, medical, or other sensitive environments, there can be one or more signature areas on the form for machine operators, supervisors, engineers, or production auditors.
Paperless Shop Paperwork
That’s if the factory hasn’t gone paperless using some combination of bar codes and bar code scanners, or even RFID (radio frequency identification), plus maybe having process instructions and drawings and other needed information in an image management system that lets this technical information be viewed through a computer monitor. There are a lot of options for how to create “shop paperwork” packages, or the “paperless” versions of the paperwork.
Understanding the Basics: Focus First on the Paperwork Era
I’m not going to further complicate an already somewhat complicated issue by setting the discussion in any of the “let’s get rid of the wasteful non-value-added paperwork and related posting transactions” eras of “shop floor paperwork/administration.” The issues I’m bringing out will come out most clearly from discussing them in the “paperwork,” rather than the “paperless,” era of in-shop information and parts flow. Once the issues are clear in the “paperwork” era, it will be relatively easy to translate them into the various degrees of “paperless-ness” that are out there.
There are SO MANY Different Situations
As I was scanning through my memory of factories I’ve been in long enough to get into shop information and parts flow details …
[ … that’s about, let’s see, bp cs ci, it fm, rot, rock, maybe glass, at least 8, maybe more … not a lot by the standards of some manufacturing-related people, but i tend to get more than most people get from a single data point/situation; while I don’t claim to be more expert than anyone else, I do claim to know at least a little bit … but that’s not really the relevant perspective … plant visits for various reasons … detailed freebie phone consults to help toc-oriented pals get to a next step … factory presentations in various conference forums, listening carefully and taking notes … my effective mental inside-the-factory database is probably more like about 100 factory situations, with a wide range of plant structure, product structure, and market/industry position situations … the grasp of the shop floor paperwork probably really comes from just 1 fan factory mainly, another steel factory, two pump factories, and probably an oil seal factory, where I really had the opportunity and reason to dig into and reflect on the nature, sources, and uses of the shop paperwork … like a lot of things, once you really get into something at the right level in the right way, you can “see” the next 100 or so and know everything you need to know from asking a few of the right questions … shop paperwork is like that … ]
… to select an entry point for the discussion of the paperwork … like follow an order through the plant … or start with the view of one machine work center operator … or take the viewpoint of a production control dispatcher where there is one …
(a lot of the production control employees got swept away in simpler non-defense/signature non-custom-engineered-product “repetitive” “flow” plants in the “eliminate non-value-added activity” crusade, along with a lot of status posting, work orders, finished parts stocks, and stockrooms) …
… I was reminded that, although there are many things that all manufacturing companies have in common (among many other things, they all have to get orders, buy parts, build stuff, and a LOT more), there are still SO MANY different combinations of product types, plant types, systems types, cell vs. flow types, variable raw material quality, outside services or not, dependent setups or not, special tooling, alternative routings, functionally-equivalent parts, “sometimes make and sometimes buy” parts, ways of dealing with purchased parts and raw materials and finished manufactured parts, and, of course, different ways of dealing with the shop “paperwork” information flow. What they all have in common makes it manageable to get into any of them and join them in doing good things, but the many ways they differ are making me slow down a little to select some efficient way to get at and present the important parts of the issues.
Time For A New Blog Page
This page is getting a little long. I think we’ll pick up the discussion over on a new Inventory Systems II blog page …