Inventory Systems II

The previous page, Inventory Systems, started to get a little long, so we’ll pick up the discussion over here for a while.  If you like, you can jump over some of the drafting/verbalization to one of the main conclusions arrived at early on about Haystack-compatible database systems by clicking here (parts file/table changes and new process results file/table) or here (where work order functions go), or some later conclusions about Haystack “what if” modules by clicking here (only real money flows allowed) or here (5 timelines per entity: TVA, I-m, I-p&e, OE, and Cumulative Cash).

here are links to some other pretty good stuff that showed up along the way …

. like the products G & H “full cost” exercise dedicated to my pal, john . (which has become the already-legendary and soon-to-be-classic exciting case study, “Nuggets, Incorporated”).

. working on now . recap of new inventory specs so far . more conclusions about new inventory system specs and what happens to traditional work orders .

. using schedule history (past schedules) as expressions of plant capacity .

. clean slate . the great toc haystack vs jit showdown in the old midwest . using haystack “identify” and “subordination” processes (skipping over “exploit”/drum) for “flow scheduling” . how come database suppliers didn’t help? . was there an “open” perpetual SBDSds memory object spec? (if not, should create one now).

. how jit worked/works . some timeline stuff of jit, toyota production system, ohno .

. mother nature . father time .

. “fully-allocated product costs” needed for pricing decisions? . “toc full cost” is born! . toc community intellectual property (trademarks and service marks) . leadership statements . revolutions in accounting? . 6 homework assignments for all toc people: allocation and variances . cost accounting “hollows out” a major mfg company . analysis of variance analysis . TVA and other internal cash-like money flows . EBITDA and other external cash-like money flows . Dealing with Physical/Real Things vs. Financial/Fictional “Depreciation” .

. dealing with “I” (first pass) . dealing with “I” (second pass) . “relative ROI” vs. “absolute ROI” . #done . #rightnow . #socraticMethod .

#########################################################

Jan 12, 2011 – 4:10 pm – A foot and a half of snow so far and no signs of stopping yet here in the northeast US. [note 46]

Well, when I was creating this new page, the first step was to put a title on it.  I started just to type, “Inventory Systems II,” to match the previous page and I thought, wait, has this discussion really been about … “inventory systems”?  It’s been everywhere from “What TOC is” to “What TOC isn’t” to “CLR and TOC logic-tree thinking processes” to “OPT/i2, drum-buffer-rope, dynamic buffering and Haystack Syndrome” to “activity-based costing” to “TOC TVA” to “discounted cash flow and net present value analyses complete with Ibbotson-Sinquefield debt/equity risk-adjusted hurdleRates/capitalCharges” and all the way back through “EF Codd / Cincom relational database tables and principles” to “work orders” and “Parts Master and Routings and Bills of Material files” and “factory shop floor paperwork.”  Whew! [note 1]

So, as I started this page, I was thinking, “Is this discussion about inventory systems or not?  Has it been?  Is it now?”

The answer is, yes, to all three.  The discussion has been about inventory systems the whole time.  It’s been about what the inventory systems element within a manufacturing company should be like if it’s to be the Mother Nature-indicated part (vs. man-made partially-arbitrary partially-unnecessarily-compromised conceptual part) [Note 54] of the overall elegant, comprehensive, and effective natural solution for manufacturing enterprise systems and management that applying TOC’s expression of physics to manufacturing seeks to create.

The Original Issue — in the 90s, among other conceptual loose ends related to TOC, I was pretty sure there was a simpler and more natural way to design an inventory system that would make a lot of important things easier, and I was also pretty sure that this was related to the idea that the (what I now call) SBDS data set object in speedy main memory needed to be perpetual (maintained real-time continuously, not created on a batch basis).

I don’t remember what recently (a few days ago) got me to thinking again about inventory systems, but I do remember that, at the end of the 90s, before other projects beckoned, I was thinking I should probably write three more TOC books:  (1) a book on systems integration and implementation issues for Haystack-compatible systems (operations-level nitty-gritty stuff that worked out the new inventory system needed to replace the system based on traditional open work orders, the programs needed in standard database systems to initially create the Haystack data set without requiring custom programming to extract and massage the data, the transactions needed in standard manufacturing systems to maintain the data set, etc.), (2) a book bringing TOC, the TVA financial management system, and the rest of Haystack in from what I thought of as the viewpoint of the CFO, CEO, corporate board, “wall street” stock analyst (most of them are not actually on Wall Street or even in New York City anymore), Stern-Stewart and their EVA fans, and corporate-finance-level-savvy group-, division-, and factory-level controllers (discussed on the previous page with the “Dream Team” combination of authors), and (3) a book to verbalize my take on the nature and use of the TOC thinking processes.  In all three cases, the writing would verbalize and check what I already thought I knew, add what I’d learn by studying the interpretations of others as I was writing/thinking, and add whatever new stuff I’d inevitably invent/discover during the thinking/writing process.

Part of the reason for book #1 was, while I knew Haystack-compatible systems worked (I knew because Bob Vornlocker and I led the first successful implementation of the first such system at ITT AC Pump in April 1991), I also knew that one of the toughest parts of that job was dealing with in-process inventory, something that isn’t intrinsically very complicated, but is made complicated by how traditional systems organize and handle WIP and work order data.

Another reason for book #1 was I knew that there should be a perpetual something in memory that was continuously updated (vs. the really really really awkward process of getting the data needed into memory every once in a while, on a batch basis, then running a converter program, and then having to deal with the fact that things had changed in the factory since the batch data run was started by the time you get the new schedule to the floor).

Also, something perpetual needed to be in memory, and updated by standard system transactions, if a whole lot of really useful practical little incremental decisions and actions were going to be made clear, easy, and easy to reflect in the schedules … things like, oh, lots of things, but one for example, “capacity-based order promising” of new orders into constraint/drum “gaps”, or freeing up enough capacity by “what if” off-loading to another resource or “what if” buy vs. make for a while to make it ok to accept a big new order.  A lot of what-ifs at the operational level.
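Just to make that kind of operational “what if” concrete, here’s a minimal sketch in Python (with made-up function and field names; nothing here comes from a real Haystack or BAS system) of checking whether a new order’s constraint-time requirement fits into open gaps on the drum before its latest allowable start:

```python
# Hypothetical sketch of "capacity-based order promising" against drum gaps.
# The drum schedule is a list of (start_hour, end_hour) blocks already
# committed on the constraint resource within the scheduling horizon.

def find_drum_gaps(drum_blocks, horizon_end):
    """Return open (start, end) gaps on the constraint up to horizon_end."""
    gaps, cursor = [], 0.0
    for start, end in sorted(drum_blocks):
        if start > cursor:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < horizon_end:
        gaps.append((cursor, horizon_end))
    return gaps

def can_promise(order_constraint_hours, drum_blocks, latest_start):
    """Is there enough open constraint time before latest_start for this order?"""
    open_hours = sum(min(end, latest_start) - start
                     for start, end in find_drum_gaps(drum_blocks, latest_start)
                     if start < latest_start)
    return open_hours >= order_constraint_hours

# Example: the drum is booked hours 0-16 and 24-40; a 6-hour order must start by hour 30.
print(can_promise(6.0, [(0, 16), (24, 40)], latest_start=30.0))  # True (the 16-24 gap)
```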

another thing was working out data system needs and working through example uses of ‘what if’ at simulation levels.


schedule/capacity history – working out useful housekeeping things like saving schedules that worked so a database of expressions of plant capacity existed.  like, “hey, susan, do you think we can shift mix to 20/30/40/10 percent on products x,t,r, and s?”  “well, steve, we did have that schedule for, when was that, week, i think, hold on, let me look at some of the old schedules in my Haystack workstation” “looking for a needle in a …?” “yeah, in a Haystack. you going to get a new joke someday, stevie?” “not likely” “that’s what i thought … ok, schedule 2010-04 was in late january 2010 and it looks like, hold on, looks like we had 2-1/2 shifts on in stamping, some overtime in milling, but we had 15/30/40/15 going for, hold on, looks like about 6 weeks” “ok, thanks”  “no problem, big guy, anytime” … … … … … “susan, steve again”  “hi. what’s up?”  “do you know when we made that upgrade to the framistat machine we use on products x and t?” “hang on, i’ll check. you wondering about that jan 2010 mix schedule?”  “yes, i’m wondering if we were getting that 15 percent x and 30 on t with the old or new machine.  the new one cost us plenty, but cut setups and run-times” “i remember that one.  we talked ourselves into busting our capital budget to get setups down, but it doesn’t pay for itself until our kids retire?” “that’s the one, milady.” “thought so.  ok, that upgrade was finished in dec 2009.  that mix i gave you should be a good guide.  just remember, if it matters to what you’re doing, we were still buying the gilhooly outside and sending the mower axles out for hammering” “right.  knew that.  but thanks for the reminder. ciao.” “no prob, guy.  keep in touch”
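Since we’re imagining Susan pulling up old schedules, here’s a minimal sketch of the idea of saving past schedules as expressions of plant capacity and searching them by product mix.  All the field names and numbers below are made up for illustration:

```python
# Hypothetical sketch: past schedules saved as "expressions of plant capacity."
# Each saved schedule records the product mix it ran and the resource settings
# (shifts, overtime) in effect, so a planner can ask "when did we last run a
# mix close to X?"  Field names are illustrative only.

schedule_history = [
    {"schedule_id": "2010-04", "week_start": "2010-01-25",
     "mix_pct": {"x": 15, "t": 30, "r": 40, "s": 15},
     "notes": "2.5 shifts in stamping, overtime in milling"},
    {"schedule_id": "2010-31", "week_start": "2010-08-02",
     "mix_pct": {"x": 10, "t": 25, "r": 50, "s": 15},
     "notes": "normal shifts"},
]

def closest_mix(history, target_mix):
    """Return the saved schedule whose product mix is closest to target_mix."""
    def distance(sched):
        return sum(abs(sched["mix_pct"].get(p, 0) - pct)
                   for p, pct in target_mix.items())
    return min(history, key=distance)

print(closest_mix(schedule_history, {"x": 20, "t": 30, "r": 40, "s": 10}))
```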

And another reason for book #1 was all the custom programming needed to convert bill of material structure files and routing files into “networks” of “stations” or “process steps” in that memory object.

All of these things needed to be worked out into a book that played a role for Haystack-compatible systems like I think Joe Orlicky’s book did for MRP systems programmers.  The book, The Haystack Syndrome, has been around since 1990.  Bob Stein wrote a good book in ’97 or so in the Apics CM series; I advised him as he wrote it and wrote the foreword.  It’s a good book.  But I told Drew G and Bob at the time that what we all had worked out at the time wasn’t what we would eventually need to get it right.  That’s why, even as I was supporting Bob’s book that put Haystack‘s ideas in a more standard presentation style and added a lot of implementation smarts and Bob’s other ideas, I was already saying another one would eventually be needed.  I think it’s still needed today.

Inventory System Was Just One of Those Loose Ends

So I’ve had that inventory system thing in the back of my mind and … what?  no, i’m not kidding … after all these years? … yeah, i know.  doggedly persistent … i suppose that’s a good thing.


Recap on Inventory Systems … Turns Into Finally Nailing It Down

Here are the things we said we should do so far:

Start referring to the Parts Master File as the Parts Table.  Same for the other standard manufacturing-related files.

Start referring to the processing steps in routings as “tasks.”  Actually, maybe I don’t need that one.  I just remembered the term is usually “operation,” which is pretty clear and, I think, not the same as something else.  “Task” or “operation” seem ok.  I don’t really like “station” from the BAS/GoalSystem, which seems like “work station,” which is a resource.  So, “operation” for now.  Except I sort of like “task” as crisp.  Somebody else will decide these things anyway.  Moving on …

WIP Tracking Part of “Work Order” Function Goes Away to Parts Table

Keep track of WIP in the Parts table.  Items of WIP are parts, so follow relational system principles and keep all the similar things in one table: all parts-like items go in the Parts table.  So all parts are now only in the Parts table, not some in a Parts table and some in a Work Orders file/table.

Add two rows to the Parts table for each of the operations in a part’s routing — one row for parts starting each operation and one row for parts completing each operation.  An attractive design option seems to be to create the new rows for the operations on a temporary basis when the parts are issued, then delete the unused rows when the parts pass beyond them.
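A minimal sketch of what those temporary rows might look like, assuming a simple key-value Parts table and made-up part-number and transaction names (this is illustrative only, not a spec):

```python
# A toy, in-memory stand-in for the Parts table to illustrate the temporary
# per-operation rows.  Row keys, field names, and transaction names are all
# made up for this sketch.

parts_table = {}   # key: generated part number, value: row dict

def release_batch(base_part, batch_no, qty, routing_ops):
    """Shop release: create temporary rows for every operation in the routing."""
    for op in routing_ops:
        for phase in ("started", "completed"):
            parts_table[f"{base_part}-op{op:03d}-{phase}"] = {
                "batch": batch_no, "qty_on_hand": 0, "temporary": True}
    # released material queues up as "started" at the first operation
    parts_table[f"{base_part}-op{routing_ops[0]:03d}-started"]["qty_on_hand"] = qty

def post_completion(base_part, op, qty):
    """Parts-complete transaction: move qty to 'completed at op'; drop emptied rows."""
    started = f"{base_part}-op{op:03d}-started"
    completed = f"{base_part}-op{op:03d}-completed"
    parts_table[started]["qty_on_hand"] -= qty
    parts_table[completed]["qty_on_hand"] += qty
    if parts_table[started]["qty_on_hand"] == 0:
        del parts_table[started]            # the batch has passed this point

def post_start(base_part, prev_op, op, qty):
    """Operation-start transaction: pull qty from the previous op's completed row."""
    parts_table[f"{base_part}-op{prev_op:03d}-completed"]["qty_on_hand"] -= qty
    parts_table[f"{base_part}-op{op:03d}-started"]["qty_on_hand"] += qty

release_batch("AX2339-005", "B-0117", qty=20, routing_ops=[10, 20, 30])
post_completion("AX2339-005", op=10, qty=20)
post_start("AX2339-005", prev_op=10, op=20, qty=20)
print(sorted(parts_table))   # the op010-started row is gone; WIP now shows at op 20
```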

WIP Tracking In Parts Table Handy for Required Tools and other Operation requirements

Additional rows in the Parts table can be made for special tooling or other requirements.  These can be connected into the appropriate processing steps with arrows like any other “parts” or “requirements.”

WIP Tracking in Parts Table Allows Huge Improvement in Identifying and Later Reusing “Scrap” … “Scrap” is not always “Trash,” especially in a so-called “Process Industry” Factory

Additional rows can be created for scrap at a work station.  This idea’s new, but why not?  What else are you going to do with a scrap part?  Write a note on it, tape it on or put a rubber band around it, and put it in a cardboard box somewhere?  Well, I guess, yes, we’ll do that, but we may as well have a nice clear “part” number, like AX2339-005-scrap-2011-01-13-0002, to write on the piece of paper under the rubber band, so that part number can refer to a ScrapNotes table.  That way, if somebody needs a 3/4 inch diameter Haynes alloy 46 rod in a hurry, for an order or a new product prototype for a potential big customer, somebody can go into the system and see, from the Parts table row created by our Haystack-compatible system’s Scrap Transaction, that our AX2339-005-scrap-2011-01-13-0002 part is in bin number 54 in stock area 3.  (Yeah, I know, all the TQ and JIT and process improvement non-value-added activity busters are standing on their chairs and shouting that scrap never happens in factories they advise, but, truth be told, scrap happens when highly-engineered custom products push tolerances, or mistakes get made, even if rarely.)  Clicking over to the ScrapNotes table shows it’s a 1 inch diameter Haynes alloy 46 rod that can be worked down by a machinist to exactly what is needed and save the day — and the sale and the customer!  I like this new Haystack-compatible inventory system a lot already.  [big cheshire cat smile]
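Here’s a minimal sketch of what that Scrap Transaction idea could look like, with a made-up part-number pattern and a toy ScrapNotes table (all names are hypothetical):

```python
# Hypothetical sketch of the Scrap Transaction idea: scrapping a piece creates
# a real, findable "part" in the Parts table plus a free-text ScrapNotes row,
# instead of a rubber-banded note in a cardboard box.  All names illustrative.
from datetime import date

parts_table, scrap_notes = {}, {}

def post_scrap(base_part, op, seq, location, note, when=None):
    """Create a scrap part number, a Parts row with its location, and a note."""
    when = when or date.today()
    scrap_pn = f"{base_part}-scrap-{when.isoformat()}-{seq:04d}"
    parts_table[scrap_pn] = {"qty_on_hand": 1, "scrapped_at_op": op,
                             "location": location}
    scrap_notes[scrap_pn] = note
    return scrap_pn

pn = post_scrap("AX2339-005", op=30, seq=2,
                location={"stock_area": 3, "bin": 54},
                note="1 in. dia Haynes alloy rod, oversize OD, otherwise sound",
                when=date(2011, 1, 13))

# Later: "anybody have Haynes alloy rod stock we can rework in a hurry?"
hits = {p: scrap_notes[p] for p in parts_table
        if "scrap" in p and "Haynes" in scrap_notes.get(p, "")}
print(pn, hits)
```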

WIP Tracking in Parts Table Facilitates Batch Splitting, Batch Overlapping, and Returning Partially-Processed Parts to Stock Upon Order Cancellations

With the temporary rows for the routing operations in-process and completed in the Parts table, we can now deal easily with the situation where we’ve split a 20-piece batch into 10 and 10 at operation 1, overlapped the 10 across the four steps of this routing, and suddenly the customer cancels this order and places a different order, paying us handsomely for being flexible.  We either decide to finish them all, because we know we want them and they won’t chew up much capacity to finish, or we know we can’t use the finished part anyway and just identify the 10 overlapped parts as what they are — in process at step 3, one of those; finished step 3, 5 of those; finished step 4, 4 of those — which accounts for all 10 from the batch split.  Mark them all clearly somehow and take them back to stock with notations on the paper and/or in the system, probably in the system, and we’re good.  Hey, these Haystack-compatible inventory systems are great!

“Work Orders” Maybe Just Go Away?

Consider the possibility that the Work Order concept and file just go away.  They probably do.

We’re not talking about “work order” in the sense of shop paperwork here.  That either goes away or it doesn’t depending on a lot of factors that would take a chapter of a book to work through at the right level of detail.  We’re talking here about the logical and computer entity called the traditional “work order.”

Let’s see if it can go away.  Apart from WIP tracking, what is it used for?

One thing is inventory valuation.  The financial module can get the WIP figures the Controller needs for inventory valuation for GAAP, corporate, and tax figures at period end from the new Haystack-compatible inventory system’s Parts table.  If labor is needed for GAAP and/or tax and/or corporate inventory valuation (but remember we don’t want labor in our parts cost for TVC, Totally Variable Costs, for use in Sales Price – TVC = TVA, for decisions), the financial module’s inventory valuation program can get parts units from the Parts table, process times from the Routings table, and labor rates from somewhere in the financial module, and no traditional Work Orders records, tables, or files are needed there either.  Where else?  Oh, capturing actuals.  Right.
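Before moving on to actuals, a minimal sketch of that valuation idea: WIP units from the Parts table’s in-process rows, times from Routings, rates from the financial module, and no Work Order file anywhere.  Field names and numbers are made up:

```python
# Hypothetical sketch of period-end WIP valuation without a Work Order file:
# units come from the Parts table's in-process rows, per-unit times from the
# Routings table, and rates from the financial module.  Names illustrative,
# and the cumulative-hours treatment is deliberately rough.

parts_rows = [   # in-process rows (see the temporary-row sketch above)
    {"part": "AX2339-005", "op": 20, "phase": "completed", "qty": 12},
    {"part": "AX2339-005", "op": 30, "phase": "started",   "qty": 8},
]
routings = {     # (part, op): cumulative run hours per unit through that op
    ("AX2339-005", 20): 0.6,
    ("AX2339-005", 30): 0.9,
}
material_cost = {"AX2339-005": 14.50}   # TVC-style material-only cost per unit
labor_rate = 32.0                        # only if GAAP/tax valuation needs labor

def wip_value(include_labor=False):
    total = 0.0
    for row in parts_rows:
        unit = material_cost[row["part"]]
        if include_labor:
            unit += labor_rate * routings[(row["part"], row["op"])]
        total += unit * row["qty"]
    return round(total, 2)

print(wip_value(), wip_value(include_labor=True))
```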

Capturing Actual Setup and Run Times … Another “Work Order” Function Goes Away to Somewhere Else, Someplace New This Time

Capturing actual setup and process times for identifying training needs, keeping setup and unit process times up to date, maybe gathering data about dependent setups … ok, this one may keep something like the old work order alive.  Collecting actuals on pieces and batches.  Well, if we are going to have a history file of this kind, it should be routing-operation level in granularity.  So the Haystack-compatible system gets an Actual Times table, with data elements something like: part number, standard/expected/planning setup time and run time per part, and actual setup and run time.  These times get captured by the worker at the work center telling the computer when setup starts, when setup finishes, and when x number of parts are complete, like existing systems that have transactions that write these actuals to the work order, which has the standard/planned times written in from the routing file.  yeah, i think this part of the work order stays around.  i’ll leave it to people writing technical papers at apics to work through whether a released batch of 10 should have a common “sequence number” (probably), like the old work order did with its “work order number,” that posts into the new Actual Operation Times table that captures the actuals for monitoring performance of workers, spotting trends in incoming materials, tools problems, continuous improvement, keeping standards in the ballpark of the actuals, and all the other reasons for capturing actual setup and run times.  Let’s see.  How will this Actual Operation Times table differ from the old Work Orders table?   Well, one big difference is the old Work Orders table had standard times, actual times (if people posted actuals), and WIP.  Our Actual Operation Times table won’t have WIP.  Maybe the Actual Operation Times table is really an Operation Results table with the actual setup time, actual run time for some number of units, standard setup time (it can change, so I think we need to capture it from the routing table), standard time per part, number of units completed, and number of units scrapped in that operation.  So I think that means the Work Order table goes away and its functions are divided into the Haystack-compatible system’s exciting and sexy revved-up new Parts table and its also very interesting and attractive new Operation Results table.
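A minimal sketch of what such an Operation Results (or Actual Operation Times) table and its posting transactions might look like, with invented field names, keyed by batch number and operation:

```python
# Hypothetical sketch of the Operation Results table: one row per routing
# operation per released batch, holding the standards copied at release time
# and the actuals posted from the floor.  Field names are illustrative.
from datetime import datetime

operation_results = {}   # key: (batch_no, op)

def open_operation_result(batch_no, part, op, std_setup_hr, std_per_part_hr):
    operation_results[(batch_no, op)] = {
        "part": part, "std_setup_hr": std_setup_hr,
        "std_per_part_hr": std_per_part_hr,
        "setup_start": None, "setup_end": None,
        "qty_complete": 0, "qty_scrap": 0}

def post_setup(batch_no, op, start, end):
    row = operation_results[(batch_no, op)]
    row["setup_start"], row["setup_end"] = start, end

def post_pieces(batch_no, op, complete, scrap=0):
    row = operation_results[(batch_no, op)]
    row["qty_complete"] += complete
    row["qty_scrap"] += scrap

open_operation_result("B-0117", "AX2339-005", op=20,
                      std_setup_hr=0.5, std_per_part_hr=0.05)
post_setup("B-0117", 20, datetime(2011, 1, 13, 8, 0), datetime(2011, 1, 13, 8, 40))
post_pieces("B-0117", 20, complete=19, scrap=1)
print(operation_results[("B-0117", 20)])
```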

Which brings us back to the shop paperwork, but that’s going to be easy because we can have the same paperwork if we want to, all the info … hm … except the work order number itself … will still be coming from the Parts table … hm … that sequence number we needed in the Operations Results file is the old Work Order number … it turns out, i’m beginning to think, that when that Release Parts to the Shop transaction fires (from previous page), it needs to create, not only temporary rows in the Parts table for each operation in the routing, but also rows in the Operations Results table along with the … well, it’s really the old Work Order number, but that’s not the most clear term, and, since we’re changing stuff to be more intrinsic/natural/selfExplanatory/efCoddRelational-like, I’m thinking the old Work Order number might become the new Batch Number.  I think that works.

update:  i still think it works, but i’m remembering one of the operational issues with a Haystack system … the scheduling process creates batches related to order line items for net requirements after finished stocks and in-process parts are allocated … there, i think, will need to be three parallel identities for in-process parts … one is the part number each part is … one is the batch number it’s travelling with … the other is the sales order line item or finished stock order the most recent allocation process has assigned to the parts in the in-process batches … the part number will change as the part goes through the processing steps … the batch number will stay the same unless there’s a split … the salesOrStock order of the parts in a batch will change as the scheduling changes it …

Because that’s what it is.  It’s not “work,” it’s a “batch of parts.”  If the batch is split, create a new batch number, maybe based on the pre-split batch number, and decrement the quantity on the pre-split batch.  I think that works.
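A minimal sketch of that split rule (child batch number derived from the parent, parent quantity decremented), with made-up batch numbers:

```python
# Hypothetical sketch of a batch split: create a child batch number derived
# from the pre-split batch and decrement the parent's quantity.  Illustrative.

batches = {"B-0117": {"part": "AX2339-005", "qty": 20, "split_count": 0}}

def split_batch(batch_no, qty_to_split):
    parent = batches[batch_no]
    assert 0 < qty_to_split < parent["qty"], "split must leave both batches non-empty"
    parent["split_count"] += 1
    child_no = f"{batch_no}.{parent['split_count']}"   # e.g. B-0117.1
    batches[child_no] = {"part": parent["part"], "qty": qty_to_split,
                         "split_count": 0}
    parent["qty"] -= qty_to_split
    return child_no

child = split_batch("B-0117", 10)
print(child, batches)   # B-0117 now qty 10, B-0117.1 qty 10
```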

#####################################
#####################################
#####################################
More Conclusions About What Happens to “Work Orders” and Inventory Systems

The new specifications for Haystack-compatible manufacturing database systems will also include the following:

1.  The traditional Work Order file, the “Work Order” concept and term, and the Work Order number all go away.

2.  The traditional Work Order file/table’s *rows* (database records) for each processing step for a part, copied from the Routings file/table for each new work order and work order number for a part, will go to two places:  (1) the part of the rows supporting the WIP tracking function will go to the Haystack-compatible system’s new Parts file/table, the new version of the traditional Parts Master file/table and (2) the parts of the rows supporting all other traditional Work Order file/table functions will go to a new Process Results file/table.

3.  The Haystack-compatible system’s new Parts file/table, replacing the old Parts Master file/table, will do everything it did before, but also take over the traditional Work Order file/table’s role of supporting WIP posting and tracking functions.  This will be done with the rows used by traditional Work Order records, but also by using new additional temporary and permanent rows described on the previous page and, I think, mostly repeated in the recap above on this page (i.e., rows for when parts begin processing at a routing step as well as rows for when parts are completed, plus new rows for tooling, drawings, and other “parts-like” things needed to start processing at a process routing step).

4.  The Haystack-compatible system’s new Process Results file/table (referred to earlier as a new Operation Results file/table) will be used for all the functions of the traditional Work Order file/table other than WIP tracking.  This includes (1) recording the Batch Number, which replaces the traditional Work Order number and is created at the time of shop material release, (2) capturing and storing, for each step in a part’s routing, the current work center and current time standards for setup time and time per part/batch from the Routings file/table, (3) electronically (vs. on paper) capturing and storing the “actuals” at each processing step, including setup time, time per part/batch, part completions, and “scrap,” and (4) generating shop paperwork, if used in a specific location.

5.  The traditional Work Order file/table’s function of providing data for generating shop floor paperwork (if it’s not a paperless operation) to accompany in-process parts is taken over (I think) by some system-design-specific combination of the Haystack-compatible system’s new Parts table (maybe), Routings table (maybe, for some special instructions and links to more), and new Process Results table (probably, for steps, work centers, current time standards for setup and time per part/batch, and quantity to be produced from the release process … something like that).

6.  Transactions for shop material release, operation start, parts complete at operation, and parts scrap at operation all update both the tables on the computer hard disk and the Perpetual Schedule-Based Decision Support (SBDS) data set (pSBDSds) object in computer main memory.

7.  Haystack-compatible database systems will have interface programs that will create the Perpetual Schedule-Based Decision Support (SBDS) data set (pSBDSds) object in computer main memory.  A specification will be needed for this object.  The programs that create the pSBDSds object take input from the following tables and sources:  Sales Orders, the new Parts table, Bill of Material, Routings, Shop Calendar, and maybe other tables and sources (i.e., need to think this through again).  The programs will create the object that will be a logical network representation in computer main memory of the work that the factory needs to do within a selected scheduling horizon and the resources of various kinds available to support the work.  This object will be used for various query, factory and order due date scheduling, buffer management shop floor control, and “what if” decision-support purposes by workstation applications developed to support different roles in the factory.

8.  Haystack-compatible database systems will have new versions of traditional manufacturing systems transactions that maintain the Perpetual Schedule-Based Decision Support (SBDS) data set (pSBDSds) object in computer main memory.  [Note 63]
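As a minimal, purely illustrative sketch of items 6 through 8 (not a spec, and certainly not the published pSBDSds specification that still needs to be written), here’s roughly what building the in-memory object from the standard tables, and one maintenance transaction against it, could look like:

```python
# A toy, purely illustrative construction of the Perpetual Schedule-Based
# Decision Support data set (pSBDSds) object from standard tables, plus one
# maintenance transaction.  All structures and names are invented.

def build_psbdsds(sales_orders, bom, routings, parts_wip, shop_calendar, horizon_end):
    """Return a toy in-memory network of the work due inside the horizon."""
    net = {"orders": [], "steps": [], "arrows": [], "resources": shop_calendar}
    for order in sales_orders:
        if order["due"] > horizon_end:
            continue                                # outside the horizon
        net["orders"].append(order)
        # flat product structure for the sketch: the item plus its components
        for component in [order["item"]] + bom.get(order["item"], []):
            prev = ("raw", component)               # raw/purchased material node
            for op in routings.get(component, []):
                node = (component, op["op"])
                net["steps"].append({"part": component, "op": op["op"],
                                     "work_center": op["work_center"],
                                     "wip": parts_wip.get(node, 0)})
                net["arrows"].append((prev, node))
                prev = node
            net["arrows"].append((prev, order["line_item"]))
    return net

def on_parts_completed(net, component, op, qty):
    """A maintenance transaction: after updating the disk tables, keep the
    in-memory object current too (items 6 and 8)."""
    for step in net["steps"]:
        if step["part"] == component and step["op"] == op:
            step["wip"] += qty

net = build_psbdsds(
    sales_orders=[{"line_item": "SO-88-1", "item": "PUMP-A", "due": 10}],
    bom={"PUMP-A": ["AX2339-005"]},
    routings={"PUMP-A": [{"op": 10, "work_center": "ASSY-1"}],
              "AX2339-005": [{"op": 10, "work_center": "MILL-2"}]},
    parts_wip={("AX2339-005", 10): 5},
    shop_calendar={"MILL-2": {"hours_per_day": 16}, "ASSY-1": {"hours_per_day": 8}},
    horizon_end=30)
on_parts_completed(net, "AX2339-005", 10, 3)
print(len(net["steps"]), len(net["arrows"]))
```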


Glad we got all of that straightened out.  : )

For my pals, Bob Stein and Drew Gierman:  These are the kinds of things I was talking about way back when in the late 90s, in the context of another TOC systems book still needed, as things nobody had worked out yet, but somebody needed to work out in order to get this Haystack thing done right and properly ushered in as the global manufacturing systems industry standard.

A Few Things for the “To-Do” List for All Manufacturing (or Enterprise, or Supply Chain) Systems Provider Companies

And some of this might actually be right.  Although, even if it isn’t completely right or yet the best set of system design decisions, the thought process is what needs to get done by PeopleSoft, Oracle Manufacturing, J. D. Edwards, CA (Computer Associates), Cincom Systems … my pal Karen Alber’s company, SAP … who else is out there as provider of manufacturing planning, scheduling, control and financial computer systems database systems these days? … my ITT/ApicsA&Dsig pal Lynn White’s company … I think that was SAP again … whoever bought the IBM Mapics software rights and installed base … all the manufacturing company database systems suppliers need to think through the changes to their data structures and transactions they need to make to support Haystack-compatible systems … and, while there will always be proprietary aspects and differences among companies at lower levels of implementation within the competing systems, the higher level systems structures and procedures should be public domain industry standards and Apics is the place to coordinate and locate the industry-standard part of the work.

Could IBM’s MRP Designers in the 50s/60s Have Done This Then?

What it means, by the way, is this: if, when I google in a moment to see when E. F. Codd’s relational database model became available — a model that, like Object Oriented Programming and TOC, is oriented around considering what the natural essence of something is and having concepts, procedures, in-memory objects, and systems reflect Mother Nature [note 54] — I find that Codd’s model was available when the MRP (little mrp, mrp I, materials requirements planning, vs. big MRP, MRP II, manufacturing resource planning) system architecture was defined, then the MRP people made an unnecessary mistake when they formed the work order file that took intrinsically parts-like data (work-in-process inventory quantities) out of the Parts file rows and put it with the released-batch information rows/file.

So let’s have a look at EF Codd’s, Cincom’s, and relational database management’s story in wikipedia.  Have to assume it’s there.

http://en.wikipedia.org/wiki/Edgar_F._Codd

Ok, it seems like the original MRP designers, who did their work before Codd’s first paper in 1970, just did what they thought was right and probably just copied what was done in the pre-computerized factory world, which was to track WIP and actuals on the “work order” shop paperwork that flowed through the factory with the parts.  Here’s from the Wikipedia link above on Dr. E. F. Codd: “In the 1960s and 1970s he worked out his theories of data arrangement, issuing his paper “A Relational Model of Data for Large Shared Data Banks” in 1970, after an internal IBM paper one year earlier.[4] To his disappointment, IBM proved slow to exploit his suggestions until commercial rivals started implementing them. Initially, IBM refused to implement the relational model in order to preserve revenue from IMS/DB. Codd then showed IBM customers the potential of the implementation of its model, and they in turn pressured IBM. Then IBM included in its Future Systems project a System R subproject — but put in charge of it developers who were not thoroughly familiar with Codd’s ideas, and isolated the team from Codd[citation needed]. As a result, they did not use Codd’s own Alpha language but created a non-relational one, SEQUEL. Even so, SEQUEL was so superior to pre-relational systems that it was copied, based on pre-launch papers presented at conferences, by Larry Ellison in his Oracle Database, which actually reached market before SQL/DS — due to the then-already proprietary status of the original name, SEQUEL had been renamed SQL. Codd continued to develop and extend his relational model, sometimes in collaboration with Chris Date. One of the normalized forms, the Boyce-Codd normal form, is named after him. Codd’s theorem, a result proven in his seminal work on the relational model, equates the expressive power of relational algebra and relational calculus (which, in essence, is equivalent to first-order logic). As the relational model started to become fashionable in the early 1980s,”

It also answers a question I had about the respective roles of Codd, IBM, Larry Ellison, and Oracle in the leadership concerning relational database management systems and the SQL query language. Hm … but what about Cincom? …  Cincom played an important role in the relational database and other aspects of software industry history.

Oops … just looked up Joe Orlicky’s book.  Published 1975.  Updated by my friend, George Plossl, in 1994.  Looks like IBM might have blown it.  Codd’s paper came out in 1970.  Let’s see when the MRP crusade was.

An online encyclopedia has a magazine article from Production & Inventory Management Journal (700+ words) with the line, “… Since the MRP crusade of the 70s.”  So the MRP crusade was in the 70s.  That’s when Goldratt was starting with OPT.

http://en.wikipedia.org/wiki/Material_requirements_planning

Wikipedia on mrp:  “Prior to MRP and before computers dominated the industry, reorder-point/reorder-quantity (ROP/ROQ) type methods like EOQ had been used in manufacturing and inventory management. In the 1960s, Joseph Orlicky studied the TOYOTA Manufacturing Program and developed Material Requirements Planning (MRP), and Oliver Wight and George Plossl then developed MRP into manufacturing resource planning (MRP II).[1]. Orlicky’s book is entitled The New Way of Life in Production and Inventory Management (1975). By 1975, MRP was implemented in 150 companies. This number had grown to about 8,000 by 1981. In the 1980s, Joe Orlicky’s MRP evolved into Oliver Wight’s manufacturing resource planning (MRP II) which brings master scheduling, rough-cut capacity planning, capacity requirements planning and other concepts to classical MRP. By 1989, about one third of the software industry was MRP II software sold to American industry ($1.2 billion worth of software).[2]”

google hit on orlicky.  wondering if he worked for IBM: “Not much about MRP appeared in print until 1975, when its principles and precepts were set down by Joseph Orlicky in the first edition of this book”

“Enter MRP. This was introduced in the US in 1960 by Dr Joseph Orlicky2, a Czech-American engineer working with IBM.” source: http://www.bbc.co.uk/dna/h2g2/A3488646

bbc says introduced in 1960 … ok, but book not written until 75.  working “with” IBM, not necessarily “in” IBM.  Seems Orlicky was an engineer who worked in a manufacturing company, maybe JI Case, and was one of those client people software companies like IBM work with to design systems.  i’m thinking it was systems architects and designers in IBM who blew it on Parts file vs Work Order File for in-process parts.

“Joe Orlicky and Ollie Wight lured American manufacturing onto the rocks with MRP in 1961, and we have been foundering on them ever since.  Originally, MRP stood for ‘Material Requirements Planning’.  A few years after Orlicky put the first system into a JI Case plant in Moline, Illinois, the system was tweaked to add capacity planning,”
http://www.evolvingexcellence.com/blog/2005/12/mrp_on_the_rock.html#ixzz1Atj7nTQW

Looks like early 60s, mrp gets started.  crusade in 70s.

This article’s good – http://www.leanlibrary.com/mrp_rip.htm

It’s clear Orlicky got JI Case going on MRP in 1960.  Wight and Plossl got Stanley Tool Works up shortly after.  They got an MRP crusade going at Apics in 71.

So what’s that mean?  Not sure.  Tired a little.

Well, I’m wondering if I’m making too big a thing of whether the IBM people who built the original MRP maybe could have, maybe should have, maybe would have put the wip data in the parts file instead of the work orders file.  And I think I’m getting too tired at the moment to think it over again now.

Real nice stuff came out on putting the traditional Work Order’s data into a new Parts table and Operation Results table in the emerging Haystack-compatible industry standard.

Next Day.  Later That Day.  Whatever.

Jan 13, 2011

“Monday Morning Quarterbacking” (over 50 years later) IBM’s 1950s/1960s MRP Architecture Design Decisions

For international readers, “Monday Morning Quarterbacking” is the inevitable American sports expression that means thinking about and talking about whether a decision already made could have been or should have been made differently.  The expression refers to the fact that a “quarterback” is the leader of an American football team who makes decisions in a football game on Sunday after which, on the following Monday morning, all the fans who love football (and love arguing with other people who also love arguing about football) express energetic opinions about what they would have done on Sunday afternoon if they were the quarterback.  Cute.  Like Eddie, Don, and Jim’s funny “throwdown” segment on VH1 Classic’s That Metal Show. I’m not sure how “throwdown” became a popular phrase in the American idiom.  Maybe a play on the similar sound, “showdown,” like in the gunfights in the American old west.  Or something deriving from the fun silliness of professional wrestling.  But what “throwdown” means is very similar to Monday morning quarterbacking, which is debating something enthusiastically that really doesn’t matter, but is an interesting and fun way to be involved in and participate in the hobby/subject/interest area.

Anyway, the “Monday morning quarterbacking” or “throwdown” issue that has arisen here is, “Did the MRP people at IBM back in the late 1950s and early 1960s unnecessarily blow it when they made the data architecture and procedure decisions concerning in-process parts (aka WIP, aka work-in-process inventories)?  And did the MRP systems designers within IBM blow it again after 1970 when E. F. Codd published his paper within IBM articulating the relational database model?  And did the MRP user experts blow it by not seeing the limitations of those decisions and forcing IBM and the MRP systems providers to change them?  And has the entire manufacturing and manufacturing systems world worldwide been fighting and struggling and creating unnecessarily restrictive schemes to work around the effects of these decisions, in the form of just-in-time, flow manufacturing, and excessive custom in-house programming for process industry and aero/defense/medical industries systems, rather than fixing the problem by making the changes that would bring systems more naturally in line with the 1990 Haystack concepts, which bring upper-level processes and architectures more naturally in line with actual manufacturing management realities?”

The answer to all of these questions is, yes, they all blew it.

No problem.  That has happened in every area of life.  People don’t see the right answer until they see it.  In the meantime, they do the other thing they think is right, or is in their best interest, or both.

So people who see it now can jump on it, make their systems better, get ahead of the competition, and — what’s that star trek phrase? — live well and prosper.  Those who don’t see it or think it’s wrong can hold what they got and see how customers vote with their systems purchasing dollars from here going forward.  Decisions, decisions.

Why do I say, yes, to all of those questions?

I’m writing this after I’ve thought about this a little while again today; after considering what Goldratt would say if he were here, getting to the answer before everybody else as usual because he’s been through the issues in depth several times before, beginning in the 70s; after considering what Tom Nies would say if he were here about his company’s experience with relational db issues; after considering what my pals at process industry plants (like Dave and the scheduling guy and Frank and newPres and oldPres at the stainless steel rolling mill factory) would say about their variations in quality at raw material and in-process levels, and complex dependent setup situations (like changing the chemical “pickle” bath depending on alloy, bath exhaustion, prior alloy), and necessary sequences (like wide-to-narrow rolling to avoid edge impression defects, which means only less wide roll “parts” can go next on a mill until the rollers are removed and resurfaced), and again the variation in resulting in-process characteristics (i.e., we would now call that “different parts” with “different part numbers”), and whether to make a “part” at one level of quality into a part at another level by using the “fine grinders” to take out defects or to meet the demand with footage from another roll in better condition; and after considering what my process industry SIG pal Ed Schuster and my pal Steve, the controller at the generic pharmaceutical factory that was my first manufacturing almost-a-client in march of 1989, would say if they were here about batch quantities in mix operations with time-per-batch steps vs. discrete parts with time-per-part, and about variations in mix output (different “parts” again) and batch tracking info rules.  And because this small data structure change would allow easy vs. very difficult solutions to all these problems (make some impossible things possible, some possible things practical, and some difficult things easy), and because the change could have been made in the 1950s and 1960s simply on the basis of “keeping parts stuff with parts” vs. “combining parts-like stuff with processing-step-like stuff” (they didn’t need Dr. E. F. Codd’s 1970 internal IBM “relational database” paper to avoid the mistake), my answer is yes, the IBMers who created the MRP systems structures in the 50s and 60s blew it.

Again … [this is an update, 1/15] … again … Why do I say yes to all of those questions?  … Because putting in-process “parts”, including partially-processed and non-spec-processed “parts” (aka “scrap”, and “scrap” is not always “trash”), into a “work order” file and not the “parts” file made “mrp” unnecessarily no good, or overly difficult, for makers of non-“discrete”, “process”, “full custom,” and/or “documentation-intensive” products, and unnecessarily too troublesome for “simple” and “repetitive” and “small” manufacturing operations.  Bet a buck that, between the early 60s and now, more than a few people in those niches excluded by IBM mrp’s parts and work order decisions, who were knowledgeable enough to see the problem — folks like process industry wiz Ed Schuster, ops prof Johnny Blackstone, independent-thinking George Plossl — brought it up and found it mostly not possible to alter the consensus, but only to create niche systems, due to the sheer weight of the various kinds of software development, installed base, prior education and certification, courseware, and textbooks — all that investment over a few decades in the original IBM decision.  Well, fine.  But that was then and now is now.  Time to change it.  Software that writes software can re-create all-new systems pretty quick.  Done deal.

afterword.  without the boldface type.  imagine that.  anyway, the manufacturing employee, advice, and systems community is full of good people who intend well.  so much of the confusion and mistakes came, as in any area of life, from the meanings of the words.  consider that the boldface type above says “mrp” excluded certain industry niches.  but “mrp” means “materials requirements planning.”  that phrase can be generic as in, “any company has to plan for materials as in raw materials and purchased parts,” which is how most of the “excluded” companies mentioned above use the specific software and procedure of Orlicky mrp, mrp I, “little mrp.”  the non-excluded companies — “discrete” “reasonably repetitive” — try to use the same software for shop floor scheduling and shop floor control.  Most of those then moved to JIT or drum-buffer-rope or flow.  So, if you’re new to this, or even if you’re not, and the words seem to get confusing, that’s because they often are.  Main thing is Parts table needs to be used for in-process stuff of all kinds when they are different “parts” and also for things like tooling or other special  requirements that have “parts like” characteristics (as in something needed for an operation producing a “part”).

What Was the Essential Point?

The question about IBM’s MRP systems designers in the 1950s and 1960s arises from the fact that the new inventory system specifications we’ve derived on these two “inventory systems” blog pages were derived in order to bring manufacturing database systems more naturally in line with Haystack-compatible systems, which were, in turn, designed to bring higher-level manufacturing system functions more naturally in line with the actual phenomena and tasks in manufacturing companies.  Those specifications may very well be both correct and a huge breakthrough.  They seem so.  So far.

Development Partners in Even Better Shape Now

It will be even nicer now than before to be one of those Goldratt Institute BAS development partner manufacturing companies that I used to tell my pals, Steve King and Frank and Judy and BUgal and Guy and Bob, about at and after Apics Middlesex Chapter (“sex is our middle name”, get it? Middlesex Chapter? Sex is our … what?  you got it the first time?  right …) meetings.  They get as many copies of Haystack SBDS workstation application modules as they want.  One for Purchasing, one for Sales Order Promising, one for Manufacturing Engineering, one for Product Engineering, one for Marketing, oh one for Production Scheduling, plenty for Workstation Operators, one for QC, oh one for Controller, one for General Manager/Pres, one for Ops Supervisor.  These days, any physical workstation becomes any one of the functional views just by clicking on a mouse and downloading the selected program/viewpoint/application from the company’s network and servers and logging in.  All software already paid for!  : )  Steve Terry at ci, Frank Dave cs, Maury Tom Tom Jack Pc and Prog at bp, and others I didn’t know.

Time passes …

Ok, Why Was That Small Change Such a Big Deal?

Time passes.  The sense of a really big improvement exists.  More time passes.  And the, “Can that really be right?” stuff starts.  Not a problem.  It’s healthy.  Self-checking.  Double checking.  Triple checking.  Looking at it again from time to time to see if some factor has arisen to consider.

Ok, so why was that seemingly-small difference in manufacturing data structure such a big deal?

Lots of reasons.

Reason number one … this is the same stuff, but entering it from a different direction than the direction we figured it out from … Reason number one … because by putting the in-process inventory tracking data “holders” (data elements within rows, of files or, later, within rows of tables) in the Work Order file (and later table), copying them from the Routings file/table, no “part numbers” were created to store (1) parts started at an operation step, (2) part …

[ maybe come back to this ]

A New Problem to Solve Later

Just making a note of this.

Problem just popped up … suppose a routing changes … like, as often desired, from 10 steps to three … or maybe 10 steps to one by creating a work “cell” for that family of parts … If we had a cancelled order for that batch of 20 parts that got split into two batches of 10 with one of them overlapped across the 6 steps of the routing and put the parts back in the stock area with their AX2394-005 type of part names, and posted them in the Parts file/table that way … what if the 6-step routing gets changed by manufacturing engineering to a 3-step routing? …

If I don’t solve it, someone writing and presenting a paper in an Apics forum can solve it and share the solution.  That will need to be done for, let’s see … for all the issues from inventory to scheduling to buffer management to shop paperwork and ops-level “what ifs” … that’s 58 thousand, five hundred, and thirty-two issues …  58,531 plus this one … hey, it’s a great opportunity to get into the game of noticing, solving, and publishing papers to share solutions about issues … as I wrote in my book: I’m white-washing a fence here, and it’s really fun!

A Good Thing to Know About Solving Problems

( … I already solved that last one … probably you did too … detailing the problem statement caused, as it often — but not always — does, the image of the solution to flash into mind … )

Another Inventory Systems Note

If we look at the custom computer systems that some of the so-called “process industry” factories have created for themselves with in-house programming or by contracting with outside custom programmers, we might see systems with inventory systems like we’ve discussed here.  Unless they’ve forced themselves to both use and work around the standard work order data structure to deal with their variable raw material and in-process material/part characteristics, where they often don’t know what “part” they have until somebody inspects or even tests it.  Steel rolling mills and semiconductor chip fabs might be good places to look.  Just for interest.  The specs we’ve developed are either correct or a good step in the right direction with a pretty clear thought process to take it the rest of the way.

Maybe Eddie and His Pals Can Help Us Go Over This Again In an Interesting Way

Eddie van Halen plays the guitar pretty good.  Maybe he knows a lot about TOC and inventory systems too.  Let’s look for clues in this YouTube video:

http://www.youtube.com/watch?v=gsqywc7fnqE

Yes, that was very helpful.

Layers Again

So we have several layers in our main memory object.  The first was the canvas, which was an image of the physical (vs. logical data) factory shop floor shown with raw materials on the left side, work centers in the middle, final assembly right of center, and finished product and shipping on the right.  Even if a real factory doesn’t flow that simply, it’s useful, in our minds and on the imaginary “canvas” we’re painting on in main memory, to think of it as simply left-to-right.

The second layer, the first layer of glass over the canvas, is our Perpetual Schedule-Based Decision Support data set memory object.  It is superimposed over the canvas on a first layer of glass.  We envision it as a network of shapes and arrows that goes from raw materials and purchased parts on the left, through the processing operations, to assembly, and the last arrows are from the assemblies to the sales and stock order line items on the right hand side.  This data set also has the in-process inventory quantities and locations (processing steps) and the quantities of the raw materials and purchased parts that are used in the active sales and stock order line items included in the active scheduling horizon threshold.  This data set also has the time each work center resource is available for scheduling.  This data set is created by a program that uses sales and stock order line items and their product identifiers to get information from the bill of material table, routings table, parts table, and schedule table to form this in-memory data set.

On the third layer will be the schedule.

On the fourth, we’ll create incremental decision scenarios.  Maybe complete new schedule.  Or smaller changes.

The first layer stays the same through it all.

The second layer is continually updated as new orders come in, raw materials and purchased parts arrive at receiving or are released to the shop, and parts progress through operations.  Batch orders are removed from the perpetual Schedule-Based Decision Support data set in memory when completed, but their results remain, with actual setup and run times and completed-vs.-scrap results, in the Operation Results and Parts tables.

hm … it’s clear the pSBDS data set should be perpetual.  first impulse is that the schedule (releases, drum batch sequence, adjusted order line item due dates, and constraint-fed assembly schedules) should be “perpetual” too, at least to some extent.

but maybe just replace schedules.  here’s another area for Apics members and ops professors to think about and write papers about because the right answer is going to be a lot of different things in different circumstances.

not sure about this … very rough … reconstructing something from a while ago … There’s the idea that I think Larry Shoe used for a really slick way to allow capacity-based order promising, having mostly real orders, but knowing demand for quick premium-priced emergency short-lead-time turns always existed, got nice profits promising into drum gaps and dummy orders, all supported with adequate protective capacity, … it brings a relatively long lead time element up like a forecast order, which it is, but, if the premium-priced short-short order doesn’t arrive, can let the drum batch slip … i think that’s pretty close to what they were doing and liking a lot … but, as i’m thinking about it, do they need dummy orders if they can just create a new incremental schedule … no, using a dummy stock order is needed … what’s the difference between that and just having a stock order at some level?  maybe the option to not do the later ops if the quick turn order doesn’t come … ask larry shoe … there’s a lot of experience and know how there …

The Haystack Scheduling Module begins to support the Scheduler in forming a schedule when the Scheduler initiates the Identify phase and process.  The Identify process takes each sales and stock order line item, in due date order (past due and nearest due dates first), and allocates stocks at finished and all intermediate levels back to raw materials and purchased parts.  Where stock is not available for a particular order, a batch order is created for the net requirement at each level until raw materials and purchased parts are reached.  If rm or pp are not available, a purchase order advisory is created.  This is done for every sales order and stock order line item.  The resulting batch orders are written onto the second pane of glass in main memory.
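A minimal sketch of that Identify logic (allocate stock, create batch orders only for net requirements, raise purchase advisories at the raw material level), using a toy two-level product structure and made-up item names:

```python
# Hypothetical sketch of the Identify pass: walk order line items in due-date
# order, allocate available stock at each level, and create batch orders (or
# purchase advisories) only for the net requirements.  Names are illustrative.

stock = {"FG-1": 3, "SUB-1": 5, "RM-1": 100}
bom = {"FG-1": [("SUB-1", 1)], "SUB-1": [("RM-1", 2)]}   # (component, qty per)
purchased = {"RM-1"}

batch_orders, purchase_advisories = [], []

def identify(item, qty_needed, order_id):
    allocated = min(stock.get(item, 0), qty_needed)
    stock[item] = stock.get(item, 0) - allocated
    net = qty_needed - allocated
    if net == 0:
        return
    if item in purchased:
        purchase_advisories.append({"item": item, "qty": net, "for": order_id})
        return
    batch_orders.append({"item": item, "qty": net, "for": order_id})
    for component, per in bom.get(item, []):
        identify(component, net * per, order_id)

# two line items, nearest due date first
for order_id, item, qty in [("SO-101", "FG-1", 8), ("SO-102", "FG-1", 4)]:
    identify(item, qty, order_id)

print(batch_orders)          # net-requirement batches, marked with the order that drove them
print(purchase_advisories)   # raw material shortfalls, if any
```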

Wow.  Eddie van Halen really knows this stuff!  Let’s watch that video again to see what else we can learn.

http://www.youtube.com/watch?v=gsqywc7fnqE

The batch orders on the second pane of glass are each marked with the sales order or stock order line item number that created them.  That is one of the main things that makes them different from “little mrp” (materials requirements planning, mrp I) “planned orders.”  Importantly, the Haystack batch orders made in Identify processing also carry the batch quantities from the sales order line items (net requirements after stock allocation … the original line item quantity is available from the link to the sales order itself).  They are not “lot sized” as in minimum lot sizes as with traditional mrp net requirements (i.e., in traditional mrp, the net requirement is 1, but the traditional minimum batch quantity is 5, so an mrp “planned order” would create a batch of 5 to get the 1).  This causes the traditional mrp system to order the factory to make the wrong (by modern standards vs. 50s standards) number of parts.  Haystack batches are the order amounts unless stock allocation decreased that to a net requirement.  The Haystack batch orders are also not offset for leadtime, yet.  At this point in an mrp run, the planned orders would have been given fixed leadtime offsets at each bill of material (assembly) break.  That creates excessive leadtimes and wip by modern standards.  During the Identify processing stage, as the batch orders were being created, the total demand on resources for the net requirements was also being calculated.  An mrp run at this point completely ignores capacity.  That’s why mrp is called “infinite capacity” planning.  Capacity is built back in later as a factor, but it’s too late to correct the problem created in the first pass, again by modern standards (i.e., given leadtime and wip expectations from flow manufacturing, just in time, OPT, OPT-like, other finite, or TOC dbr, or TOC dynamic-buffering DBR).

The Haystack Identify processor, while making the batches, has been totaling the setup and process times.  The Scheduler will use this data and other information and considerations to decide whether the Haystack system will create a flow schedule (with no physical constraint for drum) or will begin drum-buffer-rope with the first constraint as drum pacing resource.  We will assume, to make the point about TOC flow manufacturing using a Haystack system, that the setup and runtime totals vs. resource hours available indicate there’s much more capacity than demand.  So the scheduler does not select a constraint.  The Haystack system skips the Exploit phase and the Scheduler authorizes beginning the Subordination phase.  The meaning in this context is that the due dates of the sales or stock order line items act as the drum, and the scheduler’s selection of shipping time buffer and assembly time buffer offsets will be applied to the batch orders in a way that subordinates the flow of the factory to the due dates (serves meeting the due dates).

The shipping and assembly time buffer elements “place” batch processing time “load” into the factory’s work centers at times that are roughly when the batches would arrive there after release.  This checks for capacity shortfalls.  If the system detects a time period on a resource that seems to have insufficient capacity, it will first try to extend one of the buffers (make the release earlier; this is dynamic buffering).  If it is able to do this, the scheduling is finished, and the material release times and quantities to support simple flow scheduling are made available to the workers in the factory.
If capacity is insufficient for unmanaged flow manufacturing to work, and dynamic buffering’s earlier releases won’t solve the peaking problems because the first day (now) was reached and we can’t release parts yesterday, the system shows a “first day peak,” which the scheduler can handle in a variety of ways.  If it’s very minor — within the accuracy of scheduling flow manufacturing — the scheduler might ignore it, or note the resource profiles, see which is loaded the most, and shift a few hours of cross-trained labor from a less utilized resource, or offload; several things.  He/she can also greatly increase the efficiency of the schedule by declaring one of the resources the primary constraint to use as pacing resource and rerunning the subordination pass.  that creates a simple single-constraint drum-buffer-rope schedule for the plant.  we’ll assume no first day peak, meaning there’s easily adequate capacity for flow mode.  the scheduler authorizes the schedule, which takes the form of a list of material releases.  the subordination loading profile totals are saved for each work center, but, depending on the system option selected, the actual batches in that load are not saved.  they are not needed.  the non-constraints are not scheduled other than managing the control of the releases.  that’s what makes it flow manufacturing.  [actually, the batch orders are there, they could get annotated with their subordination placement … not sure … what’s best there … i know BAS/d/gs dumped the subordination placement info, and we didn’t need it, but that was 20 yrs ago … i’ll just note it for now …].  So on the 2nd glass layer are the net requirements batch orders … maybe full batch orders with stock allocation annotations … plus the release schedule, and now Haystack draws a buffer management display from due dates in the shipping time buffer and a buffer management control and information display for assembly using shipping and assembly buffer time elements … this data is on the 2nd glass … the batch orders, the release schedule, and … actually the shipping and assembly buffer displays would be created from order info on the 1st glass level, time buffer info on the 1st level, and wip position from the 1st level … and that’s it.  schedule created, sent to floor, and now the shop releases, first-in-first-out flow, unless overridden in sequence, or by batch splitting and/or overlapping, by decisions based on info from buffer management.
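And a minimal sketch of the Subordination idea just described (place each batch’s load into work-center buckets offset back from its due date by the buffer, push the release earlier when a bucket overflows, and flag a first-day peak when the release would have to be “yesterday”), with made-up numbers and a single daily-bucket simplification:

```python
# A toy illustration of placing subordination load and flagging a first-day
# peak.  One daily bucket per work center, a single shipping buffer, and all
# numbers invented; real Haystack subordination is much richer than this.

HOURS_PER_DAY = 16.0

def subordinate(batches, shipping_buffer_days):
    load, first_day_peaks = {}, []
    for b in batches:                       # each: due_day, work_center, hours
        day = b["due_day"] - shipping_buffer_days
        # dynamic buffering: if the bucket is full, release (and load) earlier
        while day > 0 and load.get((b["work_center"], day), 0.0) + b["hours"] > HOURS_PER_DAY:
            day -= 1
        day = max(day, 0)
        if day == 0 and load.get((b["work_center"], 0), 0.0) + b["hours"] > HOURS_PER_DAY:
            first_day_peaks.append(b)       # can't release "yesterday"
        load[(b["work_center"], day)] = load.get((b["work_center"], day), 0.0) + b["hours"]
    return load, first_day_peaks

batches = [{"due_day": 5, "work_center": "MILL-2", "hours": 10.0},
           {"due_day": 5, "work_center": "MILL-2", "hours": 9.0},
           {"due_day": 6, "work_center": "SAW-1",  "hours": 4.0}]
load, peaks = subordinate(batches, shipping_buffer_days=3)
print(load)    # the second MILL-2 batch gets pushed a day earlier
print(peaks)   # empty: no first-day peak in this example
```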

Bet Eddie was in one of my Apics cm toc 4 day courses  ,/deL

i didn’t know eddie van halen knew how to use a haystack system for flow manufacturing.  wow.  he’s really good.

One Minor Correction

[next day … or, i guess, later that day … jan 14 ]

I remembered something after I finished that first journey through a Haystack-compatible scheduling module’s 3 scheduling processes — Identify, Exploit (skipped in that example), and Subordination.  It was something I hadn’t thought about for a long time.  It’s that a Haystack-compatible system — even when dealing with a factory condition where capacity is high enough compared to the production demand that “flow scheduling” and “flow-managed” or “self-scheduled” production is possible  — will still give the company the benefit of dynamic buffering.  It will still, for individual batches, one at a time, lengthen either, in this case, the shipping time buffer or assembly time buffer time offset to cause materials for only those batches to be released to the shop a little earlier.  Why?  To level out the demand those individual work centers will see when the schedule is issued to the floor.  That’s a good thing.

The Issue of Haystack System Monitors

A Haystack system should have certain types of “monitoring programs” that operate during scheduling processes and let the user know what it’s doing right now, how much it’s done, what it’s found so far, and roughly how much work and time remain for the current scheduling processing task.  It should provide easy-to-use tools for looking around in the part of the results that have already been produced.  It should allow for a few selected actions, like pausing, restarting, or cancelling the current processing task.

To see what I mean, think of programs that install other programs on your computer.  Or think of anti-virus programs that scan your computer, or backup programs that back up your disk drive.  They give you those little windows with all the info and function I mentioned in the first paragraph.
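Just to make the idea concrete, here’s a bare-bones sketch of that kind of monitor wrapped around a long scheduling pass.  It’s nothing more than a progress counter with pause and cancel; the class and method names are mine, not anything from an actual Haystack product.

```python
import threading

class ScheduleMonitor:
    """Minimal progress / pause / cancel monitor for a long scheduling pass."""
    def __init__(self, total_batches):
        self.total = total_batches
        self.done = 0
        self.findings = []                 # e.g. first-day peaks noticed so far
        self._running = threading.Event()
        self._cancelled = threading.Event()
        self._running.set()                # start out "not paused"

    def step(self, finding=None):
        """Call once per batch processed; blocks while paused, raises if cancelled."""
        self._running.wait()
        if self._cancelled.is_set():
            raise RuntimeError("scheduling run cancelled by the scheduler")
        self.done += 1
        if finding is not None:
            self.findings.append(finding)

    def status(self):
        return f"{self.done}/{self.total} batches processed, {len(self.findings)} findings so far"

    def pause(self):
        self._running.clear()

    def resume(self):
        self._running.set()

    def cancel(self):
        self._cancelled.set()
        self._running.set()
```

The Identify or Subordination loop would call step() once per batch, and the little window on screen would poll status() and wire its buttons to pause(), resume(), and cancel().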

Like other scheduling programs of the era, the two Haystack systems I worked with in the 1990s didn’t have these, but all Haystack systems built in 2011 and beyond should.  There were a lot of things systems in the 1990s didn’t have, because programming teams were focusing first on functional essentials, then useful utilities, and then maybe some jazzy non-essential useful stuff to help with the sales and marketing and help with beating competitors who were doing the same thing.  But there has been a continuing revolution in computer industry technology and economics — continually making much more speed, storage, function, ease-of-use, and inter-connectivity available at far less expense — and, even more important for this discussion, software development methods and programming automation tools have continued to evolve at the same rates, allowing much more robust systems to be developed with the same systems design and programming resources.  One example that I’m familiar with from learning and using a little, but not mastering, is Microsoft’s .NET (“dot net”) free downloadable series of development platforms for C# and other object-oriented programming languages.  I was using them in around 2006 and maybe 2007.  These are so different, and do so much toward having software write very complex and very bug-free, robust software, with the person operating it giving the software-writing software only very general instructions … so different from the C programming language with just its modules, and even from the “computer-assisted software engineering (CASE)” systems I was doing the same thing with in the late 80s and early 90s … which were, in turn, so different from the BASIC and IBM machine language, oh, and FORTRAN, sequential programming languages I was dealing with in college in the 70s … What I’m saying is, what we’re seeing right now as we use very easy-to-use and very elaborate systems here on the internet and at home with installed or downloaded software is the result of both decades of continually higher-performance and less expensive computer hardware and, just as importantly, the continued parallel advances in the concepts, procedures, and software for automating the writing of excellent, comprehensive, and bug-free software.  The software that writes software was stunning in the 90s, unbelievable in 2006-7, and today …

… well, what I’m trying to say is, we don’t have to be constrained in our view of what Haystack systems can and definitely should do by anybody’s memory, including mine, of what the first two such systems — the Goldratt Institute’s and later TOC Center’s BAS/Disaster/TheGoalSystem, and Thru-Put Technologies’ and later somebody else’s Resonance — did in the 1990s.  A big part of the reason I haven’t hesitated to suggest on these two pages things like changing the Parts table to include a lot more rows for in-process parts, doing away with the Work Orders file and table, replacing what’s left of it when wip is removed with a new Operation Results table, and changing ALL the affected transaction programs to match these relational database structure changes, is I know it won’t be that big a deal for software companies with up-to-date concepts and development tools.  Same goes for suggesting the new transactions also maintain the Perpetual Schedule-Based Decision Support data set in main memory.  Same goes for signalling with my long list of Haystack “applications” (Scheduler, Purchasing Manager, etc) on the previous page.  That sounds like a lot of development but, if you understand a little about concepts and tools for fairly easily building relational database tables, queries, views and reports, and fairly easily building transactions around them (Larry Ellison’s Oracle development system was already doing that on my desktop computer in Natick in 89 or 90 or so), you know that most of the actual “people time” and “people work” that needs to be applied to get this done right — prior to having entire new and complete enterprise-wide database-based systems created by flipping the switch to “go” (that’s a metaphor, btw … there’s no switch … it’s a mouse click … same thing, right?) on the programming automation software-that-writes-software tools — is the kind of thing that’s been going on in these two pages: thinking through what each person/role wants to know and do by verbalizing it out in a systematic, common-sense way.  hlayk cc ,/♥ 1p or so
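Since I keep waving my hands at those database changes, here’s roughly what I mean, expressed as two row layouts.  Every column name here is an assumption for illustration, not a spec: the point is only that in-process parts get their own rows in the Parts table, and the new Operation Results table keeps the completion history the old Work Orders file used to carry, with links back to the batch and the order line item.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PartRow:
    """One row of the expanded Parts table (hypothetical columns).  In-process
    states get their own part identifiers instead of living inside work orders."""
    part_id: str                # e.g. "AXLE-ROUGH", "AXLE-HAMMERED", "AXLE-FINISHED"
    description: str
    stage: str                  # raw / in-process / finished
    on_hand_qty: float
    location: Optional[str] = None

@dataclass
class OperationResultRow:
    """One row of the new Operation Results table: what is left of the old
    Work Orders file once wip tracking moves into the Parts table."""
    batch_id: str               # links to the Haystack batch order
    source_line_id: str         # links back to the sales/stock order line item
    operation: str
    resource: str
    qty_completed: float
    qty_scrapped: float
    completed_at: str           # timestamp of the completion transaction
```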

What brought the monitors idea to mind was explaining, above, that dynamic buffering is going on behind the scenes during subordination processing (which means applying lead time offsets to calculate material release times for individual batches).

What I did say was that, when subordination finished, a first day load display would show whether a particular work center had enough capacity on average over the scheduling horizon, but not enough capacity in certain time windows within the scheduling horizon.

If you think about it, this can easily happen depending on whether due dates for orders from customers are spread out evenly over time, or whether there are no customer orders with due dates in one week but LOTS of due dates in other weeks.  Some people pound the table and argue that a great manufacturing company controls what orders it accepts to get some leveling in the sales and stock order line items.  Others pound the table and say a great manufacturing company gives the customer exactly the due dates they want and, by god, makes it happen.  Well, all that huffing and puffing on both sides is a little unnecessary.  Because you try to do both things up to a point.  Range of validity.  Internally, you have the culture be about moving mountains and enjoying pleasing the customer.  But, realistically, there are a lot of issues.  There’s the advertised leadtimes.  There’s the competitors’ lead times.  There’s what you can deliver with soft demand in the plant.  There’s what you can promise when you’re full.  There’s helping an important customer get out of a jam.  There’s the fact that it’s sometimes appropriate, depending on a company’s position in the market (everybody really wants them, even at a higher price than elsewhere), to get a premium price for an unreasonable order that causes disruptions.  All of these realities are always there, and a Haystack system lets the management team make more intelligent short- and long-term decisions based on realistic views of capacity and constraints-based TVA views of the effects of alternatives, and that’s a big deal.

My thought was that there’s really no need to wait until the subordination processing finishes to find out that a particular work center is having a lot of peaking in load, giving rise to dynamic buffering increases of batch time buffer length.  If a monitor is in place with nice graphical views that can be clicked on for different perspectives and detail, the scheduler can … i’m now trying to remember if the peaking already was brought to the scheduler’s attention … it was on the “red lanes” (batch orders on resources between a physical constraint and shipping) … a little window popped up that gave several realistic common-sense options that reflected in software what a production manager would do in reality (offload, add hours, add overtime), but also the option of declaring a second constraint, which is the idea of sequentially-selected constraints (vs. pre-selected constraints or even pre-selected bottlenecks) …

I don’t remember for sure what was there at the time.  Doesn’t matter.  What matters is what new Haystack systems should have today and going forward, which is not only little windows popping up with exception situations to decide about, but also well-thought-through monitor and inquiry and analysis programs, with appropriate actions, that are available throughout the potentially-somewhat-lengthy Identify and Subordination processes (computer speeds are up, so scheduling times will go down, except that more things will be added, because “reasonable length of processing time” is part of what program designers use to decide which parts of reality to include and which parts to make assumptions about), and also during any runs of the standard and location-specific custom utilities that schedulers will spend a LOT of their time with in Exploit (drum scheduling), with financial TVA “what if” support … if a little drum batch utility is started, a monitor should keep the scheduler up to date on what’s going on, and let her/him intervene and cancel and take some other action, vs. just having to wait until it’s done and realizing, from a result that took a long time to get, that some other action is best for getting the best drum schedule: the batch sequence schedule on the physical constraint, the process of establishing which “master schedules” the factory by making intelligent adjustments to customer order and stock order due dates.

The need for these monitors becomes obvious when you use a Haystack system.  It’s useful to add them as part of the 2011 Haystack specs.  By now, folks who looked at Haystack systems in the 90s and said, “oh i think that’s too complicated and too much implementation trouble for me,” and then said “i’ll just go simple visual jit or flow”, or “i’ll just tinker with my system and get my mrp to give me a pretty good drum buffer rope”, or “i think i’ll go with another finite scheduling package”, or “i think i can do the part i want with an MES” — all of which intelligently select and go after a then-practical subset of the overall benefits haystack systems will give for their particular product structure, plant structure, level of computer skill and budget, and typical order mix/timing —  all of these people can now say, ok, right, that’s now not only interesting, but necessary, and not only necessary, but practical and affordable to acquire and implement.

Way2Go, Avraham Mordoch!

So my friend and mentor, Avraham Mordoch, the Goldratt Institute’s development program manager for BAS/GoalSystem, who learned along with Eli Goldratt and Oded Cohen and others at Creative Output during the OPT era of innovation, has finally gotten it done.  Way2go, Avraham!   ,/a

xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

“Clean Slate” Time regarding … First Impressions of Haystack Systems formed in the 1990s …

… that obscured the real nature of the conceptual maturity and full future and even then-existing potential of Haystack systems

What I’m talking about here in this section about starting from a “clean slate,” a new “blank page,” to clear away “first impressions” formed in the 1990s about Haystack systems is this:  There were a lot of scenes in the 90s that went something like the one I’ll describe in a moment, and then word got around in the small global world of factory employees, consultants, and software folk.  What it amounts to is a lot of people, without the time or the inclination or a commercial agenda to think things through, formed and passed along incorrect impressions about the promising, but imperfect, blend of the ideal and the practical that Haystack systems were back then.  Only a few were able to see around the temporary issues, which were arising mainly from a lack of consensus and resources, and not really arising, even back then, from a lack of current practical physical possibility for most cases.  I may be over-stating the “current physical possibility for most cases” for the 90s.  Have to think about it.  There’s a big difference between a few, many, most, and virtually all.  The situation is different for simple TVA accounting vs. simple buffer management shop control vs. real simple flow “buffer rope” vs. relatively simple dbr vs. dynamic buffering.  The situation is real different for a-plant assembled product “discrete part” plants vs. v-plant “process industry” plants.  Real different between simpler assembled products plants (our lawn mower plant) and really complex plants (General Electric or UT’s jet aircraft turbine engines, or my pal andy nicol’s caterpillar huge and small tractors; big difference in the size of data sets and the run times of data extraction and conversion, which would go away if systems provided interfaces and transaction support, and in scheduling processing times, which could have been adjusted for by “porting” BAS/GoalSys or Resonance to Sun or other workstations).  And real different for basic commercial vs. aerospace/defense/medical.  And real different for products made routinely including configure-to-order vs. full-custom-engineered-once-then-reordered vs. full-custom-engineered-only-buy-one.  I guess I’m saying a lot more manufacturing companies could have used Haystack even back then, even with the need for batch extraction and conversion of data to get the 1990s form of the non-perpetual SBDS data set memory object, and even with the need for transaction support to maintain and make the SBDS data set object perpetual, and even with the speed and main memory of systems at the time.  Not the extremes of complexity like the stainless steel factory at coshocton, and ge aircraft engines, and cincom’s base of aero/defense maybe at that time, but A LOT of others — if there had been enough understanding and industry consensus to have better word-of-mouth, more data extraction interfaces, more transaction support and, where needed, more “porting” support (that means reprogramming the program to work on another operating system, like “porting”, say, Microsoft Word from Windows to Apple or Sun Microsystems or IBM, whatever their operating systems are called these days).
The more complex companies often do their own systems in house or use major “systems integrators” and the systems divisions of major accounting firms (there’s one part of the commercial agenda … the big 8, then 7, then 6, then 5, or whatever, accounting firms made BIG bucks converting factories into simpler visual flow factories … I told anybody from those firms who would listen they should be building TOC and DBR and Haystack elements into their already very profitable mix of product, consulting, education, custom programming, database value-added reselling (VAR), and implementation services) … jim semerad can tell you i was lobbying any EDS person i knew to do that for GM and its other systems integration and systems facility management clients … i also lobbied anybody i met from the huge integrator based in El Segundo, CA, and my german friend tom in IBM whom i met via Carol Ptak (tom, i think mannmeusel, from IBM germany, was the first i heard speak from having thought through basing toc/haystack functions in the internet), and Ptak, who had clout in ibm, and anybody I met from Cap Gemini, the big integration house, and of course my pal David Cahn at Computer Associates, who was so helpful to me in my early CM SIG days in helping people understand that the opportunities were serious, and my pal Kenichi Igarashi, who stopped by my home and office with one of his friends, as Ken, Igarashi, was doing a LOT with toc and toc systems in NEC worldwide and his friend was including TOC in education within NEC worldwide … that must have been, i gave ken a draft copy of my book, but i mentioned that event in the notes in my book, so that’s between june and nov 97 …

The 1990s: Both Progress and Missed Opportunities

So, anyway … the old joke in the American idiom/slang, “coulda woulda shoulda” (as in “could have” “would have” “should have”) … lots of good things happened … some nice opportunities were missed … but, obviously, the opportunities never went away … the right solution remained the right solution throughout all the free market and free speech understandings and misunderstandings … throughout all the progress and obstacles … all the successes and “learning opportunities” …


The Great TOC Haystack vs. JIT Showdown in the Old Midwest

I give it that name because that’s how people within a large multi-division manufacturing company and the external anti-TOC american JIT community of consultants, professors, and activity-based-allocation-product-costing-accountants-turned-systems-integrators who were watching it were thinking of it.  For me, Avraham Mordoch, Donn Novotny, Eli Goldratt, Bob Fox, and the Goldratt Institute programming staff, it was just the logical extension, the next step, in a logical progression, in 1991, of the conceptual, software, and implementation knowledge development concerning the new system — the first Haystack-compatible system, the first system developed based on Eli Goldratt’s 1990 book, The Haystack Syndrome.

This will give a little picture of the kind of thing that happened in the 90s that created impressions that were not incorrect, but that, considered simplistically, obscured the real nature of the conceptual maturity and full future and even then-existing potential of Haystack systems.

Auto oil seal plant in the mid-West US.  Two brother/sister/sibling (whatever) plants in I think two different states — one in Indiana — are competing within a large corporation.  One plant is filled with enthusiastic Just In Time/Flow fans.  The other with equally-enthusiastic TOC and Haystack fans.

It’s 1991.  The scheduling module of the first Haystack-conformant system, BAS, had been released to several Goldratt Institute development partner manufacturing company clients (GM/EDS was one, there was a European privately-held big auto muffler maker, I don’t think Kent Moore Cabinets was in yet, Zycon was in early, a few others in the first meeting I was invited to attend and observe in July 89) before I was around starting in June/July 89, then 3 more that I worked with in 1990, then several more that others worked with.  How many development partners total?  Not sure.  A dozen?  Probably.  Two dozen?  Probably not that many.  No more than that.  In late 90/91, “pre-release” partner versions were made available.  The product was called, BAS, Business Analysis System.  My friend and mentor and really nice guy, Avraham Mordoch, was the president of the Institute’s development company, although his motherly assistant, Harriet, was really in charge of things on the Institute’s headquarters third floor.

BAS was renamed in late 90 or early 91 to “Disaster” by the colorful Eli Goldratt, to discourage people from buying it and trying to implement it without first understanding TOC.  It was later renamed TheGoalSystem for obvious reasons, but that’s not a good name either if you’re real picky/knowledgeable, because the details of the system part of at least the earliest two editions of The Goal aren’t true drum buffer rope (i forget why i thought that when i reread it, but i’m sure it’s true), and what’s in The Goal, at least in the first two (84 and 86) editions, is also certainly not dynamic buffering or haystack-era drum scheduling (with bulldozer metaphor, et al).  That first Haystack system should get yet another new name, like, how about Haystack Classic … not perfect … worked for Coke Classic, was worth a try … maybe Original Haystack? … Monet would be a very good name for somebody to jump on and service mark right quick, since the impressionist artist, Monet, famously painted all those paintings of haystacks.  Cute.  Elegant.  Encoded a little bit.  Built-in fabulous public domain art to put on screens, manuals, bar code scanners, stock bins … what?  stock bins?  sure.  get Gene Simmons to work with you on the product proliferation strategy … Monet mostly-clear plastic shop paperwork holders, art on the inside back, clear on the front.  Leading to phrases like, Monet: Making Manufacturing Even More Beautiful.  Somebody should definitely do that.

Next:  How Come the Database Suppliers Didn’t Help? …

… They could have programmed the interface (data set extraction) programs once and re-used them throughout their installed base and new installations …

Back to the story.  A company would get excited by the pretty much unavoidable logic of the 1990 book, The Haystack Syndrome.  They’d have internal data processing staffs jump through all the hoops to write little custom programs to get the data out of their manufacturing system’s parts master file, routings file, bill of material file … others?  maybe that’s it … oh, of course, orders file … will need open work orders to get the full picture of “parts” … what else? … sure, work center file, schedule file … though it was easier to just type those in rather than write another interface/extraction program … that might be it … but that was A LOT OF TROUBLE … not having interface/dataExtraction programs already in the underlying peoplesoft, manman, maxcim, jd edwards, oracle, ask, or other database system made just getting to the first schedule A REAL PAIN IN THE NECK … notice, though, that this part of what led to exhausting and wearing out the patience of those who thought they wanted Haystack could have been avoided if the database system supplier had seen the opportunity, done it once at that location, and had the code for the rest of its installed base and new installations, relieving the local factory data processing and manufacturing employees from having to do that part … that would have been helpful because there was a LOT more to do …

i’ll explain.  what?  of course I will? smart ass …

An Exciting Foot Race to Lower Leadtimes and WIP

So, in maybe late spring, maybe summer 1991, the race was on between the two factories that made pretty much the same type of automobile oil seals using pretty much the same materials and factory equipment.   Which plant would finish first?  And which plant would get to lower leadtimes and lower work-in-process inventories?

Next may be AGI’s fault:  Proprietary SBDS memory object?

I’m not sure if that memory object I’ve been calling, on these pages, “the Schedule-Based Decision Support data set memory object” — which needs to become the “Perpetual Schedule-Based Decision Support data set memory object” (it didn’t really have a name in the 90s, and it needs to have one and to be discussed in the same kind of depth in which object-oriented programmers talk about, build, and use other objects in main memory that reflect real things in the world) — was really intentionally proprietary.   But I don’t recall ever seeing a published spec that described it.

As I’m thinking about this situation in the 1990s, with the benefit of my experience in 2006-7 learning object-oriented software development concepts and tools, I’m realizing that, without such a published “open” specification for that SBDS memory object, there was no way that the programmers at PeopleSoft (was PeopleSoft even around then?), JD Edwards, ManMan, CA MaxCIM, Cincom, and all the custom integrators could write programs to create the SBDS memory object on a non-perpetual basis or to modify transactions to maintain it on a perpetual basis.

So that’s going to nip in the bud (pre-empt, stop before it much gets going) the little energy flow that was just starting about, “well, you know, the manufacturing mrp database suppliers could have made changes to make all of this easier,” which was a little disingenuous anyway, because I know there were a lot of other things going on (tva vs. abc/aa/abm, the new inventory system, everybody used to the old work order, perceived loss of revenue for the mrp II suite, which was … oh, ok … of course …

and Eli Goldratt didn’t just want to create a new standard.  He also wanted the software revenues, at least the licensing slice, from all those BAS/Haystack workstations (not the database, but, actually, negotiating could have gotten him a slice of database sales too, like sanjiv and sharma apparently did at i2) for himself, of course (he liked both being a smart physicist and getting rich), but also, at the time, he was talking about using BIG money, he talked about “a billion dollars” … sounds like a lot, but think of software profits and billionaires bill gates and larry ellison (oracle) and sanjiv sidhu (i2), and probably the Wang brothers at Computer Associates walked away with a cool billion, and Tom Nies at Cincom made a lot of dough, don’t know if a billion … anyway, Eli wanted the dough for himself and his partners, of course, but also to fund TOC for Education in a big way … in schools … k-12 sort of thing …

So, now I’m guessing that memory object was proprietary on purpose …

[Update.  later.  1/15.  second thoughts to the second thoughts … third thoughts? … anyway … Hm … but, if he had made the perpetual SBDS data set object “open,” and if database houses had supported it, that would have helped get the education project funds from the software sales, because implementations would have gone easier and workstation function could have progressed faster … so not sure if it was on purpose or not … either way, it would be a good thing to have an open spec for the pSBDSds now, one that both manufacturing application and database houses could write to … ]

Thinking further … but, in my view, a “getting a smaller slice of a MUCH bigger pie” theory might have said go ahead and publish the spec for the memory object and get your billion from partnering with somebody huge (oracle, ca, eds, etc) who will use the brand to get attractive enough market share and price, while everybody else gets some too by changing over to Haystack.

Big game.  Big stakes.

In a very real way, that game’s still on.  Where’s the original BAS/Disaster/GoalSystem now?  There’s a great brand play there for somebody.  And agi/eli can authorize/certify/license (for a fee) all the others that come on board.  Some may sell as “haystack compatible” without getting certified by agi, but the ones on the Goldratt Institute (AGI) list will have an edge in most selling situations …

and, as to the SBDS object … i’m not sure to what level and extent that will or should get “open” spec treatment beyond the contents of those orders, arrows, stations, resources, raw materials, and calendar files … ah, it will have to be “open,” or at least available as an API, for someone selling only the workstations who wants transaction support from database providers … but say an integrator like computer sciences or EDS or whatever the old anderson consulting outfit’s called now decides to follow Mother Nature and public domain Haystack like I suggested they all do back then and create their own … that can be a proprietary perpetual SBDS object … which leads to the usual open vs. proprietary business decision in any technology area … if somebody leads a committee to form an open perpetual sbds spec, many will use it, but some may do it differently and claim, correctly or incorrectly, that it’s better than the spec created by a committee (the horse and camel joke and analogy applies: a camel is a horse created by a committee, etc)  cp,/

There was a published spec that described the interface to BAS/Disaster in terms of an intermediate set of files, made up of rows, records, and data elements (a file each, in text format, comma delimited, maybe not quote-and-comma delimited, just comma delimited, for Orders, Arrows, Stations, Resources, RawMaterials, and maybe others; there was also a little utility, called the calendar utility, to allow putting in hours for resources and create the shop calendar file), but that was for input into a BAS converter program that created the object.
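I’m not going to try to reproduce the actual field layouts here (I don’t have them in front of me), but the general shape of that interface is simple enough to sketch: one comma-delimited text file per entity, read into memory for the converter.  The file names and column lists below are placeholders, not the published BAS layouts.

```python
import csv

def read_extract(path, field_names):
    """Read one comma-delimited extract file (Orders, Arrows, Stations,
    Resources, RawMaterials, or the shop calendar) into a list of dicts."""
    with open(path, newline="") as f:
        return [dict(zip(field_names, record)) for record in csv.reader(f)]

# hypothetical usage -- the file names and columns are illustrative only:
# orders    = read_extract("orders.txt",    ["order_id", "part", "qty", "due_date"])
# stations  = read_extract("stations.txt",  ["station_id", "resource", "setup_hrs", "run_hrs"])
# resources = read_extract("resources.txt", ["resource_id", "hours_per_day"])
```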

In other words, headache number one was modelling.  I’ll come back to this.  This one doesn’t go away.  It’s not so bad, really.  And it will get easier as the standard files evolve in the way we’ve discussed (parts file with wip, etc).

Headache number two (mentioned above): writing the data extraction and data massaging programs to get the BAS Arrows and other files for the converter.

Headache number three: running the converter program to troubleshoot the data extraction and data massaging programs.  Run it once, then fix the errors and exceptions in the input.  Actually, that means going back to the extraction programs and fixing the problems in how the data was selected and truncated, upper/lower case adjusted, zero-filled, special characters replaced, etc etc etc.  Several days later, new Arrows and other files.  Run the converter program a second time.  Ditto, with fewer errors.  Keep doing this until the errors were so few and trivial, but hard to fix comprehensively, that the production control folk would fix them by hand rather than go back to the by-now a-little-exasperated and always busy data processing people to further fine-tune the custom in-house data extraction and massaging programs … bob vornlocker and gary smith spent a LOT of time working this with the ITT ac pump IT people.

Headache number four: once a pretty good BAS converter program data set was available, the run time for creating the non-perpetual SBDS data set memory object.  Non-perpetual, since it’s being built from a snapshot of data extracted overnight or over the weekend and, meanwhile, with every hour that passes, the factory is making the extracted data out of date, so the non-perpetual SBDS data object is also going out of date before it ever gets to main memory to start scheduling.

Headache number 5 … but first …

Who Was Likely to Win the Leadtimes and WIP Showdown?

What was the very smart corporate-level management team hoping to find out by letting (and funding) a little friendly constructive within-the-family inter-factory rivalry turn into a case study of JIT vs. TOC Haystack computing?

Actually, as I keep typing “JIT”, I’m not sure if they (the “other plant”, the “non-Indianapolis plant”) were going to do JIT like kanban … ok, I remember now … thinking about kanban jogged my memory … the other plant was definitely going to go to kanban-signaled-and-managed JIT.  That means bins (containers) and cards (kanban, I think, means “card” in japanese) — visual things and then free batch flows for JIT vs. computer things and then free batch flows for TOC.

Three Ways to Keep Only the Right Materials Released in the Shop

Kanban JIT is one way to make sure only what is needed to be worked on soon gets released to the shop floor.  TOC drum-buffer-rope and TOC Haystack dynamic buffering DBR are two other ways to do the same thing – make sure only what needs to get worked on soon gets released to the shop floor.

Why Controlling Material Release Lowers WIP and Leadtimes

Here’s why making sure only what needs to get worked on soon gets released to the shop floor creates low leadtimes and low work-in-process inventory.

Zero WIP.  If you start with an image of everybody at their workstations ready to work, but no materials released to the shop for people to work on, you have work-in-process inventory equal to zero, right?  Right.  We TOC folks used to joke about the “zero inventories” crusade and crusaders and say, “hey, you guys want to get factories to zero inventories?  just get rid of all the parts,” which was dumb, but fun, and also a little bit right.

WIP = 1.  Anyway, now release one batch of 1 part into the factory.  You now have wip equal to one unit at whatever its cost is.

The magic of “overlapping” work stations.  Now watch how fast that part moves from shop release all the way through all the operations to shipping when it doesn’t have to wait for a machine to stop working on some other part, and the work centers up ahead can get those 15 minute or 5 minute or 30 minute setups done before the part arrives and also make sure any needed tooling, fixtures, drawings, etc are on hand.  man, that part just flies from work center to work center, with the total time from release to shipping being the total “run times” of the operations, right?  maybe with the setup of the first operation added in too, depending on whether that first op gets to know what’s coming soon.  but that’s, maybe, with the setup at the first op, 15 min setup + 2 min for the op + 4 min next op + 15 min next op + 5 min last op … 15 min setup, 26 minutes runtime … completely through the plant in less than an hour, or less than a half-hour, depending on that initial setup … but not 1, 2, 3, or 4 days …

WIP = 10.  Now it’s an empty plant again.  Now release a batch of 10 parts.  The same thing happens.  the downstream work centers all set up for the parts, so there’s again no waiting for a setup … the first part sails thru the 2 min and 4 min ops, and is still at the 15 min op when the 2nd and 3rd parts catch up with it.  the parts sort of line up at that 15 min op but otherwise sail through the ops before and after.  check my math, my eyes aren’t so good, but the first part arrives at shipping in the same 26 minutes as in the first case, and the last part gets to shipping in something like 150 min plus 6 plus 5 minutes, or 161 minutes, less than 2 hours.  not 1, 2, 3, 4 days.  that’s what an “overlapped batch” looks like and behaves like.  that 10 could be a new batch split off an original batch of, say, 50.  that would make it the famous “splitting and overlapping of batches.”  why famous?  because when policy constraints from the preMrp and mrp era are stripped away, and systems structures like the new Parts table we’ve been discussing are in place, splitting and overlapping batches becomes part of the key to getting the efficiencies of larger batches when the current load on the plant allows for it, and the low wip and leadtime of smaller batches, or even a batch of 1, when the current situation says that makes sense.  it’s what made OPT and OPT-like i2 technologies work amazingly in well-selected circumstances.  it’s one important part, within a more comprehensive and widely-applicable TOC DBR concept with fewer restrictions and side effects than OPT or i2 or JIT or nonTOC DBR flow, of what makes it possible for TOC DBR and TOC dynamic buffering DBR to be more simple than OPT or i2, more flexible and better ROI than JIT, and still get low leadtimes and wip.
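If you’d rather check that arithmetic with something other than my eyes, here’s a tiny simulation of the overlapped batch.  The only assumptions are the ones already in the story: the whole batch is released at time zero, downstream setups are already done, and each unit moves to the next operation as soon as that operation is free.

```python
def overlapped_flow(batch_size, op_minutes):
    """Fully overlapped batch: each unit moves to the next operation as soon as
    that operation is free (downstream setups already done before arrival)."""
    free_at = [0.0] * len(op_minutes)    # when each operation is next free
    done_times = []
    for _ in range(batch_size):
        t = 0.0                          # the whole batch is released at time 0
        for i, run_min in enumerate(op_minutes):
            start = max(t, free_at[i])   # wait behind the previous unit if needed
            t = start + run_min
            free_at[i] = t
        done_times.append(t)
    return done_times[0], done_times[-1]

first, last = overlapped_flow(10, [2, 4, 15, 5])
print(first, last)                       # 26.0 and 161.0 minutes
```

Same back-of-envelope numbers as above: 26 minutes for the first part and 161 minutes for the last (counting the 15-minute first-op setup just shifts everything by 15 minutes).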

TOC Haystack systems provide loose management …

(like “first in first out” self-scheduling of non-constraints, and approximations of the timing of likely arrivals of batch loads at non-constraints during subordination, and policies letting shop workers and supervisors combine or split batches when the obvious opportunities are in front of them … the haystack system does NOT schedule splitting and overlapping and combining and recombining batches … people do that as things happen … trying to do that in advance is normally silly … but range of validity applies to nearly every statement, right? … post-schedule-release utilities to assist managers in considering and implementing splits and overlaps and combines/recombines of batches should be part of Haystack’s buffer management shop control module … the system can maybe advise, provide “split advice” and “overlap advice” messages … so the BIG statement is the computer doesn’t start out scheduling splits and overlaps … in fact the Haystack system gives release advice and preliminary DRUM batches in quantities matching the sales and stock order line items … it’s a BIG DEAL that this is true and that these Haystack counterparts of mrp “net requirements planned orders” on the “second layer of glass above the zero-layer canvas in main memory” show sales and stock order line item quantities and also carry “sales and stock order line item identifier” tags to allow linking back to and using the due dates and other customer and order information …

(THAT IS A VERY BIG DEAL.  DO NOT MISS IT.  STOP STOP STOP RIGHT NOW AND SPEND 3-5 or 30 MINUTES THINKING ABOUT THIS, AND COME BACK TO THIS POINT OVER AND OVER AGAIN, SO YOU NEVER FORGET IT.  this seemingly-small, easy-to-hear-but-not-really-hear fact about how the Haystack data structures for net requirements batch orders get created during Identify processing (which also gives data on overall resource loading) IS THE KEY TO EVERYTHING THE HAYSTACK SYSTEM WILL DO IN ALL OF ITS FUNCTIONS THAT KEEPS ACTIONS AND DECISION-MAKING — EVERYTHING — ON A GLOBAL/COMPANY-BOTTOM-LINE-AFFECTING BASIS VS. A LOCAL/MaybePrettyIrrelevant BASIS …)

what?  stop shouting?  ok …

you can tweak a materials requirements planning (mrp I, “little mrp”) program to force it to create “net requirements” “planned orders” with batch sizes that match the quantities on sales and stock order line items (by zeroing the batch size data element in the routing file for all the parts), but you can’t force the mrp program to put sales and stock order identifiers — which can always link to information about whether it’s a customer or stock order, who the customer is, and what the latest decision about the due date is — into any of those “planned orders.”  On a Haystack system, you can tell the difference between a batch of 2 units of product that your most important customer is asking you to pay very special attention to and another batch of 2 of another product that the customer has already told you he doesn’t need right away and might even cancel.  On an MRP system, all the “planned orders” that come out of the “net requirements” process look alike.
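Put side by side as toy record layouts (field names invented, nothing vendor-specific), the difference looks small and matters enormously:

```python
from dataclasses import dataclass

@dataclass
class MrpPlannedOrder:
    """What falls out of an mrp net-requirements pass: part, quantity, date,
    and no link back to whoever the quantity is really for."""
    part: str
    qty: int
    due_date: str

@dataclass
class HaystackBatchOrder:
    """Same net requirement, but carrying the sales/stock order line identifier,
    so customer, priority, and the latest due-date decision stay reachable."""
    part: str
    qty: int
    source_line_id: str

def describe(batch, order_lines):
    """order_lines: line_id -> dict with (at least) 'customer' and 'due_date'."""
    line = order_lines[batch.source_line_id]
    return f"{batch.qty} x {batch.part} for {line['customer']}, due {line['due_date']}"
```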

…  back to splitting and overlapping and combining and recombining batches …

… so the Haystack computer doesn’t schedule the splitting and overlapping and combining of batches … people see it and do it when they think it makes sense … but range of validity always … so the SMALLER comment is that newer Haystack systems can have little routines (some standard, some with local parameters, some full custom, like the DRUM/constraint scheduling mode) for assisting … this paragraph and even this parenthetical comment are enough to keep apics members and ops and mfg and ops analysis professors writing new and very useful technical papers for quite a while … right, jim cox, johnny blackstone, tom hoffman, steve king, dan t, eli s, boaz ronen, lisa scheinkopf, bob stein, my pal prof jim in washington state?)

… [loose management] when getting the best of Mother Nature means not trying to over-control, and pretty precise, clear controls (like DRUM scheduling and control of shop releases) when Mother Nature says more structure begets higher performance, and allows moving between the loose and the tight in response to what’s actually happening … vs. pre-establishing things that are hard to change later and that only work best in particular circumstances of machine, product, order mix, and due date distribution.

1/2 hour or 2 hours, vs. 4 days?  We’re talking less than 2 hours for the batch of 10, with the first part arriving in only 26 minutes so that final assembly isn’t sitting around waiting, when the stated leadtime is usually about 4 days.  what, you say?  less than 2 hours vs. 4 days?  well, it makes a huge difference if everybody’s not standing around waiting for that one part to show up, and remember that speedy one part, because what we had it do is called “overlapping” and it’s part of the key to how drum buffer rope works …

Normal DBR “first in first out” flow sequence (in non-constraint areas).  Most of the time, batches flow steadily along through the non-constraints,

then priority and even overlap for buffer holes.  But, when a “hole” in the buffer management display says we need “one or a few” parts to maintain the overall schedule, the overlap situation is established, the needed parts scoot up to the front of the line, and the schedule is met.

How JIT does it with kanbans

[Notes 50 and 51 and 52].

We showed that we can reduce WIP inventories a lot by never releasing material to the shop.  But then we get no production, so that’s not our best solution. : )  Another way – especially if the product isn’t “full custom and one-time buy,” in other words, especially if you know you’re going to keep making some of the same products or at least the same parts, and even more so if you’ve decided to narrow your products into group technologies in order to make the manufacturing “repetitive” – is Just in Time with kanban.

In the circumstances where it works, Just in Time with kanban was a huge improvement over the prior typical way of running the factory.  In a way, for the non-custom, simpler, and more repetitive manufacturing, manufacturing has been flip-flopping.  First, before computers, things were done with paper, by hand, and in visual ways.  How else could they do it?  No computers.  Then computers came, and some pre-MRP and then MRP came in, and it worked pretty well in terms of getting things better organized, but it had side effects.  And the more customers demanded shorter leadtimes, the more chaotic it got with the early computers.  So companies like Toyota threw the computer out the window, at least the shop floor control part, kept the financial database, had to do that, probably kept basic planning for ordering raw materials, and went back to pre-computer days on the shop floor.  Not everybody could do that, but Toyota could, and a lot of firms did.  So they got a LOT more clarity, less confusion, and more control and made history — the famous Toyota Production System.  How did that work?  Well, let’s take the example of a Toyota factory that made, say, transmissions, or maybe axles.  We like axles on these pages, especially when they go to our hammering work center.  Anyway, let’s use transmissions so as not to confuse our images here.  Toyota might (and i’m making this up … i haven’t read the books or studied the famous TPS, but what i’m going to say is probably right) set up a few transmission final assembly “cells.”  A “cell” is a team of people that will get some work done on the parts that arrive at the “cell.”  In this case, the final assembly cell, the workers in the cell take an upper housing (i’m making this up too), a lower housing, a front shaft, a back shaft, say 12 different gears, a cork gasket, and 24 bolts, and assemble them into finished transmissions.  So how many parts did we create?  2 plus 2 plus 12 plus 1 plus 24 … did i get them all? … my eyes aren’t so good these days … so 41 parts … um, yes, but no … let’s make all the bolts the same part … so 2, 2, 12, 1, 1, or 18 different parts, 41 total parts in each transmission.  That’s for one model of transmission …

and that’s where things start to become a problem for toyota’s kanban system: when any one work cell needs to change from one model of transmission to another.  Not a huge problem yet, but we’re going to find that kanban — compared to TOC DBR and buffer management, which use the computer a little bit instead of completely walking away from the computer — is going to keep adding one annoying and troublesome little inflexibility and inefficiency after another if we ever want or need to change things.  ,/tfbg

For that one model, let’s call it transmission model 873, part number TRANS-873, we have 18 different parts, 41 total.  That means … first, let’s get a “canvas layer” image in our minds of how things are physically flowing in this Toyota transmission factory that’s using JIT kanban to manage production flow.  Let’s make it like our lawn mower factory.  Raw materials and purchased parts receiving and maybe stores … let’s keep the supplier side simple for now and have both RM and PP receiving and the storeroom at the left of the canvas.  Final assembly on the right.  Even further right, at the edge of the canvas, finished transmission stores and shipping.  We can deal with both receiving and shipping stores later, and turn them into kanban, but not right away.  So in the large middle, work centers to make the housings, gears, and shafts.  We’ll let bolts and gaskets be purchased parts.  Housings have castings or stampings for RM … let’s make it castings … gears have blocks of metal for RM, and shafts have steel rod for RM.  Left-to-right physical flow.  Cool?  ,/sk ,/ph&theFiddler ,/ss.cc  Great.

On the left side of the final assembly work center, there are 18 sets of 2 bins to hold parts.  2 bins for each of the 18 different parts.  am i going to be able to make this work?  should be able to.  the whole idea Dr Ohno of toyota had was that it should be simple and visually obvious.  at any given time, the workers in the final assembly work center cell are taking parts from one of the two bins, with the other bin sitting there full, if it’s back yet from getting more parts.  when they have emptied the bin for gear number 4 for transmission TRANS-873, that’s part identifier GEAR-873-04, somebody takes the empty bin back to the receiving storeroom … hm … if they’re sending the bin back, why do they need the kanban card? … oh.  maybe because a bin is just a bin and something’s got to have the part identifier and quantity on it … i’ll assume that’s the role of the kanban card for the moment … so the empty bin and the kanban card that says, “GEAR-873-04, quantity 5,” are taken back to receiving stores to release 5 more raw material steel blocks to be machined into gears …

now here, with what I’m assuming so far is the right understanding of the nature and role of the kanban card — part identifier and replenishment quantity — we start to see another limitation of kanban-style JIT.  why 5?  why not 2?  or 12?  or 22?  why not zero, if the 5 GEAR-873-04 parts that are back at final assembly, already in that other full bin, will do fine for the 2 transmission model TRANS-873 units we have orders for from the car assembly factory in the near future?  They won’t need any more TRANS-873 for a little while because of the mix of car models they’re building over there in the car assembly factory.  So we already have 3 more GEAR-873-04 than we need anytime soon, and our bin and kanban card have already gone back to receiving stores to start making 5 more.

Doesn’t it seem like kanban two-bin just-in-time has difficulty keeping inventory low when there are changes in the mix of demand?  Unless somebody tells the parts person to stop, there will be 8 more units of GEAR-873-04 on the shop floor than are needed anytime soon.  For that cell to work on another transmission model, it will need to get 18 more sets of 2 bins for the parts needed for the next model.  Meanwhile, what’s left over in the 873 bins, and what’s being made in response to empty bins and kanban cards going back to release more raw material … all of that is material that should never have been released and worked on … so changes from pure repetitive to changing models can be a problem …
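One way to see the inflexibility is to reduce the fixed-quantity two-bin rule to a few lines.  The numbers are the GEAR-873-04 numbers from the story; the function and its names are mine, just for illustration.  The rule releases the bin quantity no matter what near-term demand says.

```python
def two_bin_release(bin_qty, still_on_floor, near_term_demand):
    """Fixed-quantity kanban replenishment: an empty bin always triggers a
    release of bin_qty, whether or not near-term demand justifies it."""
    released = bin_qty                    # the card says "make bin_qty more"
    on_floor_after = still_on_floor + released
    excess = on_floor_after - near_term_demand
    return released, on_floor_after, excess

# 5 still in the full bin at final assembly, 2 needed anytime soon:
released, on_floor, excess = two_bin_release(5, 5, 2)
print(released, on_floor, excess)         # 5 released, 10 on the floor, 8 beyond near-term need
```

A release decision tied to the order line items, the way the Haystack batches are, would have released nothing here.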

Bet you a buck (a dollar) that Toyota now uses TOC drum buffer rope instead of kanban JIT.  If they don’t, they probably should do it now.  ,/pn

Far-fetched?  I don’t think so.  I’ll make up some more things that might be true for the Toyota Production System in its circa 70s kanban JIT mode.  The final assembly cell next to the cell dedicated to building TRANS-873 is the final assembly for building TRANS-901 which are needed at a pretty steady rate by the downstream car assembly factory.  So it’s no problem when the kanban card and bin go back to start the production of 5 more GEAR-901-003 gears.

notice we have to keep a kanban card — two kanban cards, one for each bin — for each different part.  how many parts do you have to have before keeping track of all the kanban cards starts to become a bit of a hassle?  compare that to printing out shop paper for GEAR-901-003 when it’s needed, in the quantity it’s needed, tossing the paper when they’re done, and only printing another when it’s time to make some more that are definitely needed.  ok, now i’ve pissed off the recyclers.  great.  but there’s a solution.  reusable bar code cards probably solve that.  at release, put 12 parts in the bin, scan the little reusable bar code card so the computer knows which … we don’t have to solve this now, although i think i just did … but leaving aside my concern about the recyclers … : ) … kidding … the point for the moment is keeping track of kanban cards when parts stop being used for a while and getting the right ones back out when the parts get used again … see it? … see the problem? … if it’s very much a “repetitive environment,” where you can count on the continuing demand for and use of a part, then the kanban card just keeps going back and forth with the bin, no problem … unless somebody loses one … but if demand falls off for a part you’ve had in two-bin status, you have unnecessary finished parts inventory … even if demand doesn’t fall off, you’ve got those two bins of inventory …

i have made a mistake here that makes just in time kanban look better than it really is … i sent that GEAR-873-04 kanban card and bin all the way back to receiving/releasing stores to release raw material … that’s more like DBR … i forgot kanban JIT is a “pull system” that has kanban and 2-bin apparatus between all the work centers.  i remember now Bob Fox saying JIT puts inventory throughout the operation.  and it does, by having a kanban 2-bin system between final assembly and the machining work center … getting tired …

wow, and having two-bin kanban for each part between each pair of work centers means a lot more kanban cards and a lot more parts in bins, and having that throughout the plant makes it REALLY tough to deal with demand for parts becoming non-repetitive, non-fairly-constant, and mix-changing.  people want to know why we TOC folk say DBR gives much more flexibility than JIT while still reducing leadtimes and WIP?  well, the flexibility thing has become pretty clear.  I hadn’t really worked it through like this before, and had focused on other inflexibilities, but this nails it.

you also can’t print special instructions on a premade kanban card like you can on shop paper from the text notes in a computer … we’re going to find a lot of little things the computer lets us do that a pre-printed and plastic-covered kanban card with a part number and a fixed quantity isn’t going to do for us … doesn’t that make sense? … if you haven’t been shouted at for years by visual systems consultants who tell you computers are bad and wrong, it’s obvious to you that using computers at least a little bit is probably going to be helpful in a factory environment with so much to keep track of … there will be a lot more of these little things and then some really big things …

Remember, I’m making this all up.  It’s not necessarily really what Toyota did when it became famous for the TPS Just In Time system, but what we’re building is probably close.

 

at receiving stores (stores means storeroom, btw),

Back to The Duel in the Old Midwest

At those auto oil seal factories, there were assembly stations, “presses”, that pressed the rubber part of the oil seal to the metal part.  maybe with heat that worked the rubber and the metal  so they bonded.  i don’t remember any discussion of glue.

so each press … and there were quite a few of them as i recall.  if we set up a left-to-right view in our minds of the physical flow: at left, behind a wall, was a rubber mixing room, with pouring into molds, for the rubber circles.  along the top of the rectangle was a tooling area and some admin, I think.  far right was the big stamping machines, with, i think, both receiving and shipping and stores further right beyond stamping.  that leaves the lower middle for all the press/assembly stations.  somehow i’m thinking there was something before the presses, not sure what that would be, but i’ll assume some other op working on either the rubber or the steel part from stamping.  fairly simple routing for any one final product part.  rubber was batch mix, time per batch, not time per part.  stamping had maybe a 30 min setup and then sort of a time per batch, as several rings were stamped out together, sort of a hybrid time-per-part/batch op.  the assembly/press op and the op before press (can’t imagine what op would be needed, but i can still see a few views from outside the rubber mix room wall there that i think were views past/thru some work stations to the area with all the presses beyond) were (i’ll assume) time per part and fairly low setup.  the tooling/jigs/fixtures were for stamping and the presses.  they wore out.  that’s why there was a sizable tool-making area.  we were tempted to take a somewhat unusual step and declare tooling the drum/constraint/pacingResource, but we didn’t get past first working on the more straightforward step of using some of the assembly presses as drum/pacing/constraint resources.  stamping, as i recall, was a clear non-constraint, and there was no interest/intuitive/orLogical reason to think of rubber mix as the constraint.

#######################################################

NOTES
#######################################################
Note 1

If Dave and Marie and Patricia were here, they’d smirk and say, “That sounds like one of your CM/TOC systems workshops, Tom.”  And I’d have to put on a bemused expression — part-smiling and part-chagrined — and say, “You’re right.  When you’re right, you’re right.”

[Skip the rest of this note if you’re not very much interested in some personal musings about being practiced and polished and structured vs. being, let’s call it “flexible” and “responsive to questions” and “inventive” while delivering TOC systems courses on behalf of a professional society in the late 90s.  These comments are pretty much off the point of “inventory systems,” but touch on some interesting issues in other areas. ]

Dave used to tell me my workshops always got both highest marks and praise and “just ok”/mediocre marks on the feedback sheets.  I understood that.  Both marks were correct.  Every time I stood up in front of a group and started out with a basic game plan for covering the 4 days of material, somebody would ask a question.  For a while, I could bring myself to just answer it with the standard professional trainer’s strategy of politely answering it a little bit and saying, “we’ll get to that in more detail in the afternoon of Day 3.”  But, truth be told, while I had slides enough for the entire time, I had not practiced giving and timing the courses like I really should have.  And, pretty soon, somebody in the class who already knew something about TOC and had tried to use it would ask an excellent more-advanced question about some basic principle I was trying to cover, and — off I went — and, what would happen is, as I was answering that more advanced question, I would be coming up with new ways of expressing old stuff, and new little and medium and (in my eyes) big new solutions right there on the spot, and I’d have to grab my notepad to make notes of them (which was really not necessary, because, truth is, once verbalized, the newly-invented stuff comes back more readily all the time after initial invention/creation/construction and verbalization).  And I noticed that, if I let questions run a substantial part of the show, the energy level stayed high, and all the basics got covered faster than usual, and over and over again from different basic and advanced directions.  So the flow of the course became cycles of (1) me starting into wherever I left off in the planned material, saying, “oh i guess we covered that in the questions already”, very quickly restating the slide, reading quickly through slides until we got to something the Q&A hadn’t covered, starting to teach that, (2) up comes another really good question, (3) A & Q & A & Q & A until we all felt a natural end/closure had come to that part of the discussion, (4) back to step (1), repeat, repeat, repeat.  TOC beginners and TOC veterans (some with successes, some not yet successful) would be asking the initial and follow-up questions.  I was aware both levels were in the room and made the discussion run at both levels at the same time.  Since that seemed to be working, and since I was busy doing other things, I rationalized to myself that it was ok not to place sufficient priority at that point on practicing and polishing a standard course … plus I also perceived at that time, right or wrong, that those verbalizations from all those angles were useful, putting me in an ever-better position to lead/guide all the various stuff I was leading/guiding/writing.  On the other hand, some people who come to standard Apics courses come sent by their companies for the particular purpose of taking the slides back to their companies and training everybody there themselves.  And, while they had my slides and understood a LOT of stuff they’d never know from somebody more practiced and polished but less knowledgeable and experienced, they didn’t feel they got the usual thing they expected from an Apics course.
And, finally, some people were just accustomed to professional trainer presenter skills and room management skills, and I knew what that looked like from my corporate days — I even developed training and documentation materials that professional trainers used within AT&T when I worked there (for something unrelated to TOC, that was 1984, 5 years before I became aware of TOC, Goldratt, or even Apics) — but I was a man in a hurry and never really settled down with a subset of the subject and practiced it and polished it and figured out where the jokes should be and where the breaks should be and anticipated which questions to deflect, etc, like the professional trainers did.  I just said, Let’s go to work and think about this together.  If you’ve been reading this or the previous page, you have a sense for how, for better or for worse, those courses went.  And, depending on your taste, you might love it or hate it.  But at least here it’s free, right?  On balance, by the way, I think, in those Apics CM SIG TOC 4 day systems courses, ideally, I should have worked at breaking the either/or and created more organized and polished presentations that matched the usual expectations as to form, but that still ran at both beginner and advanced levels and let me deliver known basics while verbalizing new stuff.  That’s in retrospect.  At the time, I was just always happily in a hurry.

Oh, you know, there is one more thing.  If Lisa were here, she’d remind me that I was ragging constantly on her and every other TOC person who would listen to try to get her or my pal Gordon at deVry, or my pal, Bob Stein, or other TOC people to develop Apics-owned CM SIG 1, 2, and 4 day TOC workshops, because that was one of the things in my CM SIG proposal and business plan to the Apics board in august/fall 1994, and so that Apics could make more money on TOC while spreading the TOC knowledge, with nice standard student manuals and instructor guides, but there was concern all over a lot of the TOC community that getting Apics into the game of TOC education, and suddenly having large numbers of people teaching Apics-owned TOC courses would hurt existing TOC practices.  But my original point, Jim Cox’s point, John Blackstone’s, and a bunch of others was that having Apics running Apics-owned TOC courses, and having every consultant and systems house making money on having legit TOC in their consulting and education service mix was how we were going to know TOC had become standard practice.  And that it shouldn’t hurt existing TOC folks since the market, the “pie” as we said at the time, would get bigger.  So, that’s part of what was going on with me winging it a bit with the Apics CM Sig 4-day systems course.  I was pretty darn busy already, but “developed” the course anyway, made it happen anyway (had it all in my head anyway, made the slides on powerpoint overnight listening to alanis morisette jlp cd, ran to 24-hour kinkos to print ’em, 3 hole punch ’em, put them in binders and make overheads), and that’s part of what was going on with those courses too.  Ran it for Apics I think three times, once at the Wilmerding facility (where I lost my voice half way through and had to whisper into a microphone and drink lemon juice and honey the rest of the time : ), once in-house at a company site in utah, and once at a cm sig symposium I think maybe 98 in maybe seattle.  The other thing about running a course for Apics is Apics makes most of the money which, strategically, made a lot of sense to and for me, but not to and for a lot of other TOC and even non-TOC people.  The same issue existed on the CPIM/CIRM side of the body of knowledge.  
What I wanted was for the guys and gals, for whom being somewhat-low-but-still-pretty-reasonably-paid Apics cirm/cpim instructors already worked (that structure didn’t work for everybody, but was right for some, like anything else in life … and apics staff made the mailings, accepted and confirmed the registrations, collected the money, dealt with cancellations and refunds, gave people/students suggested hotels and helped with reservations, prepared enough copies of the student guides and brought them to the workshop site [if the guy developing and delivering the course wasn’t making them himself at kinko’s the night before], greeted people and signed them in and gave them name tags, arranged for the snacks and lunch, arranged for the classroom and the overhead projector, flipcharts, little pens for the overheads, and markers for the flipcharts … if you do this yourself, and i did, it’s a lot … so being an apics instructor didn’t work for all consultants, professors, and systems professionals, but it worked for some, and I was going to make it work for me, and the strategic/industry impact of having Apics running Apics-owned TOC courses as part of its education program and body of knowledge WAS HUGE HUGE HUGE …), and for new instructors, to also become qualified to run Apics-owned cm sig TOC courses.  and also to do it myself as one business line in a flexible independent consultant’s mix of consulting and education service lines.  anyway, a lot of cool stuff was going on at the time.  ,/ls,ra ,/cc ,/d&m&p.selim

One-hour paper presentations were a little different.  I didn’t let those go off quite so serendipitously.  And, even if I had an off day verbally, which I did sometimes, the paper was there to make the essential arguments.  I’m a “people person” and love the whole gamut of people-related activities involved in being an effective change agent, but, also as an effective change agent, I knew that getting the professional papers, the so-called “professional literature”, regarding TOC established with the right information was essential to TOC becoming a global standard practice that everybody would come someday to take for granted as, oh yeah, sure, that’s the way it’s done.

But I’m remembering now a perspective I came to view as very interesting and useful and — for people new to applying TOC to any aspect of life — reassuring and empowering.  It also prevents having the use of the TOC thinking processes become a drudge for people who aren’t accustomed to tying themselves down to analytical procedures.  It’s also a way for people accustomed to analytical procedure to give themselves more flexibility and fun in the analytical work. [note 47]  (At the end of the 90s, I felt I needed to write three more TOC books.  One of them was my take on the nature and use of the TOC thinking processes.  That would have made for a real party with me and Eli and Tracey and Dale and Lisa and Dettmer joyfully and lovingly throwing those grey foam-rubber rocks the greystone gives out as promotional items at microsoft developer conferences back and forth at each other about all sorts of procedural, experiential, and philosophical things.  Yeah, man!).

Anyway, this idea was to use the image of an impressionist pointillist artist creating a painting on a blank canvas as a metaphor for the process of creating a solution.  The canvas can represent the essence of the logic of a knowledge area, or, if in logic tree format, just one of the logic trees, or a full set of trees.

Anyway, a pointillist impressionist painting can be made by scanning like TV or CRT screen does, line by line.  But it can also be created by placing a point of color here, then there, then over in that spot, then having a cluster of points back up top, and a point down left, then a cluster of points center right, and so forth.  The first way, linear orderly fashion, you don’t begin to get a sense for the overall picture or current reality or possible future reality or implementation or Apics CM/TOC course contents … hi, dave and … until you’ve gone completely over the canvas … the second way, the overall picture starts to come into view more quickly … but i’m not pushing the “more quickly”, it’s not always clear it’s more quickly … but I do like to emphasize that, because we’re using the TOC intrinsic order principle, TOC’s expression of the idea from physics and science that there’s often a pattern in nature that’s already there, invisibly waiting to be found, that you can start pretty much anywhere and you’ll get to everything else that matters just by paying attention to the associated issues and following them out … each keeps taking you back to the others … i’m tempted to further verbalize why that works when it works, when it works, when it might not, how cause-effect and CLR and importance scale and system definition and goal presumptions might play a role in my experience that the pointillist impressionist painting principle is valid, but i’ll just put a lid on this yet another interesting digression right here … yep, sez dave and , that’s what happened in those courses …
#######################################

Notes 2-45

There are no notes 2-45.  I was just messing around when I started with notes 46 and 47 and then pulled some stuff out of the main flow and made it Note 1.

Note 46

[On another unrelated subject.  Save this until I find a place where it fits.]  I guess Paul N was right.  It’s sometimes MUCH better to do things a little at a time. ; ) ,/pn  re:  foot and 1/2 of snow and no end in sight.
#######################################

Note 47

This way, if you’re working on the current reality tree and somebody gets a flash of an element of the solution, rather than squash the person for being out of sequence, just go with the person and put the idea over as one of the ideas to consider.  Oh, it’s another judgement call … that’s what we say when there are different things that can be most important in a situation … in this situation, we have (a) keeping the motivation and energy high naturally by letting ideas come to people when they come, running with them a little, then placing them in their spot on the canvas, or at least on a notepad … vs. (b) the benefit (maybe) of having people just stay with the entity or link under discussion on the tree they’re working on … i REALLY should have written that Tom’s Take on the Nature and Use of TOC logic tree processes …

Note 50

I was wondering — as the “How does JIT work” section above was getting long, as I was having moments where I realized I wasn’t reconstructing JIT correctly yet, was fixing that, etc — wondering if I maybe should move the whole thing out of the main flow and into a note.  As i started to verbalize, both for myself and for readers, why I had been certain since probably 1989, for a few specific reasons (not just because Bob and Eli said so, though that proved pretty quickly and over time to be sufficient and reliable reason to accept something for the moment until having time to think it through for myself), that DBR and Haystack with a little bit (simple dbr) or moderate amount (haystack) of computing in scheduling and only a little bit in basic shop control (material releases and constraint batch sequences and time buffer arrivals) were a LOT better for strategic flexibility, rational/profitable “global vs. local” effects of seemingly-small local decisions, and ROI in a wide range of circumstances than overly-visual (shop control of dbr is simple and visual) and overly-computer-phobic (toc and dbr and even haystack keep the computer out where it shouldn’t be involved, and use the computer where it does make sense) JIT — but wasn’t sure right away about, and had to verbalize/derive, how it must have worked at Toyota and other repetitive mfg kanban JIT success stories.  ,/ptak

I’d never implemented a JIT line.  Still haven’t.

I had never heard of MRP or Apics or Oliver Wight until a  moment in late 1988 when I was doing an online search with what was pretty much the only online search system available at the time, a newspaper and magazine article database called, “Lexis/Nexis.”  I had decided in June of that year to start a systems integration consulting practice, but hadn’t, until later in the year, picked an “application market” or “industry” to focus on.  One doesn’t need to focus on application/industry to do well.  Computer Associates and Oracle and Microsoft chose to focus on a slice/market/niche of technology, respectively, internal mainframe utilities, relational database systems, and operating systems.  I thought about focusing on a tiny tech slice, but, with all the interest in US seeming to lose manufacturing base, and the thought I had about maybe someday buying a manufacturing company, “turning it around,” and going forward in that way, I decided the initial systems integration consulting independent business would head in the direction of manufacturing.  So I was doing a Lexis/Nexis search to learn how to spell, “manufacturing.”

I hadn’t ever worked in a factory.  Still haven’t.

In 1988, I was a systems-savvy management consultant, educator, and implementation support kind of guy with an “improve performance a LOT by using computers in intelligent ways (don’t just use computers to carve your existing processes into stone)” story to tell potential clients.  I had done that within corporations I’d worked for and thought I should do it with a little more flexibility and for a little more money for clients as an independent business guy.  By March of 1989, I had a hunch that the book, The Goal, was pointing to a better way than the already-pretty-good not-quite-yet-fully-verbalized way I had of finding breakthrough solutions within complex organizational contexts.  By June 1989, I was a part of the Eli Goldratt team of folks figuring out how to put Eli’s expression of the physics method to work in manufacturing and other areas of life.

But I had never worked a day of my life as a factory employee of any kind.

I’d worked for 15 years in the military and in large corporations that did some manufacturing, but I had never worked in a factory.  Actually, two of those 15 years were getting an MBA.  The various jobs and the business case studies provided the baseline “general manager” viewpoint and skills I spoke of a lot.

I got into TOC manufacturing-related computer systems a bit accidentally.  I got into systems consulting and systems integration on purpose (with jobs I took and how I approached them and things I studied on the side during 1980 through 1988), but only decided to pick an industry segment to focus on a little later (late 88) and that was because … and only became aware of TOC in 1Q89 and became focused on TOC by June/July 89 … and was happy to see that focusing on TOC wasn’t just a niche in manufacturing systems, manufacturing operations management, manufacturing company management, but also

Note 51

There is no Note 51, but there might be if I decide to move that verbalization about how JIT works.


Note 52

Timeline Stuff

What was the timing of the big worry over “the US losing its manufacturing base to Japan”?  Of “Japanese methods” being all the rage?  Of “quality” and “W. Edwards Deming” and “Joseph Juran” (no, not Duran Duran)?  Of media coverage of “Toyota Production System” and “Just in Time” and the “Shigeo Shingo Award” and all that essentially logical and often useful, but also too-often dangerously over-applied manufacturing improvement “best practice” vs. “cause and effect thinking” politically-correct crusade stuff?

Data point, but not an early one in all of that.  The book out of the MIT community that discussed the Toyota Production System as “the machine that changed the world.”  No question that it did.  But also no question in my mind that, compared to doing the same thing with DBR, it did it with unnecessary side effects on strategic and operational flexibility for adapting to change and on related ROI in different business circumstances.  But DBR hadn’t been invented when Dr Ohno made the bold and brilliant break from the MRP systems that were giving his company more immediate headaches.

The book:  The Machine That Changed the World: The Story of Lean Production, by James P. Womack, Daniel T. Jones, and Daniel Roos (ISBN 9780060974176).

oh, copyright 1990.  I was wrong about it being not early.  Actually, I’m right about it, since the action happened in the 70s and 80s, but i was thinking this book came out later in the 90s.  i’m sure i cited it in my “TOC and Lean Manufacturing” and “TOC and Agile Manufacturing” Apics papers somewhere in the 95,6,7,8 timeframe.  That’s probably why I was thinking it came out later.  I can picture having the book in a place I lived in 92 … anyway, 1990 it is.

But all that Toyota, Just in Time, losing manufacturing to japan stuff was in the press before that … well, heck, amazon lets us read books online … let’s poke around in this one and see how it places the timing … that’s the kind of thing that should be in the inside cover, preface, or introduction …

http://www.amazon.com/Machine-That-Changed-World-Production/dp/0060974176

http://en.wikipedia.org/wiki/Taiichi_Ohno

Here are some dates.  Dr. Ohno died in 1990 and “Taiichi Ohno, Shigeo Shingo and Eiji Toyoda developed the system between 1948 and 1975.” [1] … Eli Goldratt, who became active in manufacturing starting around 1975, would probably have been aware of Ohno’s work.  And Ohno could have been aware of Goldratt’s OPT principle and cost accounting and local/global verbalization principles.  It would be like Goldratt, part of his method and contribution, to look at something formed partly-intuitively, partly-logically, by a genius like Dr Ohno, and Goldratt would be able to verbalize the “meta logic” of what was done.  And it would be like the practical genius, Dr Ohno, to see the inflexibility and other shortcomings of his historic breakthrough and, either from knowing of Goldratt’s verbalizations/inventions/concepts or finding/inventing the same Mother Nature stuff himself, approve and maybe guide the adjustment of the original toyota just in time production system into something more like a TOC DBR.  The wiki article on TPS reminds us the TPS was not just justInTime in a factory but also lots of policies about people, suppliers, and customers.  A good phd paper for somebody:  study post-1990-or-so TOC vs. 2011 TOC, note similarities and differences (TOC was pretty much worked out in the early 90s) … then compare both to the original TPS and to what toyota’s doing now … that might be one phd thesis, or two, or four …

http://en.wikipedia.org/wiki/Toyota_Production_System

Before i get started, I should say that I never read the book much.  Scanned it a bit.  But all the “lean” folks at the time were talking a simplistic “fire and lay off people” “cut costs” “zero inventories” … and if they all weren’t saying it, the leadership wasn’t saying “right sizing” “capacity” “constraints” “get rid of allocation based costs” loud enough or clearly enough to get the right message through for industry consensus … all of the other approaches made sense sometimes, but were marketed as if they should dominate thinking and action all the time … that’s MRP, MRP II, JIT, TQM, “lean”, what else, oh activity-based costing ABC and activity-based management ABM, those are the big noisy ones that come to mind … oh, process improvement … that was another paper I wrote, “TOC and Process Improvement” oh, and “TOC and Reengineering” … wow, I wrote a lot of papers … suffice to say, as all those papers and my book and all the TOC folks were saying, use TOC with any or all of them, and everything works better and with fewer side effects of simplistic application of an otherwise good idea …

Note 53

Save for later, find a place where it fits.  Ok, we’ll put it here as Note 53.

Well, at that point, as we say in the American idiom, a metaphor, “the bloom was off the rose” [ regarding some 1990s Haystack enthusiasts who didn’t have Bob Vornlocker and Gary Smith and Gene Makl helping them do the custom programming needed to implement Haystack systems in the world before database suppliers provided support programs for creating and maintaining the perpetual Schedule-Based Decision Support data set memory object, the pSBDSds ]


Note 54

I often use the idea of what “Mother Nature” intended for manufacturers when speaking of various aspects of TOC and the solutions in The Haystack Syndrome. : ) ,/ss.eg

I use that to say the TOC solutions are not just additional practical solutions that make unnecessary compromises, and not just impractical ideal or idealistic solutions, but solutions that are the intrinsically best ideal practical solutions for the large, important, and complex class of situations called manufacturing companies.

I use that to share my experience of TOC as a reflection of the intrinsic best thinking process that allows one to find the simplest and best and most natural solutions within complex systems and that Haystack is yet another one of those collections of intrinsically best, right, and natural solutions (in this case, within manufacturing).

And, yes, I am very aware that Mother Nature needed a little help from Brother and Sister Man and Womankind when it came to creating both computers and manufacturing companies.  ; )

Note 55

Note 54 was about Mother Nature.  This is about Father Time.

This note is a tribute to my friend and Apics mentor, Peter Langford, who made all the difference.

Note 56

TOC accounting wiz, Dr. John Caspari, saved and posted some online discussions of “Theory of Constraints:  The Infrastructure of Agile Manufacturing” that went on back in 1996.  Some guy with email name, TBMcM2, posted some comments and an excerpt from a pretty good conference paper:

http://casparija.home.comcast.net/~casparija/dweb/l76.htm


Note 57

John Caspari’s Essay, “Full Costing,” and Price Decisions

I just saw this 1998 essay post, “Of Farm Animals and abc,” by my TOC pal, Dr. John Caspari.  I’m thrilled to see from several of his comments that he liked the Detective Columbo skit from Chapter 6 in my book a little bit.  : )  On the web page linked below, John’s essay is about 4/5 of the way down the page after Tony Rizzo’s essay (yet another jewel from Mr Rizzo) and several questions and comments. : ) ♥

the essay page:

http://casparija.home.comcast.net/~casparija/dweb/l118.htm

john’s site: http://casparija.home.comcast.net/~casparija/

oops … hold the phone here … do i have to take on my pal, john, here?  oh dear … i have to read his essay again … i love that he refers to my Detective Columbo allocation-based product costing skit, with minimal explanation about it, as if all his readers have already read or should already have read my book and know what it is. : )  my book came out in hard cover in april 98 and john published his essay in september 98 … and i love that he picked up on my TVA finesse on the TOC term, “throughput.”  But, oh dear, tell me when i go back and read the essay more carefully that john’s not speaking favorably about “full-cost” accounting being necessary or even advisable as guidance for setting prices … actually, let me think about that again before i read it again and before i go into enthusiastic allocation-bashing mode again … : ) …

… ok … time for a re-think of allocation-based “full costing” of products and the relationship to pricing of products … for those new to the issue, TOC uses totally variable costs (usually just materials, but also any other costs that vary with volume such as many outside processing services like electroplating, and specifically not direct labor) … factory overhead can be allocated on some basis, but when factory overhead and division and group and corporate overhead are allocated to products, that’s called, “full cost.”
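a quick illustrative sketch, for anybody who likes to see the arithmetic as well as the words … all the figures below are made up (the price, materials, plating, labor hours, overhead pool, and the direct-labor-hour allocation basis are assumptions for illustration only, not anybody’s real numbers), and it’s python only because python is handy … it just puts one product’s TVC and TVA next to the kind of “full cost” you get once a share of overhead is allocated on a labor-hour basis …

```python
# Tiny illustration with made-up numbers (every figure here is an assumption, not real data).
# TOC view: TVC = only the costs that vary with volume (materials, outside processing; not direct labor).
# "Full cost" view: TVC + direct labor + an allocated share of overhead (labor-hour basis assumed here).

price = 100.00            # selling price per unit (assumed)
materials = 32.00         # materials per unit, totally variable (assumed)
outside_plating = 3.00    # outside processing per unit, totally variable (assumed)
dl_hours_per_unit = 0.50  # direct labor hours per unit (assumed)
labor_rate = 20.00        # dollars per direct labor hour (assumed)

overhead_pool = 400_000.00  # factory + division + corporate overhead to be allocated (assumed)
total_dl_hours = 25_000.00  # plant-wide direct labor hours assumed for the period

tvc = materials + outside_plating     # TOC totally variable cost per unit
tva = price - tvc                     # TVA ("throughput") per unit

allocation_rate = overhead_pool / total_dl_hours   # $/DL hour; only valid at the assumed volume/mix
full_cost = tvc + dl_hours_per_unit * (labor_rate + allocation_rate)

print(f"TVC per unit:          {tvc:7.2f}")
print(f"TVA per unit:          {tva:7.2f}")
print(f"Allocation rate:       {allocation_rate:7.2f} per DL hour (depends entirely on assumed volume)")
print(f"'Full cost' per unit:  {full_cost:7.2f}")
```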

I used to like to use the phrase, “fully-allocated product costs,” as in, “Companies can easily, and should, begin using TOC’s TVA-I-OE decision basis to replace fully-allocated product costs for making manufacturing company decisions.”

that’s making decisions vs. corporate, GAAP, and tax reporting which have their own sets of rules that often interfere with making good decisions

but those reports always come out better when TVA-I-OE is used to make better decisions to get more TVA cash money in the door.

Somebody’s wondering if “always look better” is right.  Not sure.  It’s definitely, “almost always.”  If we want to look for exceptions, for the range of validity of the statement, ok, that’s always a good thing to do with statements.  We need to find circumstances where getting more TVA cash flow money in the door either (1) looks bad in the corporate, GAAP (external financial accounting), or tax reports or (2) is bad for the company, or (3) both.  When is having more cash flowing into the business bad for reports or bad for the business, or both?  Let’s use our imaginations.  Maybe some company has a special loan or grant or subsidy that only applies if their reported profits are less than some number in a year.  Making too much money would show up in the reports and knock them out of being qualified for the support before they could rework things to another basis.  I know that’s wild, but, hey, that’s part of my point.  How often can tax rules, GAAP rules, or corporate reporting rules make a company worse off if it uses TVA-I-OE (which includes and assumes dealing with capacity and constraints intelligently to assume only mixes of product sold that can realistically be built by the factory) to get more TVA throughput cash flowing into the company?

Regarding the issue of “full cost.”  The first comments that come immediately to mind are the terms and pricing strategies known well to marketing people and company strategic planners:  “loss leader” priced products and “value priced” products, neither of which are based on costs at any of the totally-variable, factory overhead allocated, or full cost levels.  Many prices are set simply by “what the market will bear,” be that nice and high or too low.  That’s the first comment.

The second comment is that some business, especially with some defense/military customers, is “cost plus,” where the rules of what “cost” means are either in federal regulations or negotiated in the sales contracts.  Here’s one place where some sort of procedure that probably includes using direct labor and some sort of agreed-upon allocation formula does apply.  BUT THIS IS NOT MAKING DECISIONS ON PRICING.  THIS IS FOLLOWING RULES ON PRICING.

So back to DECISIONS on pricing.

Third point:  Goldratt and Fox’s TOC point, and Detective Columbo’s point, is that we should use “throughput,” what I call, “TVA,” without allocations for decisions.  It was decisions that led to point 1 and the “loss leaders” and “value-priced” products.

[  not sure i’m happy with this next paragraph … need to come back to it … helped me refresh my memory on allocation and pricing … but may need revising …

What that leaves is whether “full costing” of products provides any sort of pressure, influence, or guidance to help us ensure we don’t under-price and go out of business by having prices too low.  I want to say, yes, it forces managers to keep prices high, but … ok, no … i just “saw”, refreshed my memory, that the amount of allocation changes as volume of overall unit sales goes up or down and when mix changes.  Actually, I’m not sure about that … let me see … if mix changes so that one product goes to zero units, the other products have to pick up that product’s share of overhead … per-unit allocation rates on all products definitely go up if overall unit volume goes down (the same overhead gets spread over fewer units) … but i’m not sure about whether individual rates on individual products change if only the mix changes within the same total unit volume … the allocation rate per unit might stay the same … (there’s a tiny numerical sketch just below) … but, the thing is, what i’m exploring is whether a confusion factor will be small or large … gaah … i don’t like this paragraph …]
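since i’m not going to settle that in prose tonight, here’s a tiny numerical sketch (all figures invented, and a direct-labor-hour allocation basis is just an assumption for illustration) that shows both effects: the per-unit allocations go up when total volume drops, and they also shift when only the mix changes, because the two products use different labor hours per unit …

```python
# Made-up two-product example; a direct-labor-hour allocation basis is assumed.
overhead = 100_000.0             # overhead pool to allocate (assumed, held fixed across scenarios)
dl_hours = {"A": 1.0, "B": 2.0}  # direct labor hours per unit of each product (assumed)

def per_unit_allocation(units):
    """Spread the overhead pool over the direct labor hours implied by a unit mix."""
    total_hours = sum(dl_hours[p] * q for p, q in units.items())
    rate = overhead / total_hours                      # dollars per DL hour
    return {p: rate * dl_hours[p] for p in units}, rate

scenarios = {
    "base mix, 10,000 total units":    {"A": 5000, "B": 5000},
    "volume down, 8,000 total units":  {"A": 4000, "B": 4000},
    "same 10,000 units, mix shifted":  {"A": 7000, "B": 3000},
}

for name, units in scenarios.items():
    alloc, rate = per_unit_allocation(units)
    print(f"{name}: rate {rate:5.2f}/hr, A carries {alloc['A']:5.2f}/unit, B carries {alloc['B']:6.2f}/unit")
```

… so, at least with these invented figures on a labor-hour basis, the per-unit “full cost” numbers move around even though nothing about physically making one unit of A or B changed … which is the “arbitrary” point …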

hm … after the quick and easy-to-express “loss leader,” “value pricing”, and “cost plus” comments, i’m getting drawn back into the rest of the allocation quagmire … one set of distinctions coming clear is that i’m not now asking whether “full cost” is necessary to pricing — Detective Columbo proved, and I know, it’s not necessary — but whether “full costing” can be helpful or provide safety against mistakes or, on the other side, whether it increases the possibility of making unnecessary mistakes … what i know is, the full cost allocations are not necessary for pricing decisions because, in a TVA-I-OE decision scenario workup, levels of sales of all products in a period — some priced rationally in the market at or above “full cost” and some priced rationally in the market at or below “full cost” — either produce enough TVA/throughput/cashInflow, or not, to cover OE and other things Net Profit needs to contribute to (tax, saving cash for new machines or debt retirement, desired short- or long-term returns on capital and to owners).  It’s the purpose of the TOC throughput TVA analysis to make plans that use available capacity to sell and produce mixes of products that generate the required TVA.
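here’s a minimal sketch of that kind of TVA-I-OE workup, again with invented products, prices, TVCs, volumes, and requirements (and assuming the capacity/constraint check on whether the factory can actually build the mix is done elsewhere): add up the TVA the planned mix would generate and compare it to OE plus everything else net profit has to contribute to … no allocation to any individual product anywhere …

```python
# Minimal TVA-I-OE scenario check with invented figures (capacity check assumed done separately).
products = {
    # name: (price per unit, totally variable cost per unit, planned units for the period)
    "G": (120.0, 45.0, 3000),
    "H": ( 80.0, 50.0, 5000),   # maybe priced "below full cost" in somebody's allocation scheme
}

oe = 250_000.0                 # operating expense for the period (assumed)
other_requirements = 60_000.0  # taxes, machine replacement, debt retirement, owner returns (assumed)

total_tva = sum((price - tvc) * units for price, tvc, units in products.values())
needed = oe + other_requirements

print(f"Total TVA for the planned mix: {total_tva:,.2f}")
print(f"OE plus other requirements:    {needed:,.2f}")
print("enough TVA" if total_tva >= needed else "falls short; rework prices, mix, or capacity use")
```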

The only argument I’ve found even a little bit persuasive is that showing operating managers the “full costs” gets them mad about overhead and creates a political pressure to keep overhead costs from increasing too easily by not saying “no” to enough things in the corporate office.  Though I agree overhead creeps up and that is dangerous, I think using the arbitrary fiction of allocation-based product costs as an imprecise emotional pressure is a cop-out that encourages sloppy thinking.  The culture should discourage wasteful overhead expense and should encourage keeping prices at rational levels.  What’s a rational level?  It depends on a lot things that fall into the knowledge area of “marketing” or “strategic marketing” or “strategic planning” which includes things like whether you have a “brand” that lets you price high regardless of competition, or if somebody else has the “brand” and you have to price less until you create a reason customers will or have to pay more, whether you’re trying to prevent/pre-empt people from entering your market, whether you have to sell a few things at a “loss” under “full” or even under “TVC” “cost” to have the complete array of components/pieces in a product line (you can’t decide, for example, not to have a 1/4 inch wrench if you sell a “full toolbox of wrenches” because somebody, on any basis, thinks it’s too expensive to make the 1/4 inch wrench.  you drop the 1/4 inch wrench from the box and nobody buys your boxes of tools anymore.  they buy somebody else’s box of tools because they’re willing to let the 1/4 inch wrench be a “loss leader” … actually, my point is right, but my term isn’t exactly right … “loss leader” is the term when you sell the first item, or the attracting item, the “lead” item, at a loss in order to get the customer started into your product so you can sell him/her the rest of the line that makes up for the “loss” on the “leader”.  that can be “loss” on just “fictional full cost” or even on more lossy “totally variable costs” basis), if price is too high then existing companies will jump in with copycat products, or new companies will form to do that, or somebody will substitute their plastic version for your metal product in the profitable part of your market where you set price too high, and these sorts of things Michael Porter of Harvard Business School first concisely and comprehensively verbalized in his book of around 80 or 81, competitive strategy and the follow-on books that gave more detail …

full-absorption costing, oh, that’s another confusing accounting name — based on a backwards-thinking concept of “absorbing” overhead, forget it now that i’ve said it, but know what it is because people still use the term — for “full cost” or “fully-allocated product costs” … these allocation-based costs obscure natural “granularity” or levels of aggregation or levels of detail, and can get in the way of making realistic capacity-based cash-flow-affecting decisions in strategic and marketing situations … you have to recalculate allocation rates for every change in overall sales volume and mix … THINK ABOUT THAT … THINK ABOUT THAT … by contrast, TVA-I-OE lets you change the overall sales volume and mix, in units or dollars, many many times for many different major scenarios and many different little sub-scenarios without changing the stupid arbitrary-anyway allocation rates each time … THAT ALONE SAYS DON’T USE SO-CALLED “FULL COSTS” FOR DECISIONS …


xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TOC Full Costing is born … ah, that little riff just exposed another opportunity to change terminology to be even more effective … the idea that costs are only “full” if allocation is used is part of the problem … so … ahem … henceforth, I shall call TOC’s TVC, totally variable costs, plus the OE line items, the … drum roll, please, professor caspari … TOC Full Cost … so there … yet another great little gem … we stop fighting “full costing,” because it sounds like we don’t want to take everything into account … we get fully on board as always having been champions of “full costing,” which, in reality, we were (only by the dumb outdated definition of ‘full costing’ were we not), and we just say, hey, over here in toc land, we’re just replacing the outdated World War I-era concept of “full costing” with 20th and 21st century TOC Full Costing.  Our TOC Detective Columbo chart has Fully-and-Logically-Revenued [hey, another new term! does that make us “revenuers”?  that’s an american idiom joke.  who were the revenuers?  prohibition cops?  tax guys?  whatever …] (logically, meaning realistic given constraints and correct cause-effect thinking in price decisions) figures.

 


#############################################

Important TOC Community Intellectual Property Statement

“Intellectual Property” is the modern legal term for copyrights, trademarks, service marks, proprietary information, trade secrets, and that sort of stuff.

So here’s an important announcement:

So we have TOC Detective Columbo [tm] Fully (and Logically) Revenued [tm] and Fully (and Logically) Costed [tm] TVA-I-OE [sm] decision scenario procedures and financial projections charts.  Why not?  or Whyn’t?, as my pal, Dutch, would, among other prescient things, say.  Oh, and those are [tm] trademark and [sm] service mark of the TOC community which is anybody who likes or understands TOC, or even both.  🙂  What?  Why some tm and others sm?  I don’t know.  Just wanted to use both to set up the joke.

And full credit to my TOC pal, John Caspari, without whose nice acknowledgement of my overall TOC book in general, of my little TVA invention in particular, and of the basic value of my Detective Columbo skit’s message about the uselessness of allocations to products for decisions — and without whose raising of the always-pretty-much-a-sticky-wicket (sports reference from England, as in the UK, sport = cricket.  wait, actually, that might be croquet, for pete’s sake.  does cricket even have wickets?  it does, the wicket is the thing the bowler aims at, so, ok, cricket reference after all, from the wry understated perky idiosyncratic upper class British idiom.  glad we got that cleared up …) and usually-oversimplified and usually-taken-for-granted subject of how to make decisions on pricing — these new TOC inventions and this remarkable and historic TOC intellectual property announcement would not have been possible here today on jan 16, 2011 at 5 pm chilly northeast usa time.  bravo and thank you, john!

############################################

So we have TOC cause-and-effect strategic pricing, TOC Sales – TVC = TVA, TVC and OE are TOC Full Costing, and we have the whole picture and go off the defensive on the stupid “full costing” issue … : ) … yep, it’s fun being back in the 90s again for a while …

[this next part was written early in the life of this note/essay.  a lot got written into the middle above this the day after … just in case anybody’s made it to this part of the page and is wondering wtf re: flow and sequence of points being raised and addressed]

i thought a re-think on “full cost” and pricing would go more quickly than this … instead of trying to reach closure on it, let’s go back and see what the essay’s saying …

“tom mcmullen’s detective columbo and his chorus” : )  cute, john.

“… some responders suggested tony tone down his language [re: entrails and allocation-based product costs] … but nobody gave examples one way or other …”  : )

oh, nice.  i missed this on the first scan, “in tom mcmullen’s excellent summary of toc …”  yay!

oh too bad … john went into rebuttal and got it wrong.  oh well.  pleased to be mentioned anyway.  thanks, john.

ok, here’s where my pal, john, goes wrong back in 1998:

“As a more responsive answer to Detective Columbo’s question, and with respect to management (internal) uses of allocations, McFarland (Note 3) observed that “Cost accounting originated primarily to develop product costs for pricing . . .”

[Ok, that’s just a fancy way of saying, “that’s what we’ve always done.”]

It’s a shame that apparently neither Detective Columbo, nor any of his group, had the opportunity to intrude on a management accounting class at one of the nation’s influential universities.

[that’s clever]

At the Harvard University Graduate School of Business Administration, for example,

[that’s even more clever, although evil … my pal, john’s busting my balls here, and he knows darn well he’s doing it, because he knows i’m a harvard mba.  very nice. he even drags out the full formal spelling of the name of the school.  not just, “harvard business school, ” that whole thing.  vicious.  deadly.  funny. wish i’d seen this back then …]

he might have encountered Professor Anthony (Note 4)

[now john’s really playing rough … : ) … mentioning famous HBS professors by name.  brutal. he’s matching the comedic level of the Columbo skit.  i’m in deep trouble here. lol. ]

saying, “Managers normally arrive at the selling price of a product on the basis of its full cost.

[ok, that they normally do something doesn’t necessarily make it right, or, moving out of “right/wrong” thinking into “less or more valid” thinking, that they normally do something doesn’t necessarily make it best at description and/or at creating the most beneficial effects and avoiding the most potential negative effects].

Full cost includes a fair return on capital employed. . . .

[ well, only if enough volume of the right mixes of products are sold, and only if those mixes can be made given the actual factory capacity, which is already the situation before allocations are used to change from the physical reality of “totally variable costs” to create the largely-arbitrary fictional figures of allocation-based “full costs” of products … Detective Columbo’s analytical method shows whether TVA/throughput figures (“direct margins”) for products at assumed/planned volumes/mixes cover OE and everything else, including capital charges, that management and owners are requiring of the business … in other words, adding the allocations of overhead to convert TOC TVC and TVA figures to “full (fully-allocated) costs” (1) takes more internal thrashing and arguing (because product line managers want less overhead on their own and more on the other guy’s products and people within the product marketing and factories know certain parts are necessary to have in the product line regardless of what overhead allocation makes them look like profit-wise so they’re lobbying and politicking to save the needed parts from “full costing”) over what the allocation rates should be, (2) creates more work by activity-oriented accountants that looks useful in this allocation context but is really only useful in the context of “activity analysis” or what TOC would call “cause and effect analysis of the effect of potential decisions on the OE line items,” (3) creates non-natural non-unavoidable “cost” levels that look precise but are not precise or unvarying depending on assumptions (work through an allocation example some day and see for your self how many arbitrary simplifying assumptions are needed to get overhead costs allocated at some specific rate to various products.  i have and everybody who’s unconvinced about my “arbitrary” claim should work through one), where was i in that sentence?, (4) if eli goldratt were here i’m pretty sure he’d say something like having those fictional “full costs” that look precise and have the accounting department’s air and tone of priestly authority invites simplistic thinking that if the price is above “full cost” that’s good enough when true strategic and tactical cause-and-effect thinking would say higher or lower mix of prices now in year 1, then different higher or lower later in year 2, and then different higher and lower in year 4, and (5) all of this and, as Detective Columbo says, we already had a complete picture from setting prices with the right thought processes to create the unit TVA figures in the product/geo market segments.  In short, compared to TOC TVA Detective Columbo analysis (that assumes using constraints know-how to verify realistic buildable product mix), allocation-based “full costing” is more trouble, more work, no incremental benefit, and risks not supporting/provoking/causing  and, in fact, risks discouraging serious savvy strategic and tactical competitive and customer-based thinking about pricing of the company’s full product line. 
In other words:  Don’t create “full costs.”  Or, if you do because it sort of feels odd to be “afraid of them”, to be avoiding them (it no longer feels odd to me, because I’ve gotten used to thinking about pricing the right way, and about capacity-realistic-product-mix the right way, and then showing the complete picture with Detective Columbo’s chart, but it will feel odd to some people for a while), to have them be sort of an unspoken unhealthy “taboo” that’s “politically incorrect” to suggest even looking at, then, ok, roll ’em up, roll up (calculate) those costs, using one set of arbitrary assumptions, then another to see the difference it makes, or at least have an attachment that details the process of arriving at the allocation rates (in other words, rather than enforce having people not like “full costs”, give them the facts and let “full costs” just die off at their own pace in people’s minds), but, in the meantime, still make sure everybody knows very clearly that some people, including Detective Columbo and me, and probably Eli and Bob and many others in the TOC community, are always somewhere in the wings pounding the table about “full costing” being an unnecessary and potentially dangerous fallacy, even for pricing.  Treat “full costing” just like Bob Anthony and other accounting experts taught me in mba school, and taught lots of others via their books, to treat handling “direct margin analysis” and “variable margin analysis” in the old non-TOC non-DetectiveColumbo way: as appealing but dangerous (the TOC TVA way of handling direct/variable margin analysis is NOT dangerous, it’s right), so “full costing” should be taught as seemingly-logical at first, but misleading and dangerous]  empirical evidence supports the premise that prices tend to be based on full costs.” [what’s that fancy language mean?  could be just saying once again, “that’s the way we always do it,” or that “there’s some correlation data that says prices tend to be higher than full costs, which, of course, they would be if they’re covering OE and meeting other profit requirements”]

This view, which is widely accepted by the 1950s, is built on a foundation originating in World War I, experimented with during the 1920s, and cast into American national policy in the 1930s. (Note 5)

[oh, dear … is saying “the practice started in 1914” supposed to be a way of saying “it’s the right thing to do?”  ouch.  even if not that, it’s definitely another way of saying, “that’s how we’ve always done it.”]

Thus, we see that product pricing is an important purpose of cost allocation.”

[ um … who’s “we,” kemo sabe?  (that’s a famous line from the American tv show, The Lone Ranger). you may see that.  somebody else reading this may see that.  but i definitely don’t see that.

Quoting big names like McFarland and Anthony proves nothing except that big names can be dead wrong sometimes and lead entire generations of people astray.

Revolutions in Accounting? Hey, even Ptolemy, the “sun revolves around the earth” guy, was a big name until Copernicus came along with the “ridiculous” and “completely counter-intuitive” idea that “the earth revolves around the sun.”  Yeah, I know, that’s possibly stretching a little to make a point, but you can’t deny that the Harvard Business School’s MBA program revolves around Dr. Robert Anthony’s Essentials of Accounting. : )  Yee-haa.  Made that work … Anyway, Anthony’s Essentials — in its, what, 184th revised edition? — was a required text when I went to Harvard Business School in 1979-81 and, yes sir, “full costing” was one of the things taught when I was there, along with “standard costs” and “actual costs” and “absorption costing” and “cost allocations” and end-of-period “mix variance analysis” and “price variance analysis” and “volume variance analysis” and “cost variance analysis” and — not only that — “direct cost” and “direct margin analysis” (also called, i think, “variable cost” and “variable margin analysis”) was taught, correctly, as a dangerous fallacy that could, if used incorrectly, lead to failing to cover overhead, or failing to fund future replacements of machines, or failing to cover other things that the money flow TOC calls “TVA/throughput” (Sales – TVC = TVA, and TVA – OE = TOC net profit) needs to cover, and cripple the business or even put it out of business.  But TOC throughput TVA projections are NOT the same thing at all as the famous “direct” or “variable” “cost” “fallacy” because we pay attention to directing marketing and sales efforts to get short- and medium- and long-term mixes of products at whatever prices, in combination, and using our capacity well to do it.  Detective Columbo’s chart on the previous page and his argument say allocations aren’t needed.

Everybody in the TOC game should do 3 allocation projects and 3 variance analysis projects.

Let’s make this Note 59

So I’m back to “full costs” are definitely not necessary.  I don’t think they’re helpful either.  The allocations take time and are arbitrary.  Why confuse oneself with arbitrary numbers when the natural numbers are there, both on an individual basis and on an aggregate full-company mix for a planning/accounting period?

There are two potential justifications I can see for giving “full cost” figures to managers, but they’re both only helpful if the managers are weak.  If they’re smart, reasonably tough, disciplined, and have negotiating ability, then they don’t have to blink when the powerful customer leans on them and says, “hey, you’re making a lot of profit margin on my product and you know it.”  a reasonably strong president, general manager, sales manager, or trained salesperson can be aware of the throughput/tva margin, and still, without knowing/seeing an arbitrary inflated “full cost” number and say, “with all due respect, not really.  we appreciate your business very much and are pleased to offer you these products at these prices which we think are fair.  we’ve invested a lot in product development and continue to do that to serve you well and …”  to say that and hold fast, the company needs to know its strategic position (harvard michael porter stuff, not harvard robert anthony and warren mcfarland stuff, things like what are competitors giving for prices, and are the competitors pricing irrationally too low and likely to go out of business, and you have to know whether you’ve established “switching costs” and what level of price premium you can keep and not have a customer switch, or maybe you’re pricing lower with higher quality to starve your cash-strapped competitor … I’M PRETTY SURE IT’S TRUE, NEED TO COME BACK TO THIS, THAT THE IMPORTANT THINGS IN PRICING DECISIONS — SINCE YOU KNOW WHERE TOTAL TVA VS TOTAL OE STAND FOR A PLANNING/ACCOUNTING PERIOD –HAVE, NOT NOTHING, BUT COMPARATIVELY LITTLE TO DO WITH ANY INFORMATION THAT ARBITRARY ALLOCATIONS ADD TO TRULY VARIABLE COSTS) … [i wrote the above boldface comment in a prior paragraph the day after I wrote this SHOUTING comment in this paragraph, so if this comment seems to re-open the door to “full costing” of products being a good idea, it doesn’t.  having not re-thought it for a while, i wasn’t as sure last night as I am today.  but, now that i’ve both refreshed old cause-effect argument and found even a little bit more important stuff (like discouraging proper process for thinking about pricing in strategic, industry, and competitive context and i have one more new one that i think eg would say if he were here, and it’s one of his what i call “leadership points” …

or “leadership statements”, where, sure, one can argue some issues many ways, but sometimes leaders need to highlight the simple concise expression of a principle that draws everything else, including all the confused arguing, in the right direction without getting into all of the analysis, and you get into and back out of all the analysis to know what the right or best “leadership statement” is, but it’s because you can’t often, except for Kent Moore Cabinets who got everybody trained in TOC thinking processes, get everybody to do the full analysis quickly enough to do things and go places together fast enough ….

… [and it’s one of his what i call “leadership points] that i’m adding right now … which is … showing employees the idea that a company-wide pool of TVA, not an individual product unit TVA margin or unit full cost margin, a company-wide pool of TVA is needed to meet OE and other business net profit requirements encourages the right simple company wide thinking, and knowing that the combination of all the prices have to be held high enough in general to get that TVA from the products they make encourages quality thinking and not just cost thinking, and it encourages thrilling the customer thinking and thinking leading to product and process changes that will help keep prices high, none of which getting comfortable with prices that are above pretty much arbitrary … oh and i just remembered something else about the procedure for calculating allocation rates … it has to assume a volume of product anyway … to see that, consider if all the overhead had to be allocated to one unit of product … then two, then a thousand units … big difference … that’s one of the reasons why “standard” allocation-based product costs are arbitrary … and here’s another i’m now remembering … if the allocation-based cost comes out higher than the prevailing competitive price, what’s the price going to be?  well, you say, drop that product … but what if there’s no other product in the short term to make and sell?  dropping the product moves the company from a slight loss to completely out of business, bankrupt, no cash flow, gone … and i’m remembering yet another perspective … it’s not, by a long shot, only goldratt and fox and people like me who have thought about what they’ve been saying about TOC and throughput and my TVA who think prices aren’t set based on cost … ask any marketing professor or professional, or even and especially any uneducated self-taught marketing wiz, about how pricing works, about what thought process should be used to set prices and a lot of these issues just written here will pop out … but, maybe getting a little too carried away with the tone here … because pretty much all accounting experts would also say cost work, including “full cost”, is not the only factor to use in pricing … after all, it may have been the accountants who gave us the concepts of “loss-leader” and “value pricing” … hm … that was probably an early generation of marketing theorists, like Harvard’s Ted Levitt, inventing concepts to free companies from early generations of accounting experts … but that’s “early generations of experts” … these days, with every functional area expected to know something about the other functional specialty department area’s work, and all of them expected to take a more company-wide vs local/departmental view … both the accountants and the marketers and also the ops and IT like karen alber at qkr/ppsi/heinz would agree we need to know at least something about cost, maybe not “full allocation cost”, and need to support smart approaches to pricing … so john knows a lot of that stuff, probably all of it now, and probably most of it even back then in 98, and he was sending a diplomatic and good-humored and clever “leadership statement” … heck, he probably anticipated that, a dozen years later, that i’d discover his essay on the internet and write this stuff … he was, as the american golf players say, “tee-ing me up,” to make all these points … now that’s Socratic method! 
… ok, that’s good, coming back off the high-energy allocation-bashing fun joyride and back into, “ok point’s probably made” mode … back into level flight … : ) …]

… and when it comes to keeping overhead down, even without seeing scary ‘full cost’ numbers, the management culture can be oriented around activity analysis (aka, TOC’s “cause and effect analysis of the OE line items and associated activities” to keep the overhead staffing and costs “right sized/lean” vs “fat” vs “undersized/too-lean”).

hm … day after … and after a lot of day-after verbalizing out (in american idiom, we sometimes use the cute phrase, “teasing out”) additional issues … without yet going back to read and maybe revise … the word-streaming above is maybe a little overly-strong in tone in some places? … maybe need to revise a bit? … i’m not sure, with these not-so-great eyes these days, that i can unwind all those parentheticals within parentheticals … so they might just all stay there … with me chiding them a bit here after the fact … i’m pretty sure that each of the points made with each cluster of words was pretty useful and clear as a standalone point, even if the overall flow and repeating stuff and sequence might be a little clunky here and there … hey, it’s free …  and, if anybody got this far, they must’ve found it useful or entertaining, or both …

well, anyway … concerning finding john’s essay on the internet yesterday …

i was and am thrilled to see two of my fav little goodies, TVA and Detective Columbo, getting mentioned in the literature.  thanks, john.  : ) ♥

Note 58

One paragraph in the previous note almost got revised to become:

“oops … hold the phone here … do i have to take up toc cause-and-effect light-sabres and engage here with my pal, john, in a bit of what amounts to some CLR dueling about an invisible intuitive TOC explanation logic tree concerning “full costing” and pricing decisions? oh dear …”

Yes, that makes this another footnote to a footnote.  Like in my book.  Which, if I’m not mistaken, was widely-acclaimed at the time as being, “Yet another remarkable innovation from McMullen Associates.”


Note 59

Homework Assignment:  Everybody in the TOC game should do 3 allocation projects and 3 variance analysis projects.

(I’m remembering now, once I got the production Apics-owned CM SIG TOC workshops in shape for apics, that a lot of people could and would want to teach, i was going to build a TOC financial system workshop, 4 to 4 1/2 days too, that I would teach, fewer could teach this one, that had a lot of stuff from these two pages including stuff like the following … That was back when Apics was positioning itself in both the CPIM and CIRM market spaces … these days … I noticed the other day that Apics re-located its headquarters to Chicago, which always had that huge chapter, changed its slogan from “educational society for resource management” to something like “the society for operations management,” and dropped the sort of mini-MBA-like CIRM certification … so, anyway, given Apics’ repositioning that happened sometime in the last 10-12 years, these days, I’d put a 1 day, maybe 2 day course on financial stuff affecting ops, or maybe better to sprinkle some of the right stuff into other Apics non-Toc and Apics TOC courses, and have the big kahuna “solves all the arguments” TOC financial course over as a week-long course for the IMA, the Institute of Management Accountants, or still better, joint apics/ima somehow, to share mail marketing and revenues.  these days, i’d also be at least thinking about maybe nagging FASB (Financial Accounting Standards Board) to let me run a serious TOC workshop for them that brought TOC in from external GAAP, generally accepted accounting principles, for shareholder, CFO level, audit committee, boardmember attendees … bring it in from toc financials and let all the boardmembers get a sense for ops, for how to think and see ops when they’re making plant tours and stuff instead of just doing PR and sight-seeing when they visit … anyway, this next little essay about 6 homework assignments every TOC person should somehow create the opportunity to work through for themselves would have been part of an Apics course I was talking to Patricia, Dave, Marie, and Lisa and the other Apics CM Sig committee members about from time to time … oh and also to Bob Stein … )

Incidentally, about end-of-accounting-period “variance analysis” … elsewhere on this page, I suggested that, if you haven’t been through the process of making the required mostly-arbitrary assumptions needed to allocate overhead line items to products — using any allocation “basis”, be it direct labor hours (traditionally the most common) or activity-based (newly, since the 90s, much more common, though i bet direct labor basis is still used by people who know allocation is nonsense, but still have to allocate somehow, not for decisions, but for external bank, shareholder, and tax reporting) — then you should do it at least once so you just plain know from doing it vs just hearing TOC people like me going on and on and on about it, but you should also have the joy of, at least once again, working through end-of-period variance analysis … get clear about the Detective Columbo skit, chart, and message, then do one each of setting up and doing allocation-based costing on any basis … if it were me, i’d do one of each, direct labor basis and one activity basis … i’m sure harvard business school has a case study or a series of cases that let you do exactly that, that show how that’s done … and, by now, if they don’t have a three-part series of cases on direct labor-based allocation, activity-based allocation, and TOC-style no-allocation that are required in the first-year management control systems course, then shame on you, HBS … and you know I love ya, HBS …anyway, somehow, some way, if you want to be quietly or noisily invincible in this allocation and product costing area, do six things with paper and pencil or maybe using some simple spreadsheets … paper and pencil’s best, so you can re-do it for yourself and others anytime on restaurant and airplane napkins … not kidding about that … many a sale of important product, service, or concept’s been made with just a few lovely little napkins and a ball-point pen … more than a few by somebody i know very well and who can’t decide whether to capitalize or be EE Cummings when the notes really get to rolling … anyway, do allocations to product costs, from design assumptions through getting the full costs for all three of direct labor-based allocation, activity-based allocation, and toc style no allocation and use the costs in a simple five year chart like the Detective Columbo chart (but, this exercise, don’t bother with the stuff below net profit line, that’s another thought process) … then for direct labor, activity, and toc style full costing … see that?  … three types of “full costing” … i love it … that works! 
… for all three types of full costing, use or make up some actuals somehow, if cases don't exist, get caspari, tim sullivan, charlene spoede, eric noreen, larry shoemaker, tony rizzo, charlie from rotron (where, after harvard, i got some additional experience with variance with my pal there, steve), mark allen eg&g and guy and nice other guy there with us, working with robin cooper or robert kaplan, to build some simple, but complete examples, and work through all the end of period variances … hm … now having said all that, i'm remembering that i think the allocations are causing some sort of extra work and hassle unnecessarily in variance analysis, but i may have to think it through again to remember why i thought that and to see if that thought i had was right or not … it's occurring to me that part of variance analysis isn't going to go away, people will still want to know why profit and cash flow and inventory, stuff that needs to get reported and is intrinsically important anyway, didn't come out as planned in the period … both identifying variances and cause-effect on why as part of early-warning troubleshooting of potential problems in meeting the business plan … but i need to come back and think again about why having allocations might make the necessary, useful, but kind of time consuming end of period variance analysis worse than it has to be … every month or every quarter and definitely end of year … we'll let that line of thinking pick up in Note 60.

Note 59-a

#rightnow – 21jan2011 – just after midnite

Simplifying Assumptions for Overhead Allocations and their effects on “Full Costs” (the discussion that became the “Nuggets Incorporated” Case Study)

I was just reading Note 59 and wondered what the fastest and most persuasive way would be to show how even activity-based allocations of overhead to product costs require arbitrary simplifying assumptions that produce arbitrary, potentially confusing, and potentially harmful variations in what appear to be precise product cost figures.

I also thought, well, the first Detective Columbo skit in my book demonstrates definitively that allocations on any basis to form allocation-based product costs are not necessary or even helpful (in the sense of providing any incremental benefit once the 5-yr TV-I-I-OE-Cash view is constructed), but that’s not necessarily quick and that’s not demonstrating the negatives.  (The “allocations not necessary” point can be made pretty quickly on napkins over drinks, with or without the Detective Columbo setup.  Just say, ok, here’s TVA I OE for a base case.  What don’t I know that I need?  Here’s another one for an incremental case.  Cause-and-effect analysis has been done in both cases to form the OE line.  In both cases, the mix and volume are realistic given constraints both in the shop and in OE roles like customer service engineering.  What don’t I know already that allocating overheads is going to tell me?  The answer is there’s nothing we don’t know that’s real.)

And, again to self, “… and, Self, I’m sure that allocating overheads even on an activity vs. direct labor basis is a bad idea, but I only have an imprecise memory of the simplifying assumptions that have to be made to allocate overheads on any basis, and I don’t have one of those quick back-breaking examples that nail it quickly … I wonder if there’s a quick way into it and out of it …”

How About Volume Effect?

Oh, right, here’s one quick way in.  If Lisa S were here, she’d say:  Volume.  And I think she’d be right.  Let’s see …

Spreading Overhead Across Volume = 1 Unit

Let's say we have a million dollars of overhead for the year and only sell one unit each year.  The materials cost is $9 per unit.  The outside electro-plating service is $3 per unit.  So the Totally-Variable Costs (TVC) are $12.  TVC plus allocated overhead of $1,000,000 gives us a "Full Cost," a "fully-allocated product cost" of $1,000,012.  Ok, John Caspari, what shall we do with this pricing decision?  : )  Yes, I love you too.  Bob Fox and Eli would love this example.  This may join P&Q as a core TOC example or, as Eli called them back in 1989, a Gedankenexperiment, or something like that, an experiment a physicist runs in her or his mind.

Change From Direct Labor to Activity Basis

But, wait, that was allocating on a direct labor basis.  Let’s change to an activity allocation basis.  By now Robin Cooper and Bob Kaplan would be laughing their lmao off, shaking their heads, and referring to the lot of us by using some friendly, but somewhat un-scholarly, terms and names … : )  But, ahem … back to business here … Ok, that’s materials cost of $9, outside services of $3, and overhead, allocated this time on an activity basis of $1,000,000, for a total “Full Cost” of $1,000,012.  Oh, no!  That totally disproves my idea that allocating costs on any basis gives rise to changing “full costs.”  And Robin and Bob stand over us, gloating.  Just kidding.

Spreading Overhead for Volume = 2 units per year

Materials $9.  Outside services $3.  TVC $12.  Overhead $1,000,000 divided by 2 units = Overhead rate of $500,000 per unit.  Full cost = $500,012.
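
Here's a quick Python sketch of that arithmetic, nothing fancy, using the same made-up numbers, just to make the volume effect explicit:

def full_cost_per_unit(annual_overhead, assumed_annual_units, tvc_per_unit):
    """'Fully-allocated product cost' when overhead is spread evenly over the units."""
    return tvc_per_unit + annual_overhead / assumed_annual_units

OVERHEAD = 1_000_000      # the year's overhead pool
TVC = 12                  # $9 materials + $3 outside electro-plating

for units in (1, 2):
    print(units, full_cost_per_unit(OVERHEAD, units, TVC))
    # 1 unit  -> 1,000,012.0
    # 2 units ->   500,012.0
# (The 200-unit case below switches to a direct labor basis, so it isn't just "overhead / units.")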

Next, 200 units

We’ve, of course, been partly joking with those silly low volumes, but the way the “full cost” varies there creates a useful impression.  And those weren’t true “full costs” because we had direct labor in the overhead pool.  But that’s beside the main point.

To take a step toward something more realistic, but still pretty far away from a real case, we'll bump up to 200 units, 100 units each of two different products, products G and H.  G uses three times as much direct labor as H.  Let's say G takes 3 hours, and H takes 1 hour.  What factory accountants do is take the forecast of total hours for the coming year, which here is 100 * 3 hours plus 100 * 1 hour to get 400 hours total for the year.  To allocate that to products, they divide the year's overhead of $1,000,000 by 400 hours to get an "overhead allocation rate" "on a direct labor basis" of $10,000 per direct labor hour.  So the "full costs" … let's make this a little more correct and add direct labor per unit as everybody except TOC people and companies do … let's say the cost of an hour of direct labor (there are assumptions here too to get direct labor per hour, but never mind those for now) has been found to be, let's make the math easy and pay these folks well, $100 per hour.  Now we can do "full cost" the right way:

Full cost for G = TVC $12 … plus 3 hours * $100/hr … plus 3 hours * $10,000 per hour overhead … = $30,312.

Full cost for H = TVC $12 … plus 1 hour * $100/hr … plus 1 hour * $10,000 per hour overhead … = $10,112.

Clearly, the unit volume and mix of products forecasted to be sold in the year make a big difference in the "full cost" of the products.  Meanwhile, the TOC money entity, TVC, stayed the same throughout the volume change from 1 to 2 to 200 units per year.

You might ask, why not include direct labor in TVC, totally variable costs? (aka, another of my little TOC terminology inventions with three, no four, names … "totally variable costs", "true variable costs", "TOC variable costs", or "throughput variable costs" … you be the judge) … the reason is direct labor isn't really variable.  the people are paid for a 40-hour week and aren't laid off, so the real money flow for the company for direct labor isn't truly units-variable.  the people have bathroom breaks, have training days, train new employees, visit suppliers, visit customers, give plant tours, consult with product and production engineering and marketing, have lunch and coffee, and so forth.  there's a lot of analysis of this with an entire vocabulary that i'm not going to introduce here, but, unless the work is 1930's style "piece work wages" or unless the company screws itself up by trying to make every paid hour an "on-machine productive hour" and by laying off and re-hiring for every nuance of demand change, then direct labor is NOT a variable cost.  Period.

Incidentally, if my GM union pal from Jonah conferences were here, I know he’d want to set a nice direct labor rate, so the thought of him inspired the $100 per hour loaded labor rate in the above calculations.  ,/gmup

Let's say these products sell for $2,000 for product G and $1,000 for product H.  So, in the year, if they produced and sold to the forecast that the allocation rates were based on:

Product G sold at $2,000 with "full cost" of $30,312 for a loss on each unit of $28,312, times 100 units for a loss on G of $2,831,200.

Product H sold at $1,000 with "full cost" of $10,112 for a loss on each unit of $9,112, times 100 units for a loss on H of $911,200.

That’s not right.  Loss should have been something like … oh wait, prices are so high maybe it’s … no … hm … strange … does that make the example bad or good … not sure … first better check my math … if my math is right, it may bring out some confusion factor issues in “standard costs” and variance analysis … or it may just be a goofy example … not sure yet … hm … i think there’s something wrong with my math … yeah, there it is … the allocation rate’s wrong … $1,000,000 divided by 400 hours is $2,500, not $10,000 …

Ok, let’s try that again …

"Standard" "Full cost" for G = TVC $12 … plus 3 hours * $100/hr … plus 3 hours * $2,500 per hour overhead … = $7,812.

"Standard" "Full cost" for H = TVC $12 … plus 1 hour * $100/hr … plus 1 hour * $2,500 per hour overhead … = $2,612.

And if they sold the forecast unit "standard volume" at "standard prices" of $2,000 and $1,000:

Product G sold at $2,000 with "standard" "full cost" of $7,812 for a loss on each unit of $5,812, times 100 units for a loss on G of $581,200.

Product H sold at $1,000 with "standard" "full cost" of $2,612 for a loss on each unit of $1,612, times 100 units for a loss on H of $161,200.

Ok, that’s better.  At least as far as getting the stupid math right.  But, hey, not so good profit-wise for our little factory built over a gold mine that lets employees go into that hole in the basement floor, pull out a few gold nuggets at zero raw materials cost, take them to the shop floor, work on them a little bit, add $9 of cost for a mounting or a frame, and $3 of outside service electro-plating for the mountings, which turns them into products G and H for sale at $2k and $1k.  Right?

Look at those losses on the two products!  Maybe the company shouldn’t make and sell those 200 units next year.  What do you think?  Isn’t it pretty clear, with losses like that on both products, that the company would be better off making and shipping nothing than to keep selling those products?  What do you think?  Hey, hurry up.  The next production year’s about to begin …

Year 1 Results

So how did our little manufacturing company — let's call it, Nuggets, Incorporated — do this past year?  Looks like a combined loss on the two products of $581,200 plus $161,200 for a total loss for the year of $742,400.  Ouch.  Big loss.
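
For anyone who wants to replay the corrected year 1 arithmetic, here's a small Python sketch using the same made-up numbers:

OVERHEAD_POOL = 1_000_000
DL_RATE = 100                      # assumed loaded direct labor $/hr
TVC = 12                           # $9 materials + $3 outside electro-plating

products = {                       # name: (forecast units, direct labor hours per unit, price)
    "G": (100, 3, 2_000),
    "H": (100, 1, 1_000),
}

total_dl_hours = sum(units * hours for units, hours, _ in products.values())   # 400
overhead_rate = OVERHEAD_POOL / total_dl_hours                                 # 2,500 per DL hour

year_total = 0
for name, (units, hours, price) in products.items():
    full_cost = TVC + hours * DL_RATE + hours * overhead_rate   # G: 7,812   H: 2,612
    line_result = (price - full_cost) * units                   # G: -581,200   H: -161,200
    year_total += line_result
    print(name, full_cost, line_result)

print("combined year 1 'loss':", year_total)                    # -742,400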

Drop the "Unprofitable" Products?

How should we think about products G and H?  They're creating the loss, aren't they?  What?  Drop them?  Ok.  Good decision.  Let's cut those losses right now.  And here we go with Year 2 of operations of Nuggets, Incorporated.

Year 2 Results

Ok, year 2 is over.  We had the same $1,000,000 of overhead, no production or sales of products G and H.  Sales of zero minus overhead expenses of $1,000,000 gives us a loss for year two of … what's this? … a loss of $1,000,000?  But that's larger than the loss we took in year one.  And we dropped those lousy dog products G and H that had "full costs" higher than their sales prices.  What happened?

What?  You say make and sell G and H again in year 3?  Oh man, but those products are unprofitable, aren’t they?

Let’s see what a TOC analysis would tell us. Ok …

Product G has unit sales price of $2,000 less TVC of $12 = TVA = $1,988

Product H has unit sales price of $1,000 less TVC of $12 = TVA = $988

TVA is the real money that comes from sales that is available to pay for the direct labor force, overhead, and other operating expenses (OE).

For the 200-unit year, the total TVA will be

Product G: 100 units times unit TVA of $1,988 for TVA total of $198,800

Product H:  100 units times unit TVA of $988 for TVA total of $98,800

Grand total TVA for both products for the year:  $297,600

My math's off a little bit somewhere.  This TVA figure plus the year 1 loss we calculated should add up to the million of overhead … unless it's off by the amount of the direct labor i added to show the old way it was done … let's see, at gm guy's $100 per hour times 100 units times 3 for G and 100 times 1 for H, 400 hours times 100 … $40,000 … if the number's off by $40k, it's the mixing of apples and oranges in a hastily-assembled example that's the problem, not the math … let's see … $297,600 plus $742,400 is $1,040,000 … (real loss is $702,400 … it shows that in non-TOC organization of cost factors, you not only have to allocate the non-direct-labor overhead, but you also have to make sure you take the direct labor costs out of overhead, budget/forecast direct labor production hours along with forecast units of products made and sold, in order to have the total overhead and total direct labor come out right after the allocations … compared to TOC's TVA vs. OE method of organizing and dealing with cost factors that's a LOT of unproductive "wheel-spinning" (American idiom referring to when a car's wheels spin impressively on snow or ice, but the car goes nowhere) with no benefit and a lot of potential confusion and mistakes …) … ok that was the problem … instructor's guide comment here:  i think this says, in the setup of the first year, break the overhead and direct labor pools of annual expense into separate pools and show how the calculated "loaded labor rate" swells in low volumes too … maybe … not sure … or, for shorter entry into main issues, just don't do like i did and "make it more correct" by adding a direct labor charge … so much for instructional design for the moment … for now, we'll just ignore that and go with the bigger point, the point that the two products contributed a LOT of money in year 1 and a simple TOC analysis would show that clearly … also it points out another problem with "full costing" and the old non-TOC way of dealing with direct labor as a variable cost … the overhead pool has to be allocated and so does the direct labor pool … i didn't get into the calculation … let's not get into productive, non-productive, budgeted labor … keep it at a high level right now … bottom line is the old way, the non-TOC way, is a quagmire for no good reason or benefit … but people are used to it and don't see how unnatural it is … until they see and work with the TOC way and then they can never go back … anyway, back to the story of that world-famous manufacturing enterprise, "Nuggets, Incorporated" …
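
And here's the same year 1 sketched the TOC way, with the reconciliation just described (the $1,000,000 OE pool here is taken to already include direct labor, which is where the $40,000 apples-and-oranges difference came from):

OE = 1_000_000                             # year 1 operating expense, direct labor included
TVC = 12

total_tva = (2_000 - TVC) * 100 + (1_000 - TVC) * 100    # 198,800 + 98,800 = 297,600
real_result = total_tva - OE                             # -702,400, the real year 1 loss
full_cost_result = -742_400                              # the "loss" from the allocation version
print(total_tva, real_result, real_result - full_cost_result)   # 297600 -702400 40000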

Wow!  It’s a good thing we made and shipped those 100 G and 100 H in year 1.  They brought almost $300,000 into our company!  And we decided to drop those products in year 2 because “full costs” made them look “unprofitable.”

“Nuggets, Incorporated” – The Exciting Case Study Continues …

There’s a lot more that could be done with our little “Nuggets, Incorporated” factory.  Feel free to use it in your courses and other places, TOC people.  You’ll need to add more detail.  For example, we need a company slogan.  Something catchy and stylish like:  “Nuggets, Incorporated: We put gold nuggets on mountings for you.” We’ll also need a company song like … what?  oh, ok.  you’re right.  we can take care of that later.

Obviously, our little company is a real gold mine! : )  But it seems to have a constraint in the market, probably not a physical constraint, but a sales or marketing policy (“thinking”) constraint.  Needs to grow some volume.  Maybe introduce some new products, although the current two are pretty good little TVA and cash flow generators, right?

We never got to a real look at an activity basis (vs.  direct labor basis) for allocation.  Maybe next time.  We got to some good things, though.

Enough for now.  1/21 – 0300

1/21 – 1423

What to do in Year 3?

So what about year 3?  You were saying sell products G and H again.  But what about those losses in year 1?  Doesn't "full costing" that allocates direct labor and overhead and adds it to totally-variable costs clearly show us G and H are unprofitable?

Cost-Based Pricing?

Oh, you're thinking the sales prices are too low.  Ok.  What new prices should we set for year three?  What's that?  We need a consultant?  Who's the best when it comes to making pricing decisions based on "full costs?"  Hm … well, if I remember correctly there was this good friend of mine who wrote an essay back in 1998 that was elegantly teasing and correcting me on the Detective Columbo part of my book about whether "full costing" is necessary, or even helpful, for making pricing decisions … yes, absolutely, let's get my pal, John Caspari, to help us here … ; ) …  I'll try to reach him, but, in the meantime, let's keep working on this year 3 plan …

So we need new prices.  What were those "full costs" again?  The ones based on allocating direct labor and overhead and adding them to totally-variable costs.

In year 1, Product G sold at $2,000 with "standard" "full cost" of $7,812 for a loss on each unit of $5,812, times 100 units for a loss on G of $581,200.

Cost $7,812.  Need to make a profit on each unit.  So let's set a price of $8,000 instead of the old $2,000.

Also in year 1, Product H sold at $1,000 with "standard" "full cost" of $2,612 for a loss on each unit of $1,612, times 100 units for a loss on H of $161,200.

Cost $2,612.  Need to make a profit on each unit.  So let’s set a price of $3,000 instead of the old $1,000.

Great!  Rock n roll.

Year 3 Results

Ok, we spent $1,000,000 on overhead including direct labor.  Spent $12 of totally-variable costs for materials and outside services and mounting for each of 100 units.  That's $1,200.  Unfortunately, no one was willing to buy products G and H at those new higher cost-based prices, so sales revenue was zero.  So our results for the three years were:

Year 1 – $702,400 loss selling 100 units each of G and H at prices of $2,000 and $1,000

Year 2 – $1,000,000 loss – we decided not to make or sell anything

Year 3 – $1,001,200 loss – we made 100 units of G and H combined, but no sales at prices of $8,000 and $3,000

There seem to be at least two ways to make zero sales.  One, don't make and sell anything and, two, price so high nobody will buy.

This case also seems to show that the only way to not make money even when selling gold you get for free is to use allocation-based full product costs for decisions, including decisions about pricing. What?  Oh.  You’re right.  Sorry.  Couldn’t resist.  It was just too perfect.

[update: of course, if robin cooper and bob kaplan were here, they’d probably say that what this case actually proves is that you have to be a company selling gold you get for free to be a successful TOC company … hm … oh, man … that’s rough … if they said something like that, i’d certainly never post about it here …] [1/26 6:25pm]

Change “Standard Volume” assumptions to get “Standard” “Full Costs” and Prices Low Enough that something will Sell?

Maybe the problem is the "standard volume" — the volume in units we're using to calculate "standard full costs" for products — the number of units we are forecasting to make and sell for the year.  We assumed 100 units of each product and that gave us an overhead allocation rate of $2,500 per hour of direct labor used on the product.  But if we forecasted, or just assumed, a "standard volume" of 200 units of each, we could get that overhead allocation rate down to $1,250 per hour.  Hey, why stop there?  If we assumed 1,000 units of each, we could get that rate down to $250 per hour and really get "full costs" and cost-based prices down to low levels, right?  Just by changing the number of units we're assuming as "standard volume," we can set the price pretty much anywhere we want it, right?
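
Here's that point as a few lines of Python: the overhead allocation rate is a pure function of whatever "standard volume" we decide to assume:

OVERHEAD_POOL = 1_000_000
HOURS_G, HOURS_H = 3, 1          # direct labor hours per unit

def overhead_rate(units_of_each):
    standard_hours = units_of_each * HOURS_G + units_of_each * HOURS_H
    return OVERHEAD_POOL / standard_hours

for units_of_each in (100, 200, 1_000):
    print(units_of_each, overhead_rate(units_of_each))
    # 100   -> 2,500.0 per direct labor hour
    # 200   -> 1,250.0 per direct labor hour
    # 1,000 ->   250.0 per direct labor hour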

Let's use 200 units of G and 200 units of H – that's 200 x 3 hours for G and 200 x 1 hour for H for a total of 800 hours in the year – divided into the $1,000,000 – gives an overhead allocation rate of $1,250 per direct labor hour.

that makes the full unit cost of G = $12 TVC + $3,750 allocated overhead +

hm … maybe i’m going to have to deal with this “apples vs. oranges” issue of combined (like TOC does) and separate (like non-TOC does) annual total overhead and annual total direct labor costs or else I’m going to keep getting differences in bottom-line results from the full standard costing system and the TOC system … due to double-counting the direct labor cost in the variable cost and in the overhead … didn’t want to get into that … but … maybe i need to anyway …

So that opens up a question:  What should the prices be?  How do we know that?  Well, we should know what competitors are charging for what product and service features.  We could consider the value to the customer and the role of price in the perception of that value.  We could consider the customer’s other options to create the same value or satisfaction for themselves.   There’s the general level of the economy and whether people have money to spend.  A lot of things.  And then pick the smart right prices for G and H … [by the way, where was “our cost” in all of that? … the customer doesn’t care what our costs are … it cares about what the product is to them, both in itself and compared to alternatives, and what the price is] … and then set forecasted/assumed volume for the year to the level that, when added to the totally-variable costs, gives us “full costs” at a level lower than those prices that make sense given customer and competitive and economic conditions, right?  But hold on a moment … that’s not using the “full costs” as the basis for deciding price.  That’s using price to decide what level of “standard volume” to assume to get allocated-overhead-per-direct-labor down low enough to make the prices we want appear as “profitable” to us, right?  And there’s no guarantee, as we saw in year 1, that setting lower prices and arbitrarily forcing “full costs” down will get enough sales and profits to cover overhead and create a real profit for the overall company.

So we're into two kinds of circular thinking here.  One is a good thing.  We can call it, "iterative," instead of "circular."  The other is a waste of time and causes confusion.  The first one is just part of the situation.  That's the part about going through iterations of product mix (in units) and unit prices for a known unit totally-variable cost (TVC) per product.  That one is just Mother Nature.  It's the natural level of flexibility and variation that is actually there in business planning to have money from sales pay for both units-sensitive costs (like TVC) and non-units-sensitive costs (like overhead and direct labor) and have money left over for profit, replacing machines, additional investments, etc.  The other is not Mother Nature.  It's man-made.  It's unnecessary extra steps and complexity added to the situation that provides no additional benefit.  It is adding — to each iteration of the real issues of mix, unit TVC, unit price for each combination of product and market, and volume — an extra series of calculation steps to determine the non-real value called "full cost."  Doing that for all the planning iterations and then dealing with it after the period in variance analysis is a waste of time.  That's why TOC doesn't do it.  That's why TOC uses the TVA and OE framework discussed in the first Detective Columbo skit (the first skit is in my book … the second one's on these blog pages) that develops and discusses this chart/table.

Digression:  Remembering/Reconstructing How Direct Labor is Handled in the non-TOC systems

I’ll put the digression into a Note 59-b.

So what’s all that mean about Year 4?

Freddie

Let’s try paul n’s plan … one guy, we’ll call him, Freddie … Freddie’s fully-crosstrained … he will do all the production processing for the 200 G and 200 H products built and sold for year 4 … we already know he somehow manufactures one unit of G in 3 hours and one unit of H in 1 hour.

Not for TOC planning and measurement economics, but for old-style variable costing, let's calculate the "unit direct labor cost" to have Freddie manufacture one unit of G and one unit of H.  This will show us the assumptions involved in calculating "direct labor rate" and "product unit variable direct labor cost."

Calculating “Direct Labor Rate” and “Unit Product Direct Labor Cost”

Our one factory production employee, Freddie, makes $60,000 per year before overtime.  We pay another 25% for benefits, that's $15,000, for a total cost of $75,000 per year.  He comes to work 40 hours per week for 49 weeks per year, gets 2 weeks of paid vacation, and 5 days of paid time off for sick leave or whatever else he wants to use the days for.  That's 49 weeks x 5 = 245 work days per year.  It's an 8am to 5pm workday, 9 hours with a half-hour for lunch and 15-minute breaks in mid-morning and mid-afternoon.  That means he's potentially available for work for 8 hours per day for 245 days.  That's 245 x 8 = 1,960 hours (200 days x 8 = 1,600, plus 40 days x 8 = 320, plus 5 days x 8 = 40).  But we're still not where we need to be yet to calculate "unit product direct labor cost" or "direct labor rate."

That 1,960 hours is the number of hours Freddie will be "on the job" (in the sense of "in the factory").  It's the number of hours Freddie will be "potentially available to work on products," but, realistically, factory accountants know that Freddie won't actually spend 1,960 hours working at a work center setting up the work center to work (standard and actual "setup time" for a processing step) on a product or actually working on a product (standard or actual "run-time" or "time-per-part").  Why?  Because there are meetings called by management, trips to the stockroom, drop-by meetings of Freddie with the payroll department, questions Freddie has for the engineering department, maintenance or down-time on the machines so Freddie can't use them, late parts sometimes so Freddie has to wait for them, training time, people taking tours, and, if Freddie weren't the only production employee, he'd be doing on-the-job training with new employees.  All of these are reasons why most factories keep track of when production employees actually "clock in" and "clock out" of being at work, and why they keep track of "actual setup times" and "actual run times" for all work.  They know that what I think is called "utilization" will never be 100% of those 1,960 hours, but they want to strike a smart balance of encouraging production employees to work in ways that are in the best interest of the company.  In most cases, that means trying to spend more time actually setting up for and making products.  But sometimes it means taking breaks and otherwise not being over-stressed in ways that create mistakes and scrapped parts and breaking machines, and also being a reasonably good place to work so the best skilled people don't go work somewhere else.

Anyway, that means, for purpose of calculating “direct labor rate” to make non-TOC non-TVA “unit product direct labor cost,” factory accountants forecast (aka, guess, estimate, assume) some % of those 1,960 hours will actually be “available for setting up and working on products.”

“Production vs. Non-Production Direct Labor Hours” and “Touch Time” Get the Nod as Better Term than “Productive vs Non-Productive”

(In previous discussions, like note 59-b below, this is what I think I've been calling "hours available in the period for production."  You can see the potential confusion factors over "hours available."  I forget what the official terms are for these various levels of "available hours":  (1) "hours available before taking out paid vacation days and paid days off", vs. (2) "hours available 'on the job' before taking out lunch and breaks" vs. (3) "hours an employee actually works as factory capacity on setups and run-times PLUS hours the employee goes to meetings and other stuff" vs. (4) "hours the employee actually works on setups and run-times on products." For our purposes here, I should give that number (4) a name.  Hm … the use we're putting it to is forecast not actual. So maybe call it, "forecast direct labor or production or productive hours" … that "productive" sort of rang a bell … that may be the official term … the problem with using "productive" is it makes it sound like going to continuous improvement meetings and visiting with customers and engineering etc. is unproductive.  Which, in a narrow sense, is true, but, in a wider sense, often isn't.  And the simplistic ways these measures have been used in the past is part of the big measurement and culture problem that TOC and JIT and TQM all played a role in fixing.  "Production hours" is better.  "Production vs. non-production hours" is more neutral and factual than "productive and unproductive," which gets immediately into forcing simplistic "good/bad" ideas where they aren't actually that simplistic.  So "forecast available production hours" is pretty good for using here for what we called "available hours" type (4) above.  It may even be the official accounting term.  Not sure.  But notice that (4) is the smallest type of "available hours" of (1), (2), (3), and (4).

“Touch Time” I’m remembering now that this definition of “available hours” is also sometimes called by the nickname, “touch time,” which makes it clear to everybody who uses it which and what kind of time’s being referred to, but isn’t literally correct.  The worker doesn’t necessarily touch the product or even have the machine or tools touching the product throughout the entire setup or run-time.  Still, it’s pretty close and is a good nickname to use to clarify what type of “available hours” are being referred to.)

So we know the company can’t expect more than 1,960 hours of actual production time or “touch time” in the coming year 4 from Freddie.  We know it will be less than 1,960 hours.  But how much less? We now have to estimate (guess or assume or forecast) how much lower than 1,960 hours we actually use as “capacity” and “touch time” and “actual production time” from Freddie.  And the “direct labor rate” and the “direct labor part of unit variable cost” will depend on that assumption.  In fact, notice that it will, by the time we’re finished, depend on all of the following:  (a) Freddie’s base pay, (b) Freddie’s benefits, (c) the paid vacation and days off and planned lunch and breaks Freddie’s entitled to, and (d) our estimate of how much of Freddie’s available time will be “production vs. non-production time.”  [Update:  there will be a fifth item soon that i hadn’t remembered yet when i first wrote this]

The TVC in TOC’s TVA calculation doesn’t do all of this.  The labor planning for needed skills and determining hours available to express as capacity still has to be done in TOC, but TOC doesn’t take the additional step of translating that into a fictional arbitrary and often-harmful addition of “unit direct/variable labor component of product cost.”  TVC, totally variable cost, the costs that truly vary only with units, is the right and natural level of aggregation of costs to organize thinking around.

Assume 80% – I think it's called "utilization" … actually I'm not at all sure that's what official accounting "utilization" means, so let's just say it's 80% of the 1,960 hours we derived above for Freddie.  So let's assume that data we've collected over the past three years says about 80% of Freddie's "type (3) available hours" are actually used as "type (4)" "touch time."  But past isn't necessarily prologue.  The accountants also ask if there's anything different about the coming year than past years.  Things like more or less training.  We'll assume not and just use the 80% again.  So 1,960 x .8 = 1,568 hours of forecast available production or "touch" time for year 4.

So TOC could stop here knowing how many hours Freddie’s skills were going to be available.  Pre-TOC or non-TOC accountants have to keep going to calculate the direct labor rate to use to create “standard” “direct/variable direct labor” element of unit product cost.

Hourly Labor Rate for non-TOC costing – $75,000 per year for Freddie’s salary and benefits, divided by 1,568 “forecasted available production hours” = $47.83163265306122 per hour.  Let’s call it 50 bucks per hour to make the math easier.

[save for later … What does Freddie think he’s making per hour?  He doesn’t really “see” the ]

[ save for later … So now, finally, we’re in position to create examples that …]

So for Product G:  $9 materials, $3 outside electroplating, 3 hours x $50 per hour gives non-TOC non-TVC non-TVA “unit variable cost” of $162 before allocation of overhead.

For Product H:  $9 materials, $3 outside electroplating, 1 hour x $50 per hour gives non-TOC non-TVC non-TVA "unit variable cost" of $62 before allocation of overhead.

Oops

Oh, hold the phone a moment … I think I made a mistake here … Not on the rationale (logic) or calculation of getting to the 1,568 hours for Freddie, but on that step that calculated the hourly rate for variable costs.  That's only the right rate when the full 1,568 hours will be used in the year.  To use that $75,000 total annual labor cost for Freddie in year 4 to calculate a standard variable direct labor cost, we have to divide it by the number of touch time hours we expect Freddie to actually work on products, not by all 1,568 available hours.  That's 200 units of G x 3 hours plus 200 units of H x 1 hour, which gives a total of 800 hours.  That's a little over half of Freddie's forecasted available touch time …
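
Here's that correction in sketch form: the "standard" direct labor rate depends entirely on which hours we divide Freddie's annual cost by:

FREDDIE_ANNUAL_COST = 75_000                         # $60,000 salary + $15,000 benefits

on_the_job_hours = 245 * 8                           # 1,960 potentially available hours
touch_hours_available = on_the_job_hours * 0.80      # 1,568 available "touch time" hours
touch_hours_planned = 200 * 3 + 200 * 1              #   800 hours in the year 4 plan

print(FREDDIE_ANNUAL_COST / touch_hours_available)   # ~47.83 per hour ("call it 50 bucks")
print(FREDDIE_ANNUAL_COST / touch_hours_planned)     #  93.75 per hour, the rate the plan needs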

Are you getting the idea of how much unnecessary extra trouble is added by using old-style variable costing, using forecasted-volume-dependent and forecasted-mix-dependent "standard" cost elements, instead of TOC's aggregation levels of TVC, OE, and TVA?  The assumptions make the number change.  And there's no benefit.  And mistakes can be made.  And only the experts really understand it.  And some of those don't understand it well enough to "see" the real TVC/OE/TVA phenomena working under the arbitrary numbers.  So the accounting experts can make mistakes that the common-sense other people have a hard time challenging because the process makes everybody's eyes go cross-eyed.  Meanwhile, the TVC/OE aggregation decision and TVA-I-I-OE-Cash framework appeal to intuition and are clear from shop floor to boardroom, to expert and non-expert alike.

Nevertheless, let’s keep going and get the old style numbers calculated for Freddie, Nuggets Incorporated, and year 4.

Planning for Year 4 – So now we're in position to show how Nuggets Incorporated would use non-TOC accounting for Year 4.  (We'll show how TOC does the same thing after we've done it the old way and shown how arbitrary simplifying assumptions play a role in creating non-real shifts in the fictions called "standard" "variable costs" and "standard" "full costs" at various "standard volumes" and various "standard mixes" (i.e., there isn't just one possible "standard" combination of mix and overall volume).  Since we developed the financial information about Freddie, we can now improve over the previous examples to create an "apples to apples" comparison of non-TOC and TOC vs. the "apples and oranges" problem (of the implicit $40k double-counting of direct labor) cases we had earlier on the page.  This way, the net profit (or loss) will come out the same in both TOC and non-TOC calculations, but the relative ease and clarity of the two planning, decision, and measurement methods will become apparent.)  I've changed a few things, so I'll review all the current assumptions going into Year 4:

In Year 4, Nuggets Incorporated has total company overhead costs of $1,000,000. That's the owner and 9 of his close family and friends doing all the jobs in the company except providing the direct production labor to build the products (that's Freddie's job).  Those 10 company "overhead" jobs — president, company secretary, accountant/controller, product and process engineering, marketing and sales, personnel, building maintenance, shipping and receiving, public relations, and chef — pay $60,000 in salary and $15,000 for benefits each for a total of $750,000.  The remaining $250,000 of overhead is light, heat, rent, supplies, insurance, and other continuing expenses not directly related to units of production and sales.  Said another way, that's all the company's continuing expenses in the period (the year) including divisional overhead and factory overhead which, in turn, includes salaries and benefits of everybody except Freddie.  Freddie is the sole production worker in the company.  Freddie's cost to the company, total direct labor cost for the year, is $75,000 per year, $60,000 in salary and $15,000 in benefits.

TOC would immediately combine both the company overhead of $1,000,000 and Freddie's total direct production labor cost of $75,000 into a money item TOC calls Operating Expense (OE), for a total of $1,075,000.  No allocations to products.  TOC folks get this number and — instead of fooling around with parts of it in ways that take time, don't help, and can hurt — just go to work getting the sales and making the products that generate the cash to pay for all of it and also create a surplus for buying replacement and new equipment, profits, etc.

Nuggets Incorporated is forecasting production and sales in year 4 of 200 units each of products G and H.

The only thing Nuggets Incorporated needs to do to be ready for year 4 is to set prices for products G and H.  We now have everything we need to help Nuggets Incorporated calculate full costs in order to set prices.

So what do we need to set prices?  Full costs, right?  Isn’t that how this whole cool exercise got started?  Ok, “full costs,” here we come.

Non-TOC Full Cost

First, let's review what a "full cost" is.  It's a "full" "product cost," the "full cost" of a unit of a product.  It's also called, "fully-allocated product cost."  Same thing.  Its components are described in a few different ways that are all the same thing.  One way is in three pieces:  materials cost, allocated labor cost, and allocated overhead.  Another is in five pieces:  materials costs, allocated direct labor costs, allocated factory overhead, allocated divisional overhead, and allocated corporate overheads.  Sometimes, but not frequently, they say, more correctly, "materials and other units-sensitive costs."

TOC TVC and OE

Ok, by contrast, TOC has TVC for "totally variable costs" (which include materials and units-sensitive costs such as outside electroplating services) and OE (operating expense, for all the other stuff, including unallocated direct labor and unallocated overhead).

Year 4 “Standard” “Full Cost” for product G:  …

Year 4 “Standard” “Full Cost” for product H:  …

Notice, before I go further, that I've added "Year 4" and "Standard" to the labels of "Full Cost" … that's because I'm realizing more and more clearly that the so-called "full cost" — that sounds so correct and complete and authoritative — can and almost always does differ from year to year depending on the assumed/forecasted so-called (it's a misleading term) "standard" volume and mix.

We need an hourly rate for allocating "total annual direct labor cost" on the basis of "production" "touch time" hours … that's $75,000 … to be divided by 200 units of G times 3 touch time hours, for 600 hours, plus 200 units of H times 1 touch time hour for 200 hours, a total of 800 hours … i think we did this math before … 75k/800 … yeah, we just did that for the variable cost discussion for Freddie … $93.75 per hour of production touch time direct labor … that doesn't look familiar … did i do it wrong before or this time? … no it was right both times … last time (i just looked back), it came out to about $50, but i was using all of Freddie's available production touch time hours, not the year 4 forecasted use … all of the expense for Freddie has to be covered, so the "standard" (they need a new term … standards are good things, but they need a better term for this use) or forecasted hours have to carry all of Freddie's annual cost, or else our cost-based prices, in total, won't be high enough to cover all the labor and overhead, right?

We need an hourly rate for allocating "overhead" on a "direct labor basis" (i.e., per hour of production touch time) too – same drill … we use the forecasted "standard mix and volume" 800 Freddie production touch time direct labor hours, not Freddie's total possible 1,568 production hours (remember from above: 1,960 x .8 = 1,568 hours of forecast available production or "touch" time for year 4) …

does it bother you as much as it’s bothered me since 1989 — actually, since my welded tubing experience in summer 81 — that these allocated direct labor and allocated overhead numbers, that seem so solid and authoritative when they show up as final figures on some fact sheet, have so much wiggling going on in the assumptions made during the process of creating them?  jeez …

So, to get that overhead allocation rate, we divide our 1,000,000 clams by 800 of Freddie’s forecasted production type (4) touch time hours to get  $1,250 per hour.  Now we can calculate the “full costs” for the “standard volume” and “standard mix” value of 200 G and 200 H in Year 4.

Product G – Year 4 "Standard" "Full Cost" for one unit of Product G:  $0 (zero) cost for the gold nugget from the hole in the factory's basement floor (the factory was built years ago over an old forgotten and abandoned gold mine) … + $9 per unit materials cost for the metal mounting hardware for the gold nugget … + $3 per unit outside electroplating service for the metal mounting hardware … + 3 hours of Freddie's production "touch time" per unit at $93.75 per hour giving variable direct labor cost of $281.25 … + overhead allocated per unit on the basis of direct labor hours at the rate of $1,250 per hour x 3 hours for $3,750 … for a grand total of $9 + $3 + $281.25 DL + $3,750 OH = $4,043.25 FC.

It’s very possible I could have messed up the math along the way with my eyes being what and how they are.  But the thought process is there.  And, so far, I think the numbers might be right.  We’ll just keep playing them out until we see a problem.

Product H – Year 4 "Standard" "Full Cost" for one unit of Product H:  $0 (zero) cost for the gold nugget from the hole in the factory's basement floor (the factory was built years ago over an old forgotten and abandoned gold mine) … + $9 per unit materials cost for the metal mounting hardware for the gold nugget … + $3 per unit outside electroplating service for the metal mounting hardware … + 1 hour of Freddie's production "touch time" per unit at $93.75 per hour giving variable direct labor cost of $93.75 … + overhead allocated per unit on the basis of direct labor hours at the rate of $1,250 per hour x 1 hour for $1,250 … for a grand total of $9 + $3 + $93.75 DL + $1,250 OH = $1,355.75 FC.
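
For the record, here's a little sketch that reproduces those two year 4 "standard" "full costs" from the rates derived above:

TVC = 12                           # $9 mounting + $3 outside electroplating
DL_RATE = 75_000 / 800             # 93.75 per planned touch time hour
OH_RATE = 1_000_000 / 800          # 1,250.00 per planned touch time hour

def year4_full_cost(touch_hours_per_unit):
    return TVC + touch_hours_per_unit * (DL_RATE + OH_RATE)

print(year4_full_cost(3))          # 4043.25  -> product G
print(year4_full_cost(1))          # 1355.75  -> product H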

Setting Cost-Based Prices for Products G and H

Year 4 Full Cost for Product G:  $4,043.25. I wish we had my pal, John Caspari, here to help us.  His office said he's on vacation this week.  Guess we can't blame any of this foolishness or any mistakes we make on him now.  Darn it all … : ) … What say you, sports fans?  Price at $5,000 maybe?  $6,000?  Remember, when price was $2,000, we sold 100 in year 1.  When price was $8,000 in year 3, we sold none. So let's use $5,000.  We're using cost-based procedures for making price decisions (vs. all the other ways to set prices that TOC, among other things, suggests).  Year 4 price for G:  $5,000 it is.

Year 4 Full Cost for Product H:  $1,355.75. Oh, man.  I really wish we had my pal, John, here to help us with this cost-based pricing.  : )  What say you, sports fans?  Price at $2,000 maybe?  Remember, when price was $1,000, we sold 100 in year 1.  When price was $3,000 in year 3, we sold none. So let's use $2,000.  We're still using cost-based procedures for making price decisions (vs. all the other ways to set prices that TOC, among other things, suggests).  Year 4 price for H:  $2,000 it is.

if my pal, johnny p, were here, he'd say he used full cost-based pricing all the time and it worked well for him … and i'd say, i'm not surprised … it's not that it never works, it's that it sometimes doesn't … and sometimes for different reasons … it's always more work and more opportunity to make mistakes and miss opportunities, and it requires really brilliant people to "see around the numbers" to the realities of the company's inherent range of savvy agile price, product, and mix options [ if jerry s and z were here, they'd say they spent plenty of time working around full costing's inflexibilities to get prices lower sometimes and higher sometimes to match opportunities and wish TVA I I OE Cash were around and consensus back in 80s and 90s … although, it's also true our working around full costing and intel's strapping themselves to cost-based pricing helped us] … sometimes full costing is even required and very helpful, but not really for decisions … for other things, not decisions … in defense, for instance, like a fan company i worked with, the govt client and company agreed to a procedure for allocating costs and adding some for profit because the govt is obligated a lot of times to force suppliers to price at cost-plus, the theory being the supplier is entitled to a profit but the govt and public are entitled to not be gouged by excessive price especially on sole-source biz where sometimes only one supplier in the world has the knowhow to make something or the govt paid for the r&d and whoever had the r&d gets the production biz too but at a "fair" "win win" level of profit, not a monopoly profit … but that's NOT making decisions … that's price negotiation with a customer … the full costing becomes part of the persuasion, the agreement, the … mostly, companies don't tell their customers their costs in that kind of detail … mostly, suppliers don't tell their customers their costs at all because that's usually a sure way to have your competitors know your costs too … full costing gives an arbitrary picture … we haven't done overhead allocation yet in detail … pop quiz:  how do you allocate the company president's salary to a particular product line or product?  and if you allocate overheads that can be allocated anywhere to somewhere, you make that product look less profitable … so you say, then allocate unallocable amounts evenly across products … ok … evenly by units? … by sales price? … by sales price in which market? … by direct labor hours?  … by activity?  what activity are you going to use to allocate president, CFO, corporate attorney, the IT department, general company brand (vs. product) advertising, and lots of other things to specific products?  or allocate evenly by materials cost?  … see the problem?  … and, IF you allocate these things by any basis, you're going to have arguments, internal politicking, people wasting time making presentations to get the overhead out of their product to somebody else's, non-intuitive reports that have "cost elements" and "total costs" wiggling around year to year and "standard mix and volume" to "standard mix and volume" … and FOR WHAT?  FOR WHAT?  FOR WHAT? … if you're going to force your people to do this, you better have a VERY GOOD answer to the question, FOR WHAT?
… and THERE IS NO GOOD ANSWER TO FOR WHAT … there's only, that's the way most people used to do it and it seems like it makes sense … well, it seemed like it made sense that the sun revolved around the earth, but, when somebody thought it through, something else seemed right … it's like the game in the Hidden Pictures drawings in the Highlights magazines in the dentist's office where until you see the squirrel upside-down in the bush on the left, you don't see it, but when you do, you can't miss it again … or the old hag and young girl picture … until your perception shifts, you don't see it … once you see it, you see it all the time … it's not an intelligence thing … it's a thinking it through and seeing it thing … so, anyway, it's not that full costing can't work, or that it never works, it's that sometimes it doesn't work a little, sometimes it doesn't work at all and creates irreversible catastrophes, and, oh by the way, it's more work and less intuitive than a simpler way that works all the time without the potential confusion and risks … TOC's TVA-I-I-OE-Cash framework was created by applying the methods of theoretical physics (which is never "just theory") to manufacturing and seeing the most natural right way to aggregate and organize money entities, but which can be viewed as being a correction to make traditional "direct/variable margin analysis" work safely, comprehensively, and right … anything full costing can do for planning and decisions, TOC's TVA-I-I-OE-Cash framework can do better, easier, and safer …

So are we ready to go forward into Year 4 selling 200 units each of Products G and H with cost-based prices set at $5,000 and $2,000?  No?  Why not?  Because we only sold 100 of G in year 1 at a price of $2,000 and we might not sell the year 4 planned (“standard”) 200 or year 1’s 100 and maybe none at all again like year 3 at a price of $5,000?  Ok.  I see what you’re saying there.  And what else?  The same goes for product H since we sold only 100 in year 1 at a $1,000 price and we don’t have reason to be confident we’ll sell 200 or 100 or any at $2,000?  Well, you’re making a lot of sense there.

I think you’re right.  We probably need to have Nuggets Incorporated change something in this year 4 plan — in this “year 4 standard price, standard volume, standard mix, and standard full cost” sales and operations and financial (think “traditional MRP II”, by the way, vs. a new specification for “MRP II” — i  wrote a paper on this too back in the 90s — that will have Haystack internals replacing or supplementing the vintage 60s and 70s internals) planning model/simulation/projection/forecast/plan — before we start making and trying to sell stuff in year 4.

But, before we change the plan, what if we did sell 200 each of products G and H in year 4 at those prices?  What would the result be in terms of profit or loss?  Let’s see:

Product G:  ("standard" price $5,000 – $4,043.25 "standard" full cost) x product G "standard" volume of 200 units = "standard" gross margin contribution = $191,350 "standard" profit from Product G — from this set of assumptions.

Product H:  ("standard" price $2,000 – $1,355.75 "standard" full cost) x product H "standard" volume of 200 units = "standard" gross margin contribution = $128,850 "standard" profit from Product H — from this set of assumptions.

Total “standard” profit from this set of planning assumptions/”standards” is $320,200.

Hey, a profit!  Great.  But, if you think about it, it certainly should be a profit.  The calculations that set the “direct touch time labor-based allocation rates” to cover “total direct labor expense for the year” and “total overhead for the year” ensured we’d pay for Freddie and for all the overhead.  The price we set that was larger than the “full costs” created the net profit – IF, big IF, IF we sold the assumed number of units of products G and H at those “full cost”-based prices.

That’s why, for years — decades, maybe a century and a half, all of the 1900s and part of the 1800s — companies used this kind of costing and forecasting and profit planning despite the extra work, confusion factors, and potential risks of mistakes.  The reason people are passionate about “full costing” is that it “makes sure we cover all the labor and overhead costs.”  But so does using the TOC TVA-I-I-OE-Cash Framework in the way Detective Columbo discusses in chapter 6 and figure 6.1 in my book (Figure 6.1, The TOC TVA Financial Management System for Strategic, Product, and Improvements Planning and Control). ,/ ycalw,b=bpJack&tim ❤

Incidentally, the term "standard" is good and it's not good.  It's self-defining in one context, but I think it causes confusion when taken and re-used in other contexts.  You can see how the term, "standard," is used in this type of context.  It means something like, "the set of assumed rates for a single assumed planning scenario."  It means all these assumptions, at the end of the period, will be used as "standards" against which to compare "actuals" in "variance analysis."  I never got around to my "analysis of variance analysis" [note 60], but I think, when we do, we'll find part of it is natural, useful, and good, and a LOT of it is stupid because it's dealing with comparing actuals against "arbitrary/irrelevant/unreal forecasted things" instead of comparing actuals against "unavoidable/relevant/real forecasted things."

Nuggets Incorporated needs to change the Year 4 "plan" (aka the "scenario," the "assumptions", the "planning scenario", the "assumed combination of price, volume, mix, and costs", the "richard ling sales and operations s&op plan")

But, like you said, there's no reason to be confident we'll sell 200 units each of products G and H in year 4.  That's twice the units of sales in year 1 and with higher prices.  So we need to make a different plan.

thinking of my apics s&op brother, richard, on this pass through this section is making me wonder if i'm making too much of a fuss later in the page about the amount of extra work needed to recalculate things for allocations every time mix or volume changes … with computers around since MRP II days to do the recalculating, maybe … might come back to this … even if the recalc … let's see … before i get back to the main thread … if we change overall volume, keeping mix percentages the same, unit direct labor changes for every part and product … even if it's done fast, that's not good … but what would it take? … the new volume, at same mix, would change the number of units for all products … total runtime could be run once for all routings and keep the total there with one setup (another assumption that's less good than TOC sbds with dbr or dynamic buffering for capacity-checking) … new units times total touch time for part/product gives needed labor hours for new volume … divide that into the labor and overhead pools, then apply back to all the parts/products … that's for a simple direct labor allocation basis … not sure how burdensome that is … but it is unnecessary … and it does create wiggly full costs and a lot of potential for confusion with no benefit compared to just doing things directly with a period or multi-period TVA I I OE Cash Framework.  Degree of burdensomeness would vary depending on whether the data set was in main memory or on hard disk.  Part of haystack is getting the data set into main memory for speed.  also burdensomeness of recalc would depend on whether the full period's being done as a point in time, or spread out over time intervals within the period.  but, in any event, it's going to be either a little or a lot of extra work to get worse numbers and more risk of people not understanding what's really being decided and making mistakes.
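
To make the recalculation point concrete, here's a sketch of what has to be recomputed every time the assumed volume or mix moves (the extra scenarios are made up purely for illustration), while the TOC-side numbers don't move at all:

ANNUAL_OVERHEAD = 1_000_000
ANNUAL_DIRECT_LABOR = 75_000
TVC = 12
HOURS = {"G": 3, "H": 1}           # touch time hours per unit

def standard_full_costs(units_by_product):
    """Re-derive the allocation rate and 'full costs' for one volume/mix scenario."""
    standard_hours = sum(units_by_product[p] * HOURS[p] for p in units_by_product)
    rate = (ANNUAL_DIRECT_LABOR + ANNUAL_OVERHEAD) / standard_hours
    return {p: TVC + HOURS[p] * rate for p in units_by_product}

for scenario in ({"G": 200, "H": 200}, {"G": 100, "H": 400}, {"G": 300, "H": 100}):
    print(scenario, standard_full_costs(scenario))

# Meanwhile, for every one of those scenarios, the TOC side is just:
# TVC = $12 per unit for both products and OE = $1,075,000 for the year.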

A quick look at the TOC way of running the same planning numbers Let’s see what TOC-style financial math would tell us about year 4.  In TOC terms, OE is $1,075,000.

TVA forecast for G is (Sales price $5,000 – TVC $12) x 200 units = TVA for G $ 997,600.

TVA forecast for H is (Sales price $2,000 – TVC $12) x 200 units = TVA for H $ 397,600.

Total forecast TVA $ 1,395,200

Total forecast Net Profit = TVA $ 1,395,200 – OE $1,075,000 = $ 320,200

Notice it’s the same result for forecast/planned/”what-if” net profit.  TOC folks would still need to do the part of the work to know Freddie would probably only be available for 1,568 hours in the year (not 1,960, and not more), i.e., capacity, but we didn’t need all the further processing on the financial figures that, for each set of assumptions in a scenario, goes around in a big circle taking “total direct labor expense” and “total overhead expense” apart into pieces, sized by the hours Freddie was forecasted to actually work on products, only to put all the allocation pieces back together again, in several different places, when calculating the forecasted net profit for the year.  And that’s just the first of what will likely be several, or ideally many, price and mix scenarios the management team will look at (you want the company to not be discouraged from looking at many sets of assumptions).
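For anyone who likes seeing it as a few lines of code instead of arithmetic, here is a minimal sketch (Python, hypothetical names) of the same TOC-style forecast: TVA by product, minus OE.  One small function, no allocation rates, and nothing to re-derive when the volume, mix, or price assumptions change.

# a minimal sketch of the TOC-style forecast math: TVA by product, minus OE
def forecast_net_profit(units, prices, tvc, oe):
    """Return (total TVA, forecast net profit) for one planning scenario."""
    tva = sum(units[p] * (prices[p] - tvc[p]) for p in units)
    return tva, tva - oe

tva, profit = forecast_net_profit(
    units={"G": 200, "H": 200},
    prices={"G": 5_000, "H": 2_000},   # the "full cost"-based prices
    tvc={"G": 12, "H": 12},
    oe=1_075_000)
print(tva, profit)                     # 1395200 320200 -- same answer, far less work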

Need to Change the Plan What to do now?  We did all that work with that set of volume, mix, and sales price assumptions …

— those are the only real variables, by the way.  The non-TOC way of allocation for direct labor and overhead makes it seem like there are many more variables involved.

… and found out that, if we live in La La Land, and believe we’ll sell 200 each of G and H at those prices, that we’ll get over $300k of net profit.  But we don’t believe it.  What to do?

Back to Year 1 Prices It turns out that, in year 1, we sold every unit of G and H that Freddie made.  Sold them all.  So we don’t really know how many we could have sold at year 1 prices, $2,000 and $1,000.  Let’s have Freddie use the full 1,568 available touch time production hours to build more G and H in year 4.  Why not?  Freddie doesn’t mind.  He likes building products.  He’s there anyway.  Let’s have him build more instead of playing cards during the rest of his available production touch time hours.

This isn’t “Full Cost-based Pricing”.  It’s “what works in the market” pricing.  Clear, right?  The full costs haven’t helped us with price.  We just know $2,000 and $1,000 worked in year 1.  There are other things we can do sometime, like paint a pretty sunrise on the side of one of the mountings and then try charging more than $2,000 for a new product J, but we’ll think about ways to raise the value perceived by the customer, raise price, and raise profits later.

How many more units can Freddie make? (Capacity) Freddie’s available production hours are 1,568.  The current plan uses 800 of them, leaving 768 hours for additional production.  Each product G takes 3 hours of Freddie’s time.  Each H takes only 1 hour.  Big difference.  We could make 768 / 3 = 256 more G or 768 more of H.  Big difference.

Different Unit Contribution to Profit We could make a LOT more product H, so maybe we should … What?  We make different profit per unit on the two products?  Right.  Good point.  But how much more?  Which gives us more profit per extra unit made and sold?  Maybe our “full costs” can help us figure this out, but … uh oh …

Ouch.  Our “standard full costs” are based on volume and mix assumptions we’re about to change! Is there a way around this problem?  To calculate the profit we’ll make by using all of Freddie’s production time without re-calculating those allocation rates and full costs for each possible change in volume or mix we want to try?  oh, man … [the TOC folk are trying not to say, i told you so] …. well, let’s just use the costs we have … ouch … we can’t do that either … we have full costs at the moment that were over $4k and over $1k that drove us to the cost-based price ideas of $5k and $2k in the first place that put us into this loop/iteration … ok, let’s just do it the TOC way …

New TVA for G:  $2,000 – $12 = $1,988 TVA per unit (at any volume and mix).

New TVA for H:  $1,000 – $12 = $988 TVA per unit (at any volume and mix).

This is becoming TOC’s famous “P and Q” exercise that deals with “TVA per unit of constraint resource.”  Why?  Because it looks like we make more per unit on G, right?  Over twice as much.  But … what?  twig says what?  that G takes three times as many Freddie hours (capacity constraint resource hours) as H.

TVA/throughput per constraint unit for G:  $1,988 / 3 Freddie hours = $662/Freddie hour

TVA/throughput per constraint unit for H:  $988 / 1 Freddie hour = $988/Freddie hour
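(Same thing in sketch form, for anyone following along in code; the names are just illustrative, and the numbers are the year-1 prices and $12 TVC from above.)

# TVA per unit of constraint resource (Freddie's hours), at year-1 prices
unit_tva = {"G": 2_000 - 12, "H": 1_000 - 12}     # $1,988 and $988
freddie_hours_per_unit = {"G": 3, "H": 1}
tva_per_freddie_hour = {p: unit_tva[p] / freddie_hours_per_unit[p] for p in unit_tva}
print(tva_per_freddie_hour)   # G is about $662.67/hour, H is $988/hour -> prefer extra H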

One classic mistake.   What would the prior full costs have told us (though we’d only have used them if we forgot to change them for new volume and mix  … which is one of the mistakes companies make when using full costs if somebody doesn’t catch the error)?

Product G:  Sales price $2,000 – Full Cost (on out-of-date 200/200 assumptions) $4,043.25 = ($2,043.25) per unit loss

Product H:  Sales price $1,000 – Full Cost (on out-of-date 200/200 assumptions) $1,355.75 = ($355.75) per unit loss

TOC analysis of unit TVA showed about a $1,000 difference in TVA per unit before considering relative rates of consuming scarce capacity (Freddie), favoring making more G, but about a $320 difference favoring more H when considering profit per hour of Freddie’s soon-to-be-scarce time.  Those are the real numbers reflecting the reality of the money effects on the business.  The “full cost” figures (admittedly the wrong ones for the new volume and mix, but that also makes a point) showed about a ($1,700) difference in per-unit “full cost contribution margin” losses favoring H (unlike TOC on this less important measure) when not yet considering the 3-hour vs. 1-hour uses of Freddie, and a ($700) loss vs. a ($350) loss per Freddie hour, which would, like TOC with its roughly $320 difference in TVA per constraint unit, favor H on the more important measure, throughput/profit per constraint unit.

So it turns out, if we accidentally or incorrectly used those non-TOC full costs created for one set of volume and mix assumptions, for a planning scenario that had other volume and mix assumptions, we would, by luck, by coincidence, select the correct product to emphasize, product H.  But, friends, do we want to increase the likelihood of mistakes and only be bailed out by luck when it comes to the serious business of planning a business? Do yourself a favor, and do your business planning with the TOC TVA-I-I-OE-Cash Framework, not with “full costs” and all their arbitrary and volume-dependent ingredients. 1/23, 9:53pm 1/24 1:03 aldoon cGus ,/

What happens if we make lots more Product H in Year 4? Let’s review the situation.

In year 1, we sold 100 each of products G and H at prices $2,000 and $1,000, used only 400 of Freddie’s 1,568 production hours, and had a loss of ($702,400) for the year.

In year 2, in order to lose less money, we decided to build and sell nothing, and we lost more: ($1,000,000) for the year.  The loss was higher because Products G and H, though they showed as “unprofitable” using “full product costs,” were actually generating quite a bit of cash for the company.

In year 3, we used cost-based pricing to set prices of $8,000 and $3,000 that were higher than the “allocation-based full product costs,” used only 400 of Freddie’s 1,568 production hours to build 100 units each of G and H, and sold nothing because customers felt the prices were too high.  But we still had the $12-per-unit costs of the mountings and the outside electro-plating for the 200 units Freddie built, so we lost ($1,002,400) for year 3.  As to the products Freddie built that didn’t sell, somebody left the factory back door unlocked in late December of year 3, and somebody stole all 200 of those unsold products.  There was an unconfirmed rumor that the owner left the door unlocked on purpose and the “stolen” products were sold to raise money for various charities in the town.  But this rumor was never proven.  What it means, of course, is that the company has no inventory of either finished products G and H or electro-plated mountings to start selling in year 4.  Whatever Nuggets Incorporated will sell in year 4, Freddie will have to build in year 4.  What?  You’re saying I just made all of that up to keep the math easier in year 4?  Not so.  That really happened.  Fortunately, the burglars didn’t find the hole in the factory’s basement floor.  What?  Back to planning year 4?  Right.

[An optional note.  Not mandatory.  Or even recommended. … Noticing another little glitch in the Nuggets Incorporated story line.  I’m realizing and remembering that our years 1-3 results aren’t going to be directly comparable to year 4 and later because we didn’t have a separate $75,000 for Freddie in the first three years.  I’m trying to think of a silly, but plausible — or at least possible — addition to the story line that makes that work somehow.  How about this?  In years 1-3, Freddie was both the company chef and its only production worker, which is why he wasn’t asked to use his full possible 1,568 hours for making Products G and H, only making 100 each in years 1 and 3 and none in year 2.  It also explains how the overhead for years 1-4 is steady at $1,000,000 while OE for year 4 is overhead of $1,000,000 plus direct labor expense of $75,000.  I’m pretty sure that makes it easy to make our upcoming year 4 be comparable with years 1-3, by using a $75k adjustment.  Glad we got that settled.  On with the story.]

The first scenario we looked at for year 4 was the plan to make double year 1 volumes of G and H, to make 200 of each, and sell them at “full cost”-based prices of $5,000 and $2,000.  That gave us a forecasted profit of $320,200, but we decided that it would probably actually turn out like a repeat of year 3 (no sales), or a little worse, because the “full cost”-based prices were probably too high to get any sales.

The second scenario we started to look at for year 4 was 200 each again (double year 1’s volume) and at year 1 prices of $2,000 and $1,000.  That would use 800 of Freddie’s 1,568 hours and give a forecasted year 4 loss of … we forgot to calculate it … we started TVA/constraintUnit math and hadn’t yet checked out double year 1 volumes and year 1 prices using 800 Freddie hours.  So we need a profit/loss forecast for double year 1 volumes at year 1 prices.  TVA for G will be ($2,000 – $9 – $3, or $1,988) x 200 = $397,600 … TVA for H will be ($1,000 – $9 – $3, or $988) x 200 = $197,600 … Total TVA = $595,200.  That’s double year 1’s TVA of $297,600, since it’s double year 1’s volumes at year 1’s prices.  Year 4’s OE is a little higher though, at $1,075,000.  So forecast profit or (loss) will be:  a ($479,800) loss.  Ok, so that gets us to the point we thought we were at before.  We’ve tested doubling year 1’s volumes to 200 each and using year 1’s prices and found it still doesn’t let us cover the cost of Freddie, the 10 overhead jobs, and the other overhead expense.  So now we use our TVA/throughput per constraint unit advice to have Freddie build more Product H.

Now, the third scenario we’ll look at for Year 4 is adding as many H as Freddie can make after building 200 of each of Product G and H.  The TVA per constraint unit for Products G & H were $662 and $988, which indicated we should make as many more H as Freddie could make.  Let’s see what that does for Nuggets Incorporated in year 4 … well, without making it harder than it is … product H takes an hour of Freddie production time … he has 1,568 - 800 available in year 4 … that’s 768 … so he can make another 768 units of product H.  If they’ll all sell at $1,000, that’s 768 x ($1,000 - $12, or $988) for an additional TVA of $758,784.  Whoa!  That more than covers the ($479,800) loss in the previous mix we looked at.  Forecast net profit now $278,984.  That’s for a year 4 planning scenario that assumes production and sales of 200 Product G and 968 Product H at prices of $2,000 and $1,000, and using all 1,568 of Freddie’s production hours in the year.  Is this “scenario,” this “plan,” this “forecast,” this “standard price and volume and mix and cost roll-up,” realistic?  Depends on whether it’s realistic to think the company can sell 968 units of Product H in the year.  And that’s yet another thought process.
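Pulling the three year-4 checks together in one small sketch (same assumptions and rounding as above; Python, just for illustration):

# the three year-4 scenarios above, side by side
OE = 1_075_000
tva_g, tva_h = 2_000 - 12, 1_000 - 12                       # unit TVA at year-1 prices

scenario_1 = 200 * (5_000 - 12) + 200 * (2_000 - 12) - OE   # "full cost" prices: +320,200 (if it sold)
scenario_2 = 200 * tva_g + 200 * tva_h - OE                 # year-1 prices, 200/200: a (479,800) loss
scenario_3 = scenario_2 + 768 * tva_h                       # plus 768 more H: +278,984
print(scenario_1, scenario_2, scenario_3)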

[oh, and a little calculator humor … glad2c.u.runingthenumbers.again ,/alicehbs ❤ ,/t  1/24 2:44 am]

Nuggets Incorporated — Much Promise as a Company, as a Case Study

my fav mba program may have been too busy to make case studies on TOC.  if so, this is becoming a really good one.  it can go on to adding markets in general, adding selected markets, and adding product features that create perceived value to support price.  we can reveal whether Freddie uses one or several machines, with more or less setup, with more or less tooling required, and maybe even add a second production employee to make scheduling even more exciting.  yes, hbs, my pals, this Nuggets Incorporated case study is destined for status as legendary, as a classic.

speaking of the man, the legend ,/jmasseyRTdept … ,/ kuriger.baker.morris.seese.crocket.phil.homesmoker .smitty.roundtree+pal+++grant.dave.bruce.ted.clint.eng.

Year 4 Operations

My 1998 book had what I thought was probably, at the time, the world’s longest footnote.  A “Guinness Book of World Records” sort of thing.  However, I think this Note 59-a, which has become the Nuggets Incorporated case study, may, by now, have taken over the lead.

Well, what about year 4?

We’ll discuss in a moment how Freddie decided which products to make, in what order, and when, but first a little perspective, a little retrospective view.

First a Retrospective View, A Recap

“Recap,” by the way, is short for, “recapitulation,” one of the primary meanings and uses of which is, “a summary of what’s happened so far.”

In year 1, the marketing and sales department person, Judy, used her understanding of the customers, the competition, and her sense for the market to set product G and H prices at $2,000 and $1,000.  She could be forgiven for using such imprecise, irresponsible, subjective, theoretical, experimental, trial-and-error, anecdotal, and common-sense pricing methods — and for not using the more accurate, precise, reliable, responsible, industry-standard, sophisticated, and scientific “allocation-based standard full costing” pricing methods — if for no other reason than she didn’t know very much at all about such costing or about using such costing to set prices.  Most marketing and sales people with a lot of more mundane experience, knowledge, and intuition about customers, competitors, industries, product life cycles, and markets know much less about the higher realms of experience and knowledge available to accountants who have studied and mastered the methods and uses of “full costing.”  Judy, the marketing and sales department person for Nuggets Incorporated, just knew the company got the gold for free, the mountings for $12, believed a lot of people would buy products like G and H for about $2k and $1k, and thought setting prices there was doing the right thing for the company.  Judy never imagined that selling at those prices could do financial harm to the company.  So the company went forward into year 1 at prices set on the principle of “what somebody with experience thinks will work,” instead of on the principle of “full costing.”  The company sold 100 each of products G and H.  Since the company had about a million dollars of labor and overhead, it lost about $700k in year 1.

When the finance and accounting department person, Harry, was asked by the president what had gone wrong with the company’s financial performance in year 1, calculations were hastily performed to determine the “full cost” of the products.  Much to the company’s shock and surprise, those “full cost” figures showed products G and H to be very “unprofitable.”  Now better-informed, the president ordered that the company simply stop selling the “unprofitable” products G and H for year 2 in order to improve the company’s overall financial performance.

But then, at the end of year 2, another shock arrived.  Even though the company had stopped selling “unprofitable” products G and H — and, in fact, built and sold no products at all in year 2 — the company’s overall financial performance got worse. It lost about a million dollars in year 2 instead of about $700k in year 1.  This persuaded the company’s executive management that selling products was probably a good thing for the company to do after all.  They immediately formed a committee of the marketing and sales department person, Judy, the finance and accounting department person, Harry, and Freddie.

Year 3 Prices Freddie, at the time, was both the company’s chef and its sole production employee.  Judy, Harry, and Freddie — acting as a sales and operations planning (s&op) committee, or, if you like, team — followed Harry’s lead to create a sales and operations plan for year 3 based on prices which were based on “allocation-based standard product full costing” procedures really only known and understood by Harry, the company’s finance and accounting department person.  Freddie agreed, in addition to doing the cooking for the company in year 3, to produce a mix of 100 each of products G and H.  Judy agreed to sell them at cost-based prices of $8,000 and $3,000.  Harry agreed to advise Nuggets Incorporated’s president, executive committee, and board of directors of these sales and operations planning (s&op) decisions for year 3.  The company was very optimistic about the coming year 3.

No sales in year 3 However, Judy, the marketing and sales department person — despite tremendous and very creative marketing and selling efforts by her marketing and sales departmental self — was unable to make a single sale of product G or H in year 3.  Because of Harry’s very detailed and authoritative explanations during sales and operations planning meetings …

[note:  quick note … i don’t want the Candide-like comedic stuff to paint sales and operations planning (s&op) as also silly … it isn’t … s&op is essential in any company, TOC or non-TOC, as my pal, apics dick ling, did me the favor of making clear in one of the annual april apics cm sig symposia in the 90s, in a paper i invited him to prepare and present in that event … s&op itself, is natural and necessary … the question is which accounting, planning, costing, pricing, capacity, volume, and mix thought processes will be used within s&op meetings … just in case the comedy pointed at “full costing” was starting to spill over onto s&op in the perceptions of folks new to all this good stuff … ,/x2.bro.rich …]

… because of those explanations, Judy was confident the “full cost”-based prices were not the problem.  Or, if Judy thought the prices might be the problem, she didn’t feel comfortable right away making a big deal about it.  Harry had been very certain and persuasive with people throughout the company about those very specific procedures for calculating allocation rates.  In fact, everyone had spent so much time learning about “full costing” and planning based on “full costing” that they didn’t really spend any time at all discussing and doing research about things like the wants and needs and behavior of customers and competitors.  But Harry assured everyone that it wasn’t really necessary to discuss those incidental side issues anyway as long as the company made sure it was using “full costs” to cover all of its annual labor and overhead expenditures.

But, even back then, Freddie wasn’t so sure.  Even back in the s&op meetings, Freddie was a little uncomfortable when Harry was telling Judy that his $8k was a far more correct price than her $2k for product G.  Freddie had never seen or even heard of anyone ever buying a product like product G for over a couple of thousand dollars.  But Harry was so sure.  He had all that accounting education and training.  He was a CPA.  And Freddie knew he was only the company chef and production worker.  Back then, Freddie figured it was his own job to build the volume and mix of products in the plan (between creating great meals for himself and the rest of the company, of course), and it was Harry’s and Judy’s job to cost and sell the products.

Setting prices for Year 4 – The versatile Freddie to the rescue!

But that was then.  Now, after seeing the products he built in year 3 go unsold, Freddie felt he should speak up a little more firmly for what his own observations, experience, and intuition in the “market” called “everyday life and everyday people” were telling him about product prices.  You see, Freddie liked his job at Nuggets Incorporated.  He liked the people, including Harry.  He liked building meals (cooking and being the chef), but what he really loved was building the company’s products for sale to customers.  He knew he didn’t know anything about accounting except what Harry had taught him about “full costing,” but he didn’t understand for the life of him why getting the gold for free, paying $12 for mounting parts and outside electroplating, paying him only for part-time production, and selling for $2,000 and $1,000 wasn’t really good for the company.  Maybe you have to be really really expert in accounting to understand things like that.  Still, Freddie believed he needed to do something about the pricing situation before year 4 got started.  And didn’t they make him an official member of the company’s sales and operations planning (s&op) team?

So, toward the very end of year 3, in-between sales and operations planning meetings for year 4 — when Harry wasn’t nearby — Freddie asked Judy what she really believed about pricing.  Finally, when Freddie told her that he thought the original year 1 prices were more correct too, Judy told him the truth.  Freddie asked Judy why she didn’t do surveys and polls and stuff like you hear about on television for elections.  Judy said she already did that, but nobody in the company could find the flaw in Harry’s way of calculating “full costs” and then setting prices based on those “full costs.”  So Freddie suggested that he and Judy meet with each member of the company individually and interview them about their experience of seeing and hearing about people buying products anything like those sold by Nuggets Incorporated as products G and H.  And have them tell everything they ever saw, heard, or thought about prices for such products.  They would leave Harry for last, the president for next to last, and the company secretary before that.  That meant they would meet with the product and process engineering department person, and then the building maintenance department person, the public relations department person, the shipping and receiving department person, and the personnel department person.  Oh, and since, at the time, toward the very end of year 3, while planning for year 4, they were interviewing for a full-time chef to let Freddie devote his full energies to production, they would also interview the person they were going to hire as the new full-time company chef.  So they would go together to meet with the first person.  Then the three of them would go to meet with the next person.  Then the four of them would go to the next person and so forth until all nine of the company members — plus the new full-time company chef they were about to hire — would meet with Harry.

So that’s exactly what Freddie and Judy did.  They held eight meetings with the eight other people in the Nuggets Incorporated manufacturing company other than Harry.  In each and every meeting, the people said, “To be honest, those prices of $8,000 and $3,000 for year 3 really never did sound right to me … heck, I never heard of anybody ever buying products anything like our G and H for that kind of money … a thousand maybe for something like H, a couple thousand maybe for a product like G, but not $3,000 and certainly not $8,000 … but … why didn’t I say something? … well, Harry was so sure, and so detailed and those numbers looked so accurate … and, hey, Harry’s a CPA, a certified public accountant … and …”  They all realized their intuition and experience had been telling them something all year during year 3 about year 3 prices, but they hadn’t done anything about it.

So now it was time to meet with Harry.  They all — all ten of them, including the president, the company secretary, and the new full-time company chef they were about to hire — went into Harry’s office to talk about year 4 pricing with Harry.  And what do you think Harry said?  Harry said, “I never heard of anybody paying that much for products like our G and H either.”

And that’s how Year 4 Prices Were Set (Re-set to Judy’s Year 1 levels)

And that’s how prices for products G and H were set again to Judy’s original year 1 prices of $2,000 and $1,000 for year 4.

More Thinking to Do in the S&OP Meetings

With pricing worked out, Freddie, Judy, and Harry went back to work in the sales and operations planning meetings.

Freddie opened one meeting by reminding everybody that, in years 1 and 3, he had built 100 each of products G and H before even one of either product had been sold.  That worked ok in year 1 because they sold them all that year.  However, in year 3, Freddie built them and the finished products just sat there in the factory unsold.  Freddie had found a copy of Dick Ling’s book on sales and operations planning (s&op) and found out that companies need to think about how customers think about when they want to take possession of things they buy.  Sometimes they want to see it, buy it, and take it home right away.  Other customers, or the same customers at other times, are willing to, or even prefer to, place a reservation or order at one time and pay for and pick it up later.  So Freddie proposed a plan for how many units of products G and H to build, in what sequence, and when.  We’ll come back in a moment to discuss what he proposed.

Meanwhile, after Harry joined the other 9 people in the company (and the new employee who would soon join Nuggets Incorporated as its 11th employee and full-time company chef) in supporting setting year 4 prices at year 1 levels, he worked with Freddie and Judy on the “TVA/throughput per constraint unit” calculations to determine that product H should be preferred if sales volumes go above 200 units each of G and H and if the company has a choice of which to sell.  Harry reminded the team about what that analysis had shown:

TVA/throughput per constraint unit for G priced at $2,000 with $12 totally-variable cost (TVC):  $1,988 /3 Freddie production hours = $662/Freddie hour

TVA/throughput per constraint unit for H priced at $1,000 with $12 totally-variable cost (TVC):  $988 / 1 Freddie production hour = $988/Freddie hour

TVA/throughput is money, generated from sales of products, that is available to pay for total annual labor costs, overhead costs, and other things profits can be used to pay for, such as saving up cash for new machines, taxes, and shareholder dividends.

That meant that it wouldn’t be bad to sell an additional unit of Product G, but it would be better to build and sell an additional unit of product H.  In other words, every time the company used three more Freddie hours to build and sell 3 units of H, instead of using the same three hours to build and sell a unit of G, the company got roughly an extra $1,000 of cash to pay for annual labor and overhead costs and other things incoming cash and profits can be used for.

Harry also reminded the team that, after Freddie built 200 each of G and H, using 800 of his 1,568 production hours to do it, he still had 768 production hours to make products.  Using all of those hours for Product G would bring in about 768 divided by 3 hours per unit, or 256 units, times about ($2,000 - $12), or about $500,000 of additional cash to the company.  Using all of those hours for H would bring in about $750k in additional cash.  So, if Freddie used all his 1,568 production hours to make G and H in any mix beyond the basic 200/200 plan, the result would be at least $500k better and possibly as much as $3/4 million better.

Judy said she had some ideas for how to use her monthly advertising and promotion budget to keep creating interest in the company’s full product line, but, at the same time, to increase interest in product H.  She said she would share those ideas with the team in the next s&op meeting.  She also suggested the company just go forward knowing there were four likely sales scenarios, all at price levels of $2,000 and $1,000 for products G and H:

Scenario (1), the “200/200 base case” – 200 G and 200 H, which gives a loss.  Forecast Result:  ($479,800) loss

This is not the TOC community’s idea of a good “base case.”  But it will do for now to make some points in the continuing saga of the Nuggets Incorporated manufacturing company.  It’s a good example of a “base case” in the sense that a company will almost always want to, as discussed below in the TVA/CU section, sell a range of product models.  It’s rarely, if ever, practical to sell just one thing.  But it’s not a good example of a “base case” in the usual TOC sense of not stopping the thinking and planning process (including the marketing planning, sales planning, new and existing product introduction and improvement planning, process engineering planning, and hr planning) until it all adds up to a “base case,” or a suite of likely “base cases,” that, under the range of likely external uncontrollable factors, all cover OE and other TVA/money requirements.  I discussed this in my Apics “TOC and Strategic Planning” paper at an Apics event in the 90s, in my April 98 Apics Magazine cover article on the “TOC TVA Financial Management System,” and in my book in 98.  Eli Goldratt, Bob Fox, and others in the TOC community said essentially the same things in some similar and some different ways in the late 80s and throughout the 90s too.  ,/unclerick ❤ …  ,/rwmrtm ❤ 1/25

Scenario (2), the “base case plus sell as much extra G as Freddie can make” case:  200 G and 200 H, plus up to 256 more G, which brings the company close to “breakeven” (i.e., no profit and no loss, i.e., total TVA = total OE = total direct labor and overhead expense).  Forecast Result:  Approximately breakeven

Scenario (3), the “base case plus sell as much extra H as Freddie can make” case:  200 G and 200 H, plus up to 768 more H.  Forecast Result:  about a $1/4 million profit (roughly $279k)

TOC calls this a mix that “maximizes TVA/throughput per constraint unit.”  Unlike having a mix that generates enough TVA/throughput “money” to cover OE and meets other money-generation requirements (which is always a good idea), actually maximizing TVA/CU is not always the right thing to do, but it’s often a nice thing to do, and it’s always a good thing to know how to do, and to know whether you’re doing it or not, and why.  To know it and do it in this situation, for one production employee, with assumptions of zero starting inventories, is simple.

For this and other simple situations, “testing a mix to see if the factory’s capacity will actually allow it to be built” and whether it’s “generating maximum TVA from the constraint” can be done, like this one, on the “backs of envelopes” and with handheld calculators.  Spreadsheet programs can be used for somewhat more complicated situations.  Relational database programs with nice report generators can handle the next level of complexity.  What I’m building up to — and you think I’m leading up to something about haystack systems, and you’re right — is that a haystack system allows these things to be done for all plants from the simplest to the most complex.
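For a plant as simple as Nuggets, with one production person, those two back-of-the-envelope checks, i.e., does a proposed mix fit within the constraint’s available hours, and which product earns the most TVA per constraint hour, really are only a few lines.  Here is a minimal sketch of that simple single-constraint case (it is emphatically not a haystack system, and the names are just illustrative):

# back-of-the-envelope checks for a single-constraint plant like Nuggets
def mix_fits(mix, hours_per_unit, available_hours):
    """True if the proposed unit mix fits within the constraint's available hours."""
    return sum(mix[p] * hours_per_unit[p] for p in mix) <= available_hours

def rank_by_tva_per_constraint_hour(unit_tva, hours_per_unit):
    """Products ordered best-first by TVA per hour of constraint time."""
    return sorted(unit_tva, key=lambda p: unit_tva[p] / hours_per_unit[p], reverse=True)

hours = {"G": 3, "H": 1}
print(mix_fits({"G": 200, "H": 968}, hours, available_hours=1_568))    # True (uses exactly 1,568)
print(rank_by_tva_per_constraint_hour({"G": 1_988, "H": 988}, hours))  # ['H', 'G']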

Even this case isn’t exactly “maximizing” TVA/CU overall.  It’s maximizing the TVA/CU after making 200 units of G.  That’s often the situation.  A company will normally have many reasons not to exit all their other products just to make the maximum money by focusing on just one product.  For example, what if you went to one of those little short-leadtime meal factories known as, “your local McDonald’s fast food franchise,” and they only sold one product, like only french fries?  Well, that’s a bad example.  It would be ok if they still sold those great french fries, but … what?  you get the idea?  ok, jeez …

Scenario (4), the “base case plus sell any mix of G and H that Freddie can make and people will buy” case:  any mix of 200+ of G and 200+ of H (i.e., any mix with volumes greater than 200 of G and 200 of H) that uses the rest of Freddie’s 1,568 hours, which avoids a loss.  Forecast Result:  a profit ranging from very small up to about $1/4 million for the year.

Harry and Freddie agreed with Judy’s suggestion to think about the year 4 plan that way.  Harry said he would report this plan — that consisted of four forecast scenarios of four likely possible actual ways year 4 could turn out — to the president.  Freddie said he would work on the details of his production plan and report to the group in the next meeting.

Judy Coming Up with a Plan for Assuring Increased Sales of Product H in Year 4

Since Judy was the head of the sales and marketing department at Nuggets Incorporated, she had the authority to give the order to the department to prepare a plan for selling beyond the 200 units of product H in the “base plan” up to an additional 768 units in the year.  This would make best use of Freddie’s available production hours beyond the 800 hours needed for the “base plan” of 200 G and 200 H.  Since Judy was the only person in the sales and marketing department at Nuggets Incorporated, she made the executive decision to carry out the assignment for the department herself.

A Plan Everybody Can Say “Yes” to

Judy wanted the kind of plan everybody could just say, “yes,” to.  In other words, it was very late in year 3, only a short time before the beginning of year 4, and she didn’t want a proposal people inside the firm and customers outside the firm would say, “yes, but …,” to.  Dealing with the issues behind the “but”s could take a lot of time and make things complicated.

So, as one Desirable Effect, she wanted the actual price to stay the same.  That way no one could start arguing about $1,000 working fine in year 1 and “why is Judy changing it in year 4?”  It wouldn’t be lower, so people wouldn’t complain about getting less money per unit of H.  It wouldn’t be higher, so it wouldn’t take a chance that sales would drop in case $1,000 was one of those psychological price points most people won’t pay more than, and she couldn’t be blamed if a higher price made sales drop again.  So, she decided to introduce a “new product” that was really product H with some minor changes (she had several ideas for these) that the customers would like in a major way, that wouldn’t change the TVC of $12, and wouldn’t affect Freddie’s work.  She would have the company offer and advertise it at a “list price” that was higher than H’s standard $1,000 price, and offer a “member discount” that made the price $1,000 again.  Voila!  Exciting “new product” introduced in year 4, that’s really product H, with advertising and promotional excitement as well.

At the next sales and operations planning meeting, Judy would tell Harry and Freddie about her idea.  But first she knew she needed to meet with Patrick, Nuggets Incorporated’s product and process engineering department person.  Patrick was a wizard at designing products and the processes used to manufacture them.  He would be able to understand the types of “minor changes” she had in mind and probably know how best to contract for the “outside processing services” to get them.  Next stop for Judy:  Patrick’s office.

[narrator’s aside to the reader:  i’m just wondering … is anyone else concerned about the fact that we’re now not spending all our time wondering about how to allocate labor and overhead costs for a specific volume and mix? … and not getting ready to change all the calculations again for another volume and mix? … aren’t we risking becoming considered by my TOC accounting pal, john c, and others as, “not full cost-oriented” … gasp! … in our planning and especially our pricing decisions? … no?  but aren’t you concerned we’re not obsessing over “full cost” allocations? … no? … me either … whew … finally out of the stupid make-busy numbers and into the real numbers that matter and the real issues that lead to making those real numbers happen …

and is anybody else wondering just exactly what Judy has in mind for these “minor changes” to product H, that are going to make “major changes” in the interest level customers will have, and will cause them to think the higher “list price” makes sense, and that the lower “discount price” that is the standard product H price from year 1 is a really good deal?  this should be interesting … ]

Judy consults with the Engineering Department

When Judy arrived at Patrick’s office, he was deep in thought about a new product idea he had for year 14.

[lol.  crack me up.  this story’s writing itself and turning out like the charming and quirky family in the 1938 frank capra-directed movie, you can’t take it with you, with the fabulous drew barrymore’s great-grandfather, the fabulous lionel barrymore …  ,/d.w/.kiala ❤

wow … i wasn’t always able to churn out imaginative images like this so quickly … could always verbalize, analyze, describe, organize, essay … but that’s different than creative writing or dialogue … in 97, while writing my book, the bookstore tour jumped out nearly whole-cloth from a need to make my chapter 1 a bit less hyper-analytical in tone … wanted to be analytical and clear, but drew g suggested a change of pace of some kind … and, whoosh, out it came … that same year, the detective columbo idea i’d been using in very short form over the years, just saying, “if detective columbo were here, he’d ask, what more do we need?” … anyway, that flowed out and turned into a full-blown skit that really worked … there were drafts of something called practical magic … and now, in these pages, the schedule history bit, the second detective columbo skit on depreciation, and this quirky little case study … i think i’ve developed another little skill in the area of story-telling and dialogue … wonder what changed or what i learned? … me, the guy that could make conversation and jokes ok pretty good, to be sure, but usually not the best to tell those, you know, those story jokes some people can tell and have be entertaining and funny and engaging long before they get anywhere near the punch line, often right from the start … i, a few times, thought that, in addition to conscious and/or accidental technique, there must be a place of experience from which those people tell those fun stories and jokes … like the guy on the radio show, lake woebegone guy, oh, what’s his name, can tell a story for an hour and not lose us … garrison keillor … i think i finally got there … like marty b and dutch and keillor … oh well … coulda woulda shoulda had the chance to do this stuff the right way … back to the story …]

Judy consults with the Engineering Department

When Judy arrived at Patrick’s office, he was deep in thought about a new product idea he had for year 14.

Let’s listen in …

[Judy leans into the doorway of Patrick’s office]

Hi, Patrick.  Busy?

[Patrick looks up from his deep thoughts about new products for year 14]

Oh.  Hi, Judy.  You know I’m never too busy for you.  What’s up?

Great.  Thanks.

[Judy goes in and sits in a chair in front of Patrick’s desk.]

Patrick, I have this idea for a new product for year 4 and …

Judy, that’s great!  Finally, there’s somebody on the same page with me.  For some reason, Judy, I haven’t been able to get anybody interested in thinking about or talking with me about products for year 14 and I …

Patrick.  Patrick.   That was year 4, Pat.  I have an idea about a product for year 4.

Oh.

Sorry, but 4, not 14.

Oh.  Too bad.  Well, ok.  So year 4 it is.  But I thought I heard at the management meeting the other day that the product plans for year 4 were set.  In fact, I heard there was even a specific 200/200 production and sales plan in place for year 4.  Is that not right?

Well, yes.  But really yes and no.  You see Harry, Freddie, and I have been meeting as the company’s sales and operations planning team and …

Right.  I heard about that.  I heard about your group.  That’s a great idea.  I’ve been thinking of forming a similar group for making plans about year 14.  I was going to re-read Dick Ling’s book on sales and operations planning (s&op) last week, but I think somebody swiped my copy of his book.

I think Freddie has it.  He said he found a copy, but didn’t tell us where.  Hm … Year 14. Year 14.  Well, Patrick, I’m very pleased to know our company won’t be lagging behind in the thinking for the years 14 and beyond, but here’s what I was starting to say.  Yes, we have a “base plan” that we call the “200/200 2000/1000 plan” for the units and prices it assumes, but, by Harry’s calculations, we’ll still lose about $1/2 million in the year.

Well, that’s not good.  We really should get out of the habit of running large losses each year.

I agree.  And that’s why I’m here, Patrick.  I’m here because Harry, Freddie, and I figured out that, after Freddie uses 800 hours to build the 200 G and 200 H …

… and he’ll have even more time for production now, won’t he?  I think we hired that new full-time chef this week.  I wonder if he makes Tex-Mex food like Freddie can?

I heard he accepted our offer and will start on January 1.  Plus, not to worry about the menu because Freddie interviewed him and told me the guy really knows his Tex-Mex.  We’re covered on that one.

Great.  So you have an idea for how Freddie can spend his extra time now?  He loves to cook, but he really loves building products for the company to sell.

That’s right.  This could really work out great.  Harry and Freddie figured out that Freddie will have about 1600 hours — the real forecast number is 1,568 “touch time” production hours — in year 4 for producing products.

Ok, and I set standard touch time direct labor times of 3 hours for G and 1 hour for H back when Wendell was still working for us in the factory.  That was back before Freddie joined us as part-time production worker and part-time chef.

Right.  Those are the numbers we’re using.  3 hours and 1 hour.  By the way, I heard the other day that Wendell and his family were doing just fine in California now and everybody’s happy that they’re living close to his wife’s parents and brothers and sisters out there.

Great.  Good guy, Wendell.  I miss him.  But, if he hadn’t decided to leave, we’d never have had Freddie and that great Tex-Mex.

True enough.  Anyway, so after taking the 800 touch time hours out for the 200/200, Freddie’s got 768 production hours remaining to make more product.

And I guess, to get the 1,568, you guys worked through the usual process of taking out vacation days, sick days, weekend days, lunch, and breaks?

Yes, and also about 20% for meetings and other things that come up that aren’t “touch” production time.  Harry and Freddie did that.  They just told me the 1,568 number was the one we should plan with.

Sounds about right.  I’ll talk to Harry about it later.

Speaking of talking to Harry, why don’t you join our sales and operations planning team?  I think we could use your expertise there.

Not a bad idea.  We could talk over year 4 and then spend a little time on these ideas I’ve been working on for year 14.

Sure.  Maybe some of the year 14 ideas could be helpful for years 5 or 6 or maybe even 7.

Hm … possible.   I hadn’t really thought about that.  I’ll think about it.  So what did you have in mind for Freddie’s other 768 hours in the coming year?

Glad you asked.

Judy and Patrick design a “new product”

[Judy explained her idea about making “minor changes” to Product H, “list pricing” it higher, “discount pricing” it back to the same price as basic H, keeping the TVC about the same at $12, and using some smart advertising and promotion to get the sales.]

[Patrick nodded his head several times as she spoke.  When she finished, he thought about it a little while.  And then spoke.]

That sounds like a pretty good plan, Judy.  What did you have in mind for those “minor changes?”

Ok, great.  That’s the last part we have to nail down.  I have some ideas, but I don’t know the technology and the prices for the various outside processing services.

Ok, no problem there.  That’s what they pay me to know, or to be able to figure out.

That’s what I thought.

Ok, so what are you thinking?

Well, first off.  Do we have the option of using different mounting hardware and different outside processing services?

Sure.  It all depends on what you have in mind.  Different things work with some things and not with others.  And, depending on how you put things together, the price per product can change.

Right.  That’s the kind of thing I don’t know and Harry doesn’t know.  Freddie knows a little bit of that from his experience and general curiosity, but you know these things in detail.

Yes.  That’s what we engineers do.  We either already know what will work, or we figure out what can work, and what it will cost.  The way things work best is when I work with people like Freddie, because the theory, principles, technologies, and practices I know from my education and experience get merged with Freddie’s experience of actually making a lot of things.  Working with financial people like Harry helps in understanding how much we’re willing or able to invest on a one-time basis vs. willing or able to expense on a recurring time period or per-unit basis.  And working with people like you helps get the whole package shaped into something that’s likely to be perceived as valuable by customers and purchased at some reasonable price.

Exactly.  You really need to be part of our sales and operations planning team, Pat.

Yes.  Sometimes like when you get into the thought process of investing in new products, or in making changes to existing products, or to existing manufacturing equipment or processes.  The rest of the time, when the s&op team, during the year, is making important, but routine, plans concerning established products and manufacturing processes, you don’t really need me.

Fair enough.  We actually haven’t been meeting during the year like I think Dick Ling’s book suggests.  We’ve only been meeting to create the plan for the coming year.

That will probably change eventually, but I agree that, given what you folks have been talking about over there, I should definitely be involved.

Great.  That will help.  Patrick, I’m thinking about adding some options to the basic Product H.

Such as?

Well, suppose we could add an RFID radio chip to the side of the mounting so the Product H could become a back-up in case anybody ever lost their electric garage door opener?

For $12?

Well, how much would it cost?

Radio-frequency identification chips — or, as you called them, RFID chips — have gotten pretty inexpensive, at least for the so-called “passive” devices.  They are the ones that hold a little bit of information, like the RFID chips that get put on parts in factories that hold part numbers, and get “scanned” or “queried” by the more expensive RFID “active” equipment.  Another example is the “EZ Pass” and “Mobil gasoline” RFID chips people clip to their car keychains for paying tolls and paying for gas at Exxon-Mobil gas stations.  The inexpensive part, the “passive” chip that holds the “account number” information, that part you could add somehow to the Product H inexpensively if you come up with a way to use it.  The “active” part, like the scanner at the toll booth or at the gas pump, that’s expensive and kind of big for adding to a Product H.  Oh and the “active” side needs batteries or other electrical power.

And my garage door opener would be an “active” chip?

Is your garage door opener today requiring batteries?

Yes.  I have to replace them sometimes.

There’s your answer.  It’s active.

Too bad.  It would be so cool for people to be able to buy a Product H, keep it on the dashboard of their cars right next to the GPS, so, whenever they lost their garage door opener, they could just ….. push a button … and up goes the garage door.

I see your point, Judy.  I’d use one of them myself if they were available.  And what a bargain that would be.  You get all the benefits of having a Product H in your car and also have a backup garage door opener.   [ ,/dumont.g.a.t.seth.cindyb ]

Ex-aaaact-ly, Pat.  Perfect.  So how can we do it?

Well, let’s see … I may need to get back to you on this.

We’re getting close to starting the year.

I know.  You’re right.  Tell you what.  I think I can put some of my projects for year 14 on hold for a little while.  I’ll work on this product H garage door opener idea, ok?

Great.  And you can tell us about it at the next sales and operations planning meeting.

Good plan.  See you there.

Thanks a million, Pat.  I think we can sell enough of these Product H-based garage door openers to keep Freddie busy building products all year.

We’ll definitely give it a shot, Judy.

[narrator aside to reader:  anybody think we should go back to fussing over allocation assumptions to get “full product costs” for yet another set of assumptions of volume, mix, price, etc?  Doing that instead of being entrepreneurial and clever about customers, products, features, promotion, technologies, and other things that effect the real numbers and are what lets us make the money to account for?

no?  right.  that’s what i thought.                 ….

1/25, 4:10am ]

At the Next Sales and Operations Planning Meeting

[When Harry, Freddie, and Judy showed up in the conference room, Patrick was already there seated at the table.  He was tinkering with what looked like a Product H, except somehow this one looked a little different … ]  ….  [1/25, 4:19 am]

Judy:  Patrick!  You came!

Patrick:  I sure did.  Judy, I’m really glad you came to me with your idea.

Freddie:  What do you have there?  It looks like a Product H, sort of.

Patrick:  Right.  It’s a new product that will give all the usual benefits of a Product H while also providing a backup for a lost, stolen, or misplaced electronic garage door opener in the car.

Judy:  You did it!

Patrick:  Yes, Judy, but not only that.  This new version of Product H can also remotely operate any entertainment system or convenience device in the home.

Harry:  TV, DVD, and VCR?

Patrick:  Right.

Judy:  Cable box, DVR, stereo, and AM/RM tuner?

Patrick:  AM, yes.  RM, no.

Judy:  What?  Oh, that’s a typo.  You know what I mean.  AM/FM.

Patrick:  Ok.  AM/FM, yes.

Judy:  Engineers can be so literal.

Patrick:  [wink and little smile]  Just answering the questions, ma’am.

Freddie:  Toaster oven, Patrick?

Patrick:  Absolutely.  Any remote control entertainment or convenience device in the home.

Judy:  Wow.  That’s even better than I asked for.

Patrick:  Right.  And, for Harry and Freddie’s information, this was really Judy’s idea.  This idea and new product design started with Judy coming into my office yesterday asking me to design a modified product H that could serve as a backup electronic garage door opener.

Freddie:  Brilliant.

Harry:  So that’s what you meant when you said you had some ideas for selling more Product H in year 4?

Judy:  Yes!  This is great!  Oh, Patrick, thank you!

Patrick:  You’re very welcome.  Oh, you’ll be interested in the estimated costs.

Harry:  Yes.  Very much so.

Patrick:  I did a little research on remote control devices.  I found some devices that are selling for under $10.  Here’s an example.  This product is being sold as a primary independent product of a business called, “remote controls dot com,” for $9.99.  Having a remote control as a primary independent product that needs to make a profit is different from having a remote as an included-in-the-price accessory of a larger profitable product, as in the remote control that comes with a tv or with a dvd player sold by a tv or dvd business.  In the “tv business,” no separate profit is needed from the cost and price of the remote control itself.  So that sales price of $9.99 tells us the variable material costs are quite a bit less, right?  The price has to pay for a lot of things, right?  Like some contribution to labor, overhead, and profit for the retailer, plus all the materials and some contribution to labor and overhead and profit for the manufacturer, right?  So let’s say the materials costs are about 25% or about $2.50.  Or let’s make the math easier and say about 1/3 or $3.00.  I can research this a little more with parts suppliers, but I’m thinking their materials costs are somewhere between $2-3.

Harry:   Interesting.  Notice that the price for ordering a quantity of 100 or more is $4.99.  That supports your idea that a lot of money in that $9.99 sales price is something other than materials cost.  Clearly, materials costs are less than $5.00 or the manufacturer and retailer would have no contribution to labor, overhead, and profit to share and no reason to be in business selling it.

Patrick:  I agree.  I think, when I get into my parts catalogs and talk to companies who sell parts that go into these remotes, I’ll find that our $2-3 per unit will work.

Judy:  So we could replace the electro-plating outside processing service on the mounting with adding remote control parts for the same $3 per unit.

Patrick:  That seems very likely, yes.  What do you think about the effect on actually building these, Freddie?

Freddie:  Well, it’s easier to work with the mounting if it’s not electro-plated.  Would we use the same mounting?

Patrick:  You could change from steel to aluminum.

Harry:  At the same price?

Patrick:  Maybe less.  Maybe more.  Depends on the grade of aluminum, but I’m pretty sure we could keep the combination of new mounting and remote control electronics at $12 or under.  Do we really need to maintain that $12 totally-variable cost (TVC) figure when we’re going to have a price after discounting of $1,000?   That’s $988 of new money coming into the firm to work with.

Harry:  You’re right.  We could be a little flexible on the $12 total TVC.

Judy:  If Harry doesn’t think we need to hold fast at $12, I don’t either.  Since the base plan had a $1/2 million loss, and since we knew Freddie would love to use those extra 768 hours building new products to sell, and since the start of year 4 was coming up soon, I was just trying to get something new to sell more H without changing things people might be concerned about.

Freddie:  You wanted to have a plan everybody could just quickly say, “yes,” to instead of having a lengthy period of dealing with a lot of “yes, but …” concerns.

Judy:  Exactly.

Patrick:  Well, if that’s the case, you folks have yourself a new product.  I’ll still try to bring it in for the $12.  If I need more, I’ll check it with you first so you know how much it is.

Harry:  Perfect.

Judy:  You know something?  We don’t have to use all of Patrick’s good ideas at once.

Freddie:  I was just thinking about that.  It’s really great to have a Product H that, in addition to all its usual benefits, will open a garage door.

Harry:  Or one that will just turn on a coffee pot and toaster oven in the morning.

Patrick:  Good points, guys.  Well, for instance, this modified Product H that I brought with me.  As you can see, it can change the channels on the TV.  That’s really valuable even though this particular H isn’t programmed yet for toaster ovens.

Judy:  Ok, I think it’s time for the marketing and sales department to suggest a decision here.  What do you say we focus on a version of Product H for year 4 that will open garage doors alone?

Freddie:  Then, during the year, we can talk about other models to introduce in year 5.

Patrick:  Excellent.  I’ll come to those meetings to help out and also to share some ideas I have for year 14.

Freddie and Harry:  Year 14?

Judy: [Quickly giving Harry and Freddie strong eye contact that says, “Just let me handle this”]  Absolutely right, Patrick.  We should have time in those meetings during the year to discuss lots of ideas.

Patrick:  Excellent.  That’s great.

Judy:  Hm …

Patrick:  Thought of something?

Judy:  Batteries.

Patrick:  Right.  You and I discussed that “active” units like the transmitters we’re talking about for the remote door opener need electrical power.  Since we’re talking about a car-mounted version, we can use the car’s electric cigarette lighter for power.

Judy:   Problem solved.

Freddie:  Is there enough market for car-mounted Product H with garage opener to get us 768 more sales?

Judy:  Another great question with another thought process.  I’ve been thinking about that.  The answer is, I think so.  What we can do right away is get the product designed, include it in our advertising, let the excitement about a Product H that opens garage doors create a word-of-mouth buzz that sells all three products — classic G, classic H, and the new “Open Sesame” product …

Harry:  “Open Sesame”?  Apparently, the new product has a new name?

Judy:  [smiles]  It does now.  And “Classic G and H” will just be our internal names for the original products.

Freddie:  I like all the new names.  One question:  What you’ve been saying is, by being back at year 1 prices, we should be able to assume sales of 100/100 for sure, like year 1, but maybe 200/200 since we sold out of product in year 1 and thought we could have sold more.

Harry:  If we sell 200/200, that still gets us to a $1/2 million loss.

Freddie:  Right.  But, Judy, what you’re also saying is the buzz about the new Open Sesame product will help guarantee getting to the 200/200 “base case” and then also maybe get us more classic G, more classic H, and maybe a lot of Open Sesame?

Judy:  That’s what I’m hoping.

Freddie:  It sounds better than not introducing Open Sesame, but doesn’t sound certain of avoiding a loss or of keeping me fully-utilized.

Harry:  It’s better than we had before Judy came up with her idea.  All we had before was, “let’s go back to year 1 prices, build twice as many as we did in year 1, and hope they all sell too.”

Judy:  Well, I know from studying TOC that, if we were doing the marketing the TOC way, we’d be making sure we had enough products, positioned in enough markets, at the right prices, to be certain we’d get the TVA we want, pretty much no matter what happened in the market.

Harry:  So let’s do that.

Judy:  Well, that’s what I’ve been trying to do.  That’s what this new Product H idea is.  Now that we all want to do that, we can do it during year 4, but we can’t make up overnight for not doing a lot of this all along.

Freddie:  Ok, I agree.  Open Sesame is a good idea that moves us in the direction toward eventually knowing we’ll get enough sales instead of hoping and guessing we’ll get enough sales and TVA/profit.

Judy:  TOC calls what we’re starting to do, “breaking the market constraint.”  We don’t have an “internal physical constraint.”  We know that because Freddie has 768 more hours he and we could use.  The “constraint” on our “company making money” is “not enough sales,” and it’s because of the way we’ve been “thinking” and approaching getting sales.  That’s a “policy constraint” or a “thinking constraint.”  There’s not really a physical limit to how many we can sell.  We’re creating low sales by the way we’ve been thinking about generating sales.

Freddie:  Ok.  We’ll get better at that in the coming year.  Meanwhile, we can act right away to get Open Sesame out there quickly and support it and the two classic products with ads.  We can all think about new ways to get more sales of all three products — and maybe some more new ones.

Judy:  Let’s do it.

[Judy leaves to work on the ads.  Freddie leaves to cook lunch since the new full-time chef’s not on board yet.  Harry and Patrick stay behind to play with the prototype of the new Open Sesame product.]  [1/25, 11:27 pm]

What Other Effects of Increasing Volume?

[Judy and Freddie walk together from the conference room toward Judy’s office and Freddie’s kitchen]

Freddie:  Great ideas, Judy.  Way to go!

Judy:  Thanks, Freddie.  I think we’re going in a good direction now.

Freddie:  Definitely.  You know what else I was wondering?

Judy:  How quickly you could make us all some Enchilada Supreme for lunch?

Freddie:  [laughs]  Yes.  That was the main thing.  I’m very glad you like my Tex-Mex culinary creations.  Lunch will be served presently, ma’am.  [sweeping bow]

Judy:  [curtsy and smile]

Freddie:  But I was also wondering how the increase in product volume will affect other parts of the company.

Judy:  Right.  I was thinking about that too.  Like Ted in shipping and receiving, right?

Freddie:  That’s the first one that came to my mind.  He was handling shipping and receiving in years 1 and 3 for a total of 200 units and, with the way we’re working, he may have to deal with 400 …

Judy:  … in the “base case” or as many as 1,168 units if our Open Sesame idea keeps you busy all year.

Freddie:  Exactly.  Ok.  I’ll work on that.  I’ll talk to Ted and we’ll figure out what that will mean and how to handle it.

Judy:  You’re the best, Fredrico.

Freddie:  Judy, my lovely lady, I think you might be right about that.

Judy:  [laughs]  See ya.

[For discussions (not dialogs) of Ted’s situation and related issues:  Note 64 and especially Note 65 on the next page, Inventory Systems III, get into issues of “activity analysis,” “cause-and-effect analysis on the OE line items,” and “activity-based costing (ABC)” as in “activity-based allocations for product costing.”  Ted, the Nuggets Incorporated factory’s shipping and receiving department person, is the star of the show] [1/26 2:05am]

#rightnow

Note 59-b

Ah, yes.  It had to happen eventually.  A note to a note to a note.  Note 59-b to Note 59-a to Note 59.

Digression:  Remembering/Reconstructing How Direct Labor is Handled in the pre-TOC systems

Since the TOC way of using the money measurement aggregation levels of unit and period TVA (as primary unit and period “contribution margin”) and Operating Expense (OE) — within a multi-period TVA, I (materials), I (plant & equipment), OE, Cumulative Cash framework — gives correct answers more easily, more quickly, and with less potential for confusion and for serious errors, and does that in a way that’s clear, natural, and persuasive for viewpoints ranging from the shop floor to the boardroom, I haven’t focused for a while on punishing myself with trying to work through the processes and sub-processes of doing things the old way.  However, I can’t really avoid revisiting the way that “direct labor cost per hour” is developed unless, in our little Nuggets Incorporated example, I’m going to take a step away from the way anybody does it and use a single allocation rate for allocating “overhead plus direct labor” to form unit “product full costs” on the basis of standard setup and run-time hours, without also having a similar rate to form the “direct labor” portion of traditional unit “variable (or direct) costs.”

I remember some pieces of how it’s done.  And there are some things one can assume are done in a particular way.  For one thing, there’s got to be a forecast of the total number of direct labor people at start of year, through the year, and at year end.  Then what their wages are likely to be (actuals will come from the punch clock and payroll system).  Forecast of their benefits costs.  Forecast of overtime likely to be used.  That should give a number for the year that gets all those people in the door and has them available for working on products or anything else they are asked to do or that they just do.  People can argue — and they maybe do, but we won’t here — about whether the tuition and travel expense part of worker training costs goes into “direct labor cost for the year” vs. a separate overhead account for all training, though pretty much everybody would agree the workers’ time in training during the workday, or on customer visits, etc., belongs in the total annual direct labor cost figure, but as “not production” time.  Higher and lower pay for more senior and more junior people goes into this.  It is what it is.  That’s a number we could call something like, “total annual direct labor cost.”

TOC leaves this number, “total annual (or other accounting time period) direct labor cost” — along with all the other factory, divisional, any group, and any corporate overhead expenses — in the money entity it calls, Operating Expense (OE) and never allocates any of it to products to form an “allocation-based product cost.”  Pre-TOC systems separate the “total period direct labor cost” from the other overheads, and do several processes on it, that involve things like maybe “indirect labor” and maybe “factory overhead” (unless they go into inventory, no, labor and those costs get into fictional versions of inventory values via labor, i think, not sure) and definitely “productive and unproductive time” … whew … to get to the rate that’s multiplied by the standard setup time and standard time-per-part (those are “good standards,” by the way, like the “standard purchase prices” for purchased parts and raw materials and outside services that go into TOC’s totally-variable cost (TVC) entity … you need a “standard” for each cost element because dealing with “actual” purchase price for planning isn’t relevant given FIFO and LIFO and smart-purchasing-agent issues of getting the average or “standard” price per part down though some buys cost more per item and some buys cost less per item. in planning, we want one number, though, if it’s changing permanently in year 4, we’ll show that in planning too … in other words, not all things called “standard” are “bad standards” like “allocation-based standard product costs”) to get “unit direct labor” portion of old-style “unit variable (or direct) costs,” in ways that I’m working at figuring out and/or remembering and/or re-constructing here.

Now we have a dollar figure for “total annual (or other period) production direct labor cost.”  We know how many people that is.  We mostly don’t fuss over more vs. less skilled, or higher or lower paid, at this point.  We assume they are all potentially at work 40 hours per week, 52 weeks per year.  But now we make all the simplifying assumptions and roughly-correct adjustments to account for the fact that people take vacations of various lengths, various numbers of days off, sick leave, training days, maternity, paternity, whatever, what else?, you get the idea.  So we had number of people x 40 hours x 52 weeks …

… incidentally, all the time spent on this is time not spent fussing over little incidental things like which new products we’re going to introduce, in what markets, with what features, for what customers, vs what competitors … little things like that … and we’re not even through the first draft of the “labor cost” modelling … we also get to maintain it, change it, update it, fine tune it, troubleshoot it when it gives us surprising results, and stay so busy with it we never really give proper attention to the true internal and external issues leading to effectiveness and strong competitive position … we’re so busy looking very professional and earnest and using a language only accountants can claim to understand (though they don’t always) and fussing over the cost of paper clips that we’re ignoring the rivers of money we should be positioning to bring into the company to support high-quality operations …

… that last paragraph may be a bit too strong … maybe even not literally right … we may need an overall dollars figure and an overall hours figure … if em were here … hm … well, we do need to know the hours the work centers will be staffed in order to know capacity in terms of mix of products that can be produced … how does that get done in a TOC shop?

… not sure how far or for how long i want to go down this road, but we’ll play it out for a bit more … if brother spangrud were here, he might say something like, we know we need a certain amount of pump heads to get through machining to make our numbers … that’s a number of hours of setups and runs in those work centers … that’s actually a number of hours of setups and run-times for a product …

… if steve king were here, he’d say … capacity requirements planning … in the sense of having mrp offsets and totaling up the setup and run-times in timeframes to see what labor was needed and when … it was crp that, when added to mrp, caused it to be called, closed-loop mrp …

… yep, back to the 90s … i’m remembering this now … the chicken or egg labor and machines capacity issue … and add in the cross-trained labor moving from one work center to another issue … and then cells with merged processing steps … there isn’t just one way it works … each environment has its own way … when walking into any environment, some things get accepted as givens and aren’t challenged unless some evidence suggests there’s an important opportunity there … and other things get focused on … logic trees, et. al. …

… how much different is that from the hours work i’ve been doing here so far? … not sure … i remember the last time i thought this through getting the impression that some of the old-style things, though not essential at the planning level, could be useful for one practical supporting thing or another … and a lot depended on the value of the information vs. the effort to get it … and on using the data intelligently vs stupidly … like it’s not a bad thing to know actual vs. standard setup and run-times … not a bad thing to know what the percent hours on setup and run-time vs other things or idle for people or machines/workCenters … i’ll have to look at this again once i’m through the process once of getting to the way “total annual direct labor expense” becomes a rate to multiply times setup and run-times to get “direct labor component of variable/direct cost” …

… one good idea is what paul n would suggest if he were here … start again with empty plant … 1 unit of product to make by one fully-cross-trained employee … : ) … good entry point to try and work through some time …

Standard Direct Labor.  … does direct labor portion of traditional “variable/direct cost” vary with forecasted volume too?  can that be right?  by dividing total direct labor cost by the number of forecasted “productive hours”? … it must be that way … otherwise, all the annual labor cost would not be “consumed”/accountedFor/paidFor in the “standard volume, mix, price, and cost” projection/plan roll-up … and, if so, that’s more really bad news about “contribution margins” that include more cost elements than TOC’s totally-variable costs (TVC) … that would make standard direct labor, standard factory overhead, and standard other overhead all cost elements that vary with assumed volume and maybe also mix … that’s not good for pre-toc way … so it’s probably right … it’s probably what motivated eli to invent the “throughput” idea (my TVA) in the first place …
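… to make that concrete, here’s a minimal sketch (in Python, with made-up figures that are not from the Nuggets example) of the mechanism i’m describing: divide the whole period’s direct labor pool by forecast “productive hours,” and the “direct labor” element of a unit’s “variable/direct cost” moves whenever the volume forecast moves, even though nothing physical about the product changed …

    # Hypothetical figures for illustration only (not from the Nuggets example).
    total_annual_direct_labor_cost = 2_000_000.0   # forecast wages + benefits + overtime

    def unit_direct_labor(forecast_productive_hours, std_hours_per_unit):
        # pre-TOC style: spread the whole labor pool over the forecast "productive" hours
        rate_per_hour = total_annual_direct_labor_cost / forecast_productive_hours
        return rate_per_hour * std_hours_per_unit

    # Same product, same standard hours, but a lower volume forecast raises the
    # "standard direct labor" element of its "variable/direct cost".
    print(unit_direct_labor(forecast_productive_hours=40_000, std_hours_per_unit=2.0))  # 100.0
    print(unit_direct_labor(forecast_productive_hours=25_000, std_hours_per_unit=2.0))  # 160.0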

… if steve king were here, he’d also make another good point that using “available productive direct labor hours” as the basis, in the “standard volume, mix, sales price, and cost” planning model, ensures the company at least plans to cover/”absorb”/pay “total period direct labor, factory overhead, and other overhead” costs — but also serves as rough proxy for capacity since the machines aren’t capacity without people.  that, i think, is a good verbalization of one of the reasons MRP II was a step forward for a lot of companies and worked as well as it worked …

yet another interesting discussion would be thinking through “available direct labor hours” as a proxy for capacity … show why it worked at a rough level … and show its limitations … i thought most mrp ii and jit and tqm characteristics through that way back in the 90s … as i’m writing this, i’m thinking, (1) for one thing, the “total period available direct labor hours” is a total for the entire period, not necessarily level over the period, so distribution of order due dates could create unworkable peaks of load in work centers and the “standard volume” plan would still look good … Haystack‘s dynamic buffering is specifically designed to make a level best use of the constraint(s), then check for peaks of load in other work centers, resolve them in various ways automatically or via scheduler action, and show the scheduler whether the mix she/he was testing was viable … (2) another issue i’m thinking might be the effect of mix … this becomes similar to the last issue … just using total labor hours and consuming them treats an hour on any machine as the same as an hour on any other machine … so mix and order due dates again could be over-loading one work center in some time frames and leaving another idle and it would all look just fine as long as all the direct labor hours got used up in the forecast/assumed/”standard-for-the-planning-period volume, mix, sales prices, and costs” plan … (3) next up … tva(money/cash/profit) per unit of constraint resource capacity (P&Q) … if i’m not mistaken, a very big shortcoming here … it’s a tangle of interesting inter-connections and needs its own thinking through …

… and here’s a place to do it …

ok, the question before the house is whether and to what extent the “period available direct labor hours” in a full-costed standard volume, mix, prices, and costs plan can be used for getting the right mix of products to get the most cash/throughput/TVA per constraint unit … what say you, fellow systems fans? … probably nay … but let’s see … we know we’re going to use all the available hours … bad start … has to be protective capacity in non-constraints if the schedule’s to work … nothing’s telling us what the combination of capacity and mix is causing to be the constraint … and the “profit margins” are coming from “full costs” with artificial unit labor cost, factory overhead, and other overhead cost elements …
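… a minimal sketch, with invented numbers in the spirit of the classic TOC P&Q example, of the two problems just verbalized: a plan that “fits” in total direct labor hours can still overload one work center, and the product with the better unit margin is not necessarily the one that brings in the most TVA per hour of the loaded resource …

    # Invented numbers in the spirit of the classic TOC "P & Q" example.
    P = dict(demand=100, hrs_a=0.40, hrs_b=0.25, price=90.0,  tvc=45.0)
    Q = dict(demand=50,  hrs_a=0.25, hrs_b=0.75, price=100.0, tvc=40.0)
    hours_per_wc = 60.0                      # each work center staffed 60 hours this period

    def load(key):                           # total standard hours demanded on one work center
        return P["demand"] * P[key] + Q["demand"] * Q[key]

    print("total:", load("hrs_a") + load("hrs_b"), "of", 2 * hours_per_wc)  # 115.0 of 120.0 -> looks fine
    print("work center B:", load("hrs_b"), "of", hours_per_wc)              # 62.5 of 60.0  -> overloaded

    for name, prod in (("P", P), ("Q", Q)):
        unit_tva = prod["price"] - prod["tvc"]
        print(name, "unit TVA:", unit_tva, " TVA per hour on loaded B:", unit_tva / prod["hrs_b"])
    # Q has the bigger unit margin (60 vs 45), but P brings in 180 vs 80 of TVA per hour of B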

(4) the limitations came to mind first, but let’s also see why it worked, why it was an improvement over what existed before … let’s see … without using “available direct labor hours” as a proxy for capacity … and assume also no rccp or crp from the mrp II suite … that’s rough cut capacity planning and capacity requirements planning, by the way … without any of those … probably did the same thing, but by hand … had sense for what was selling or could be sold, knew roughly from experience what mixes were built successfully or figured out roughly what might, build full costs, and totaled it up on their adding machines … remember adding machines? … so nobody in the mrp II community ever claimed using “total available direct labor hours in a period” as a proxy for capacity was good enough for dealing with all aspects of capacity … there was rccp and crp too … but all of that taken together was a solid step in the evolution of these concepts, methods, and systems …

… and if dick ling were here, he’d say that’s the heart of “sales and operations planning” … getting agreement on the volume, mix, and sales price to work toward …

so&p is a great subject and dick did a great job covering it in the context of toc in one of the apics cm sig symposia … there are so many options … capacity planning has a lot to do with how your company thinks about dealing with the state of demand for its products …

So what’s that mean for year 4 for Nuggets Incorporated?

pNeri’s plan just might be the right way to proceed.

Note 60

Analysis of Variance Analysis

[update:  note from above just put here real quick to save it someplace:

I never got around to my “analysis of variance analysis” [note 60], but I think, when we do, we’ll find part of it is natural, useful, and good, and a LOT of it is stupid because it’s dealing with comparing actuals against irrelevant/unreal forecast things instead of comparing actuals against relevant/real forecast things.]

When I was at Harvard doing the required first-year Management Control Systems course case studies on variance analysis, I didn’t like it much.  Mainly, I found it kind of tedious.  But I also vaguely remember, though only in retrospect, that my Rickover program instinct for the clear vs unclear, and for “man-made” arbitrary concepts vs man-made reality-matching concepts, was kicking in.  The sense for when concepts don’t seem to “ring true” as a match with natural levels of “granularity” and “aggregation.”  But coming out of the non-business military, having only been in technical ops in my first job out of the Navy, and never having been an executive — ie, with zero of the real combined biz marketing/sales/finance/hr/ops generalist exposure the harvard mba gives in the first year — i just thought it was me not yet understanding the accounting stuff.  as someone who thought of himself as a general manager (vs functional specialist department manager) type, I was glad there were accountants and controllers who liked getting paid to do that kind of thing.  I was flip-flopping a bit on the finance interest though because, at one point, as the mba experience was opening up huge new perspectives and options, i was thinking i needed to understand corporate external financial reporting a little better (vs the required internal accounting course in the first year), since “somehow buy a company and turn it around” was one of the thoughts at the time.  so, in my second year, i took an elective course on corporate external GAAP financial reporting and how to try to use the numbers the managers in the corporation are required to submit to shareholders to figure out what’s really going on inside.  managers can hide a lot of stuff and still be within the rules of GAAP external reporting.  like the story eli goldratt used to tell in circa 1990 1-day TOC lectures about the “hollowing out” of a major mid-western US manufacturing corporation.  I’ll reserve a spot for that later in Note 61.

it’s so different describing the “money flows” from within a company and then trying to get a sense for the same money flows from outside the company.  inside, the TVA system shows the money flows, or some say cash, flowing in and out of the business …

Words, Words, Words!


Internal Words for “Cash-Like” Money Flows

Cash Flow, Money Flow, TVA, Throughput,  Unit Contribution (Profit) Margin, Direct Margin, Variable Margin, Cash Disbursements, Cash Receipts.

See here’s another example … i just went back and changed most of my “cash flow” words into “money flows” … why? … because one could argue with me about “cash flow” and be right because of the different ways people are thinking when they say the words, “cash flow”, in different situations …

i’m starting to break the habit of speaking of “TVA” as a “cash flow” or “cash-like flow” since “money flow” works as well and doesn’t raise the argument over whether it is actually “cash flow” like other “cash flows” … hm … but “cash-like money flow” is not bad … i think i like that … “unit TVA” like “unit standard cost profit” is a time-independent figure … what you could call the actual “cash flow” goes out to suppliers for raw materials and purchased parts and labor and heat and rent and lights in the factory and shipping costs and then 30 days before the bill’s due and the cash actually flows in, unless it’s pre-paid, but mostly not … there’s another set of words accountants use … let’s see … “disbursements” for actually cutting a check, but then a check doesn’t get through the mail, deposited, and cleared for a few days, right?  the point is not to decide on particular words, but to point to the fact that, part of the reason Eli Goldratt invented some new terms is to exit that entire quagmire of people talking past each other with different meanings for the same terms … let’s see the other term is, I think, “receipts” … “cash receipts” and “cash disbursements” … anyway, so the reality pointed to even by the seemingly obvious term, “cash flow,” can be very different depending on the question you’re addressing.

for planning purposes, “unit contribution margin” — which for TOC is unit sales price minus TVC totally variable costs, and for allocation-based product costing is unit sales price minus fully-allocated “full cost” product cost — is a time-independent view that puts the sales price and totally variable costs at the same time point, ignoring the timing of disbursements for purchased parts and raw materials and the timing of receipts for product sales, in order to keep focus on the right level of aggregation/granularity/accuracy for planning and not make the planning so complicated it can’t be done at all …
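… a tiny sketch of that simplification, using the Open Sesame figures from the dialog above (the cash-timing days are made up for illustration) …

    # unit TVA as a time-independent planning figure (Open Sesame: $1,000 price, $12 TVC)
    price, tvc = 1000.0, 12.0
    unit_tva = price - tvc
    print("unit TVA for planning:", unit_tva)                 # 988.0

    # the real cash timing is messier; for the planning decision we deliberately ignore it
    pay_supplier_day, collect_from_customer_day = -60, +30    # made-up payment terms
    print("same money, spread over", collect_from_customer_day - pay_supplier_day, "days of cash timing")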


External Words for Cash-Like Money Flows.

EBITDA, Cash Flow, Sources and Uses of Cash (including ups and downs in beginning and ending inventory levels) … or is that one, Sources and Uses of “Funds”?  Funds Flow statement?  Something like that …

Anyway, from the external view, there’s an entirely different way of trying to get an approximation of the “cash flow generated from continuing operations” … that’s something like the TVA – OE = Net Profit calculated directly by internal company employees and used by TOC.  External stock analysts and bank lending officers (some of them very lovely, indeed ♥ cc) try to get at that same money flow in a very indirect way because that’s all that GAAP rules provide.  Internal CFOs and CEOs know, when they’re preparing their external reports for shareholders using GAAP rules (made by the FASB), that stock analysts and shareholders will start with the after-tax profit (“earnings”), then add back real money from the reported “taxes” line, then add back real money paid out for “interest” on debt, then add back the fictional non-cash “depreciation” and “amortization” line items, to get EBITDA (pronounced, “EEbit DAAH”), “earnings before interest, taxes, depreciation, and amortization.”  Every division of a large corporation contributes a money flow that’s pretty close to a time-independent “cash flow” … a better way to say that would be, “cash flow from operations for the period” …
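… a minimal sketch of that backward math, with made-up numbers, just to pin down the direction of the add-backs …

    # made-up figures; the analyst works back up from the bottom of the income statement
    net_income   = 100.0    # after-tax "earnings" reported to shareholders
    taxes        = 40.0     # real money paid out, added back
    interest     = 20.0     # real money paid to lenders, added back
    depreciation = 30.0     # non-cash accounting item, added back
    amortization = 10.0     # non-cash accounting item, added back

    ebitda = net_income + taxes + interest + depreciation + amortization
    print(ebitda)           # 200.0 -- the outsider's rough proxy for operating cash generation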

… i have to slow down here a bit because a few potential confusion factors are flowing into my mind …

well, before i try to deal with them … a comment …

that EBITDA money flow in a GAAP “income statement” — a basic set of GAAP external shareholder reports for a quarter or year used to be (1) an “income statement”, (2) a “balance sheet”, and (3) a funds flow statement (or was it “cash flow”, or “sources and uses of” statement, probably all of those) — is what is calculated when the income statement starts with Net Sales (“net” after, I guess, discounts of various kinds, including commissions? not sure, but Net Sales) for the period (calendar quarter, year) and subtracts a fictional allocation-influenced and inventory-absorption-influenced … i won’t even call it a money flow … i’ll call it an accounting item … i might be off-base a little here … i’ll keep roughing it out and then adjust later … anyway, called, Cost of Goods Sold (COGS) … So, Sales minus COGS equals EBITDA … hm … maybe not right …

hm … i’m getting this all from memory, so it may not be perfect … i’m wondering if that means so-called “Gross Margin” is equal to EBITDA … but, if so, why don’t analysts just use the Gross Margin line item on the Income Statement rather than doing the backwards math to get EBITDA from after-everything-but-dividends earnings? … oh, you know what it might be? … there may be depreciation, a non-cash item, and maybe amortization, another non-cash item, in COGS … but, if that’s true, why does everybody use and talk all the time about EBITDA and not Gross Margin plus something? … plus what?  … plus depreciation.  i’m remembering that part now … i think “depreciation of plant and equipment” might be a fictional part of an otherwise real “factory overhead” line item …

gonna need a NOTE 62 for the wonderful world of physical/real vs. accounting/fictional “depreciation” and how the realities under the accounting concept/fiction of “depreciation” (main realities being how fast the machine actually wears out and getting cash from ops or elsewhere to maintain and replace it) are dealt with in the TOC TVA-I-OE view, and how that differs from corporate (maybe), GAAP (definitely), and tax (definitely) accounting.

anyway, when it came to the first year management control systems course and the allocation and then variance analyses we had to work through, the wise brainy cool ops-savvy accounting professor, Chuck Christensen, emphasized that variance analysis wasn’t just about numbers, but about getting clues about what was happening or starting to happen or might happen somewhere throughout the various parts of the business.  More than just numbers.  But it seemed like just numbers and, even then, a decade before i heard about toc, something artificial about standard costs and allocation stuff bugged me.  I was glad somebody else was going to be in charge of variance analysis, but i’m glad i worked through the few examples provided in the course.

Next stop in attitude about variance analysis was in 93 and 94, with charlie and steve at bill’s rotron.  as i padded around with steve while he made his end-of-month variance visits to various parts of the factory to get answers to why the different components of variance happened, so he could take them back to charlie the controller to bring up or not in management meetings, my attitude about variances changed.  i saw, as i think i wrote in my book in 1998, that end of period variance analysis was a good place to apply the toc logic tree thinking processes to go beyond symptoms and into root causes and thereby increase the effectiveness of scarce company time for continuous improvement projects.  if larry ellison were here, i bet he’d say, hey, let’s just, in the variance screens, add mouse clicks to turn selected numbers into toc logic tree entities and give nice tools to build the trees on the spot and have them available on the network anywhere in the company.  and i’d say, wow, larry, i like it already.

So color me positive, very positive, on end of period variance analysis.


Note 61

The “Hollowing Out” of XYZ Corporation by using allocation-based product/part cost accounting for “make or buy” and product line decisions.

I heard Eli Goldratt tell this story the first time I saw and met him, when I flew to Chicago from the Boston area to attend his standard 1-day lecture on TOC in June 1989.  Coincidentally, I was at Harvard Business School when XYZ corp chairman, I’ll call him XYZ CEO — code name, CEO.  clever, eh? — came to the HBS auditorium in Gallatin Hall and was cheered by all of us for the amazing story of how he was, very recently and with a lot of media coverage, turning the company around.  I was only half-cheering because I wasn’t so sure strike-busting was so …

am i remembering this right? …

yep.  good ol’ wikipedia.  the strike was nov 79 through april 20, 1980.  that’s toward the end of my first year at hbs, and the month i took my pals, nan and jay and john’s good advice, and did werner erhard’s est training for the first time:

http://en.wikipedia.org/wiki/International_Harvester_strike_of_1979%E2%80%931980

not sure the story i’m telling here is true.  might be, but not sure.  the cause-and-effect make sense … let’s try this …

well, i don’t yet see in this wikipedia article facts that support the story i remember eli telling, the story that was going around, that other people talked about, and that one person who had been working at XYZ at the time confirmed.  the article does say CEO was fired by XYZ corp on may 3, 1982, about a year after all the cheering in Gallatin Hall.  CEO claimed he wasn’t fired, but quit.  If the “hollowing out” story i’m telling here happened, i can’t tell if it was before the 79-80 strike or after.  one thing this shows is that it’s not often easy to use externally-available financial and other data to see exactly what’s going on internally.  hm … it’s possible the hollowing out by cost accounting, if it happened, happened before the strike, and the strike was another symptom of an over-emphasis on cost cutting vs TVA growth (a TOC mantra ranks growing TVA as much more important than cutting inventory, and that as more important than cutting OE).  And that the part/product “cost cutting” and harsh labor treatment had weakened the company before the strike and made it too weak to recover from the long strike, which it never did.  Huge losses in fiscal 80, 81, and 82.  selling off a major biz in 85 and a new name in 86.  so the only way the story i’m telling here could be true is if the hollowing out, the big reported profits from building to inventory to “absorb” overhead, and the huge cash bonus to CEO all happened before the strike and before all that cheering in Gallatin Hall.

CEO took over at president and coo aug 77, ceo later, then chairman by june 79. so that means the story, if true, had to happen in 77 to 79 timeframe, pre-strike which starts nov 79.  the wiki article says he cut costs right away.  also says modernization program.  also big market share increase, so the part of my story about less sales volume isn’t right.  the “modernization” and “market share increase” could fit though, if what he sold as “modernization” included dropping cash-generating parts and products or “buy vs make”-ing a lot in order to simplify manufacturing (become more of an assembly operation, assembling purchased major assemblies, that would fit the story that was going around) … and maybe the market share came from shorter leadtimes associated with only assembling … still not clear yet … ah, he cut expenses a lot right away, “modernized”, got big market share/sales increase, record profits, but still had profit margins on sales less than half of John Deere and Caterpillar Tractor, similar companies … with both cost cutting and modernization, still having low profit margins could be a sign of the usual thing that happens when you “buy vs make” parts that were cash-generating before you stopped making them yourself … and, to get the share increase, wonder where that came from … if from lower leadtimes, he’d be paying up for short supplier leadtimes … that all fits … a piece, though, doesn’t fit yet … if sales so great, where’s the build to inventory for generating phantom profits from “absorption of overhead into inventory”? …

oh, that could work … do inventory build to “absorb” overhead into inventory (what a dumb concept and term, “absorb” overhead into inventory) to get record profits, get bonus, oh, ok, and build inventory to be able to have something to sell during strike he knew he was going to force to happen … that would make the inventory build for “absorption profits” not as selfish as it sounded … the pre-built inventory helps company sell through a strike he thinks, rightly or wrongly, he needs to get new workrules or other contract stuff for the company, and he needs the reported profits to keep the shareholders taken care of so they don’t take a bath (american slang for lose a lot of money, as in bath in red ink, red ink pen for losses) on their stock price … he also gets a nice bonus, but keeps company, customers (spare parts, by the way, not just new vehicles), shareholders somewhat taken care of as he goes into a strike he thinks he needs to have to re-arrange work and maybe pay structures …

thinking again about “expenses reduced quickly by over $600 million,” “modernization,” “big increase in market share,” yet “margins on sales still less than half of deere and cat” … that’s real strong evidence for “buying vs making” parts that were once generating cash and profits for the company.  the reason a manufacturing company manufactures its own parts, vs. just buy and assemble, is to get high profit margins (TOC folk, think, high TVA/throughput as % of sales).

ok, he fired 11,000! of the 15,000 middle and upper level managers.  talk about reducing overhead.  if they made 50-60k each in salary and benefits, that would say most, maybe all, of the immediate 660 mil expense reduction came from management and not from cutting production labor folks.  that helps because i was wondering, if the cuts had been made in production labor, how they built for record sales and profits and how they built to inventory as part of that or after that …

ah, and if he farmed out parts … right … another piece fits … if CEO farms out parts, to buy vs make, he has more labor for assembly and can build more vehicles with same labor which is how he got record sales with same labor force … at expense of profit margins due to now paying so much to buy vs make components and sub-assemblies …

results reporting data and timeline … looks like the fiscal year back then ended in October.  So the big interesting fiscal year was the year ending at the end of October 1979, the CEO bonus for which was announced in early nov and the Q results announced mid-dec.

there were 3Q results for may/jun/july announced in mid august, around the time negotiations started

apparently, the CEO bonus of $1.8 million was announced on nov 1, right before or at the start of the strike

“On November 1, 1979, 35,000 UAW workers (36 percent of International Harvester’s workforce)[25] at 21 plants[25] in eight states—Georgia, Illinois, Indiana, Kansas, Kentucky, Minnesota, Ohio, Tennessee, and Texas—struck International Harvester at noon rather than accept the new work rules and mandatory overtime provisions.[7][1][26] But McCardell was not alarmed, seeing the strike as a way to challenge the union’s power in the workplace[1][25] and as an opportunity to improve efficiency by regaining concessions the company had made in the past.[6]”

that says a little over a third of the workforce stopped working.  that explains how the company could keep making sales.  wonder what fy 80 sales and profits were?

there were fy 79/4Q results, announced in mid-dec, for aug/sep/oct.  “December 15, the same day the Caterpillar strike ended.[31] Days later, International Harvester announced a year-to-year increase of 98 percent in its profits for the entire fiscal year. Yearly earnings reached a record $369.9 million, and sales rose 25.9 percent to reach a record $8.4 billion.[32] Net income nearly doubled to $12.01 a share (up from $6.14 a share).[32]”

then in jan, gloomy projections announced for january quarter 1980. “But on January 10, 1980, the company said that losses in the first quarter of the year (November 1979 to January 1980) could be as high as $225 million (or 10 percent of the company’s shareholders’ equity)[25] if the strike continued.[31][33]

Days before the shareholder meeting, International Harvester reported a first-quarter loss of $222.2 million.[39] The company also admitted to unfilled orders of $4.2 billion, up from $2.8 billion a year earlier, and attributed the loss and backlog to the effects of the strike.[39] Despite the loss, the company still paid a dividend of 62.5 cents a share for the quarter.[40][41]

“The annual shareholders’ meeting, held at the First Chicago Center[41] in Chicago on February 21,”

“The strike severely impacted the company’s financial status. International Harvester lost $257.2 million in the second quarter, for a total of $479.4 million in the first half of the year, while sales slid 47.3 percent.[7][66]

“Nearly all independent commentators saw the agreement as a losing proposition for International Harvester. The company had incurred deep financial costs, lost market share, and achieved none of its key demands despite McCardell’s assertions that the proposals were critical to the company’s success.[6][62][63] “[T]here is no question the company is the big loser in this one,” one Wall Street analyst said.[62]

Wow.  Given all that, it’s not clear why everybody was cheering in Gallatin Hall.  Maybe the speech was before the strike was over, when CEO was still projecting optimism about getting what he wanted.  Can’t imagine him giving a big high-energy speech that got the tough guys and girls all excited if the press had already made this kind of assessment of catastrophe.

What’s the other side of the story?  I guess if Arch were here, he might say something like, “well, we needed to try to make the changes.  the japanese threat to our industrial base was already clear.  we didn’t want to lose deere, cat, and harvester to japan.  we needed changes to work rules to begin to compete.  and we were just not able to get it done in 79 and 80, unfortunately.”

And there’s the larger chess games of the company vs. domestic and japanese competition, the relationship with the union, and the union maybe seeing harvester and its jobs as expendable pawns to be sacrificed, if necessary, to make a point in its larger struggle to protect the much more numerous jobs within the auto companies and their suppliers.

So the story I heard from Eli, and heard around the industry, while apparently true, is a simplistic one focused on highlighting, teaching, and dramatizing one important aspect of what happened, the part that provides a useful lesson and caution about the internal use of allocation-based product costing rules for product line decisions and for “make or buy” decisions on manufactured parts and sub-assemblies, and also about how a one-time big increase in false profits can be created in external reports by illogically building unsold things to inventory, using, again, another accounting rule to make a mistake.  Viewed in that narrow context, hollowing out the company and building unsold inventory are clearly mistakes.

But the actual situation was a lot more complex.

“CEO” immediately cut middle and upper management from 15k to 4k, wow, pocketing a cool 660 mil for the company yearly, “farmed out” a lot of his manufactured parts, started paying high prices to suppliers for parts he used to make for cheap and that used to generate cash, maybe cut some product lines that seemed, or were, either not “profitable” on paper (possible mistake) or not generating enough cash to make them worth maintaining for the future (maybe not a mistake), had the factories build (much more assembly, and much less making manufactured parts, reducing TVA margins a lot) like crazy to support market share and sales increases (where’d that big sales increase in october-ending FY79, right before the nov79-apr80 strike, come from? customer confidence in CEO and his plan and work since he became president in aug 77 and chairman in june 79?  shorter lead times?  customers buying in a hurry knowing a strike was likely?), then, probably as sales dropped off, knowing a strike was likely, kept that still-full labor force (he didn’t cut that, maybe couldn’t, maybe didn’t want to, but he did cut middle/upper management jobs) building like crazy to inventory, not shipping to customers (could be real sales in one part of the year and then build to inventory in the second part of the same year), … enough … close enough …

now i think it all fits … on with the story …

the fact that labor union strikes were happening at all was not the best sign, and all the self-congratulation over strike-busting, and, well, some of us in the auditorium were feeling the charisma and stuff, but were looking around us wondering if this large roomful of leaders of tomorrow weren’t just a little too happy about the tough-guy we/they stuff with the union.  But it was an interesting speech and afternoon.

Apparently, before or after that speech and all that cheering, CEO got the big year-end bonus, based on the external GAAP reported financial results, and, at some point after that speech and cheering, it became clear xyz corp would never recover; it sold assets and changed its name by 1986.

Goldratt’s story in 1989 related, in straightforward cause-and-effect, how cost allocation rules allowed the illusion of increased profits to be created despite the fact external sales had plummeted and inventory levels had sky-rocketed.

Here’s how that works.  If you know the jargon, the short answer is:  As demand fell, fewer manufactured parts had to carry more overhead when calculating fully-allocated part/product costs.  So some parts that were actually generating real cash, TVA, were dropped due to being “unprofitable.”  With fewer parts/products being made, the same overhead had to be spread, with new overhead allocation rates, over fewer parts/products.  So some more parts/products that were actually producing quite a bit of cash, TVA, to cover OE including salaries and benefits and all the rest, were dropped because the accounting and finance experts said they were “unprofitable” based on “full costing.”  And the spiral continued, hollowing out the company.  Each time they dropped a part/product, they either had to stop selling it or buy the part from somebody else who was smarter about their true costs and true cash flows.
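Here is a minimal sketch, with invented numbers, of that spiral.  Every part in it actually generates cash (price above its totally-variable cost), yet re-allocating the same overhead over the survivors after each round of drops eventually makes every one of them look “unprofitable”:

    # invented numbers; every part here covers its totally-variable cost (price > TVC)
    overhead = 1000.0                                   # period overhead, unchanged by the drops
    parts = {"A": dict(price=100.0, tvc=40.0, labor_hrs=1.0),
             "B": dict(price=80.0,  tvc=45.0, labor_hrs=1.0),
             "C": dict(price=70.0,  tvc=50.0, labor_hrs=1.0),
             "D": dict(price=60.0,  tvc=52.0, labor_hrs=1.0)}
    volume = 10                                         # units of each part per period

    while parts:
        total_hours = sum(volume * p["labor_hrs"] for p in parts.values())
        rate = overhead / total_hours                   # new overhead allocation rate each round
        losers = [k for k, p in parts.items()
                  if p["tvc"] + rate * p["labor_hrs"] > p["price"]]   # "unprofitable" on paper
        print("allocation rate:", rate, " dropped:", losers or "none")
        if not losers:
            break
        for k in losers:                                # drop them; same overhead, fewer parts next round
            del parts[k]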

Eventually, faced with a sharp decline in sales and profits to report, the same internal accounting and finance experts who stupidly advised dropping all the parts/products, said, “CEO, we can all still make our bonuses if we just tell the plant to make stuff at a really high rate, even though there are no sales, and just put it all into finished goods inventory.  We won’t actually get sales, or cash from sales, in fact, we’ll use up all our cash buying raw materials and purchased parts to do this, and we’ll still be paying cash for Operating expenses, but — because the allocation rate for overhead’s already been set for the upcoming period for a higher sales rate, and since it’s based on direct labor hours, if we build and build product, the overhead rate will “absorb” all the overhead and put it into the inventory account, and the reported sales minus unabsorbed OE will give a huge profit that will meet our management contracts, get us our stock options and big cash bonuses, and we can worry about selling all that stuff we built later.”  Not sure what CEO might have said there.  Probably something like, “ouch.  That’s not where I hoped I’d be with you financial geniuses telling me to drop all those parts.”  But Eli said he said, “ok, let’s build the stuff” and i forget where … i wish i could remember … i’m not being coy or irresponsible, but I did speak to somebody or maybe more than one person who was in xyz corp at the time and confirmed the story.  and i think i saw it in print once, not a quote of eli.  since i can’t even remember the source, i’m going to take the name of the chairman out of this and the name of the company.  if you want to know, ask any toc person.  or apics person.  most of the ones who were around in the 80s and 90s have heard the story.
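And a minimal sketch, again with invented numbers, of the “absorption” move itself: sales are identical in both cases, but building far more than is sold shifts overhead out of the period’s expense and into the inventory asset, so reported profit jumps while cash drains away:

    # invented numbers; GAAP-style absorption income vs. what actually happened to cash
    price, tvc_per_unit, period_overhead = 100.0, 40.0, 50_000.0
    units_sold = 500

    def absorption_profit(units_built):
        overhead_per_unit = period_overhead / units_built          # rate driven by the build volume
        cogs = units_sold * (tvc_per_unit + overhead_per_unit)     # only sold units hit the income statement
        return units_sold * price - cogs

    def cash_generated(units_built):
        # cash view: materials for everything built and all overhead are paid this period,
        # but only the sold units bring money in
        return units_sold * price - units_built * tvc_per_unit - period_overhead

    for built in (500, 2000):
        print(built, "built -> reported profit:", absorption_profit(built),
              " cash:", cash_generated(built))
    # 500 built  -> reported profit: -20000.0   cash: -20000.0
    # 2000 built -> reported profit:  17500.0   cash: -80000.0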

so they reduced the company’s cash flow by dropping “unprofitable” (not really, they only looked that way due to allocation costing) cash-generating parts and products, then plowed a lot of the rest into inventory they hadn’t and wouldn’t soon be able to sell, paid executive performance bonuses that were at least seven figures (millions) for the ceo alone, with more for the rest, ran out of cash, filed for bankruptcy, and, as far as i know, never became the manufacturing powerhouse it had been before that allocation-based cost accounting, inventory valuation, and GAAP accounting-enabled catastrophe.

that’s the kind of thing eli goldratt and bob fox and taiichi ohno found a lot.  all of them said things about the problems cost accounting rules created in operations.  eli goldratt had, as usual, the most colorful and effective one (what i call, a “leadership statement”) in his 1983 or 1988 paper before some operations professional society, actually no, i’m confusing this paper with the green-covered one that covered the history of OPT’s versions and what innovations each version incorporated … the cost accounting paper was presented to an accounting group and was entitled, “Cost Accounting: Public Enemy Number One of Productivity.” : )  Great or what?

Another great paper title was by another of my mentors and heroes, my friend and 2nd-year ops prof at Harvard Business School, Bob Hayes.  He co-wrote with his colleague, Bill Abernathy, an article for the Harvard Business Review entitled, “Managing Our Way to Economic Decline.”  Same sort of theme with different examples, not just cost accounting, but other things too.  another “leadership statement.”  All a part of the process of improving the management of the world’s industrial enterprises.

While I’m doing great article titles:  “Accomplishing Managers:  Versatile and Inconsistent,” my first year ops prof, Wick Skinner.  “managing by walking around” by somebody, wrapp … forgot.  Both articles in Harvard Business Review magazine or professional journal, publication, you get the idea.


Note 62

Dealing with Replacing “Plant & Equipment” due to “Wear & Tear” (Physical/Real) and Cash (Financial/Real) vs. “Depreciation” (Financial/Fictional) …

… became the 2nd Detective Columbo skit.

This will get lively.

How should we handle, “depreciation?”  Let’s listen in …

“Um … hey, Detective Columbo, that chart of yours doesn’t have anything in it about “depreciation.”

Columbo:  [thwacks self on forehead]  Oh, do you know what, sir?  You’re right.  Thank you so much. I’m always forgetting to put a “depreciation” line item into my five-year TOC TVA planning and decision charts.  I’ve had a lot of accounting experts tell me that’s a mistake.  Do you think that’s a mistake, sir?

“Of course.  How are we going to pay for new machines when the old machines wear out if we don’t take depreciation into account?”

[Turns to woman next to him in the lecture and meeting room]  See what I’ve been telling you?  These TOC guys don’t know what they’re doing.  They come up with these simplistico ideas they call, “solutions,” and then repeat them over and over and over again and never get to the issues we have to deal with in the real world, like depreciation.”

[lips pursed.  looking down.  shoulders bent a little forward.  overcoat open, as usual.  shaking head.  a bit chagrined]  Yes, ladies and gentlemen, I’m sorry about that.  The problem is, the reason I keep forgetting to include it, is I’m just not sure where to put it.  Maybe the gentleman with the … what kind of T-shirt is that, sir? …

What?  This?  Oh, I got it at Ozzfest.

Oh, that’s great.  Mrs. Columbo and I love that movie.  My favorite scene is when Dorothy’s little dog, Toto, jumps out of that nasty lady’s bicycle basket and …

Excuse me.  I said, “Ozzfest,” Detective.  Not The Wizard of Oz.

You’re right again, sir.  You know something?  I believe you can help me with this problem I’ve been having with the “depreciation” concept.

I would be happy to help you, Detective.  Ok, when a company buys a piece of equipment, any kind of “depreciable asset” — like a machine tool, lathe, stamping press, fork lift, conveyer, rolling mill, heat treat oven, or even a vehicle — they record the cost of the “asset,” pick a time period called, “the useful life of the asset,” divide the cost of the asset by the number of years or other accounting periods in the useful life, and that becomes the “depreciation” for the asset in each accounting period.  The recorded value of the asset is reduced by the amount of the depreciation each year or other period and the depreciation for the period is treated as an expense for that period.
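[aside: a tiny sketch, with made-up figures, of the straight-line calculation the gentleman just described]

    # straight-line depreciation, as just described (made-up figures)
    asset_cost, useful_life_years = 500_000.0, 10
    annual_depreciation = asset_cost / useful_life_years          # 50,000 "expense" each year
    book_values = [asset_cost - annual_depreciation * year for year in range(useful_life_years + 1)]
    print(annual_depreciation, book_values[:3])                   # 50000.0 [500000.0, 450000.0, 400000.0]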

Well, that’s very interesting, sir.  Thank you for that.  That really helps me a lot to know all of that that you just said to me there.

You’re welcome, Detective.  So do you now see you should have, in your 5-year TOC planning chart, a line item for “depreciation?”

[quizzical look as he thinks it over] Well … there was one thing you said there, sir.  You said you would create an amount of “depreciation” for an asset by dividing what was paid in the past for the asset by the number of years of useful life of the asset … let’s just use an accounting period of one year instead of quarters or months … is that ok with you?

That’s fine.  And yes, the “depreciation” for the year would be shown as an “expense” in each year of your chart.  You could put it into your Operating Expense (OE) line item if you like or just put Depreciation on its own line.

I see, sir.  That’s very flexible of you.  That’s two very good places to put that Depreciation amount on my chart that shows money flows.  Shows money flows.  Right.  [suddenly slaps his knee]  Shows real money flows!  You know what, sir?  That’s my problem.  That’s exactly my problem.

[a little surprised and puzzled]  I’m sorry.  I didn’t follow you there.  What’s  the problem?

Well, sir, my problem is that everything else on my table, my chart, is an actual money flow and this Depreciation amount that you said was a portion of the money that was actually spent in previous periods isn’t really money coming in or out of the company during my 5-year planning period, is it, sir?

Well, no.  I mean, yes.  I mean, yes and no.

Thank you very much for clarifying that, sir.

Very funny, Detective.  It’s true that Depreciation is well-known to be a “non-cash item” and …

Ok, there it is.  A “non-cash item.”  So Depreciation is kind of a fiction or phantom or imaginary money flow.  Is that right, sir?

Well, I wouldn’t say it that way.

Why not, sir?  Everything else in my chart is a real flow of money into or out of the company during that 5-year timeframe.  The TVA, TOC throughput value added, is made up of Sales money coming in minus the money going out for purchased parts, raw materials, and any other totally-variable units-driven costs (like many outside services).  The Operating Expenses (OE) are all money going out of the company for things like salaries, benefits, heat, lights, supplies, legal services, accounting services, and the like.  If I have a clear picture of the  money flowing in and out of the company during my 5-year period, why would I want to put a “non-cash item” into the picture?  Could you tell me that, sir?

Detective, I think I know what you’re saying, but you have to set aside some money to pay for replacing machines when they wear out.

You know, sir, you’re right.  So I’ll make sure some of that TVA in that chart will be set aside to save money for new machines when the old ones wear out.

Great!  That way, you’ll have the money you need when you need the new machine.

Thank you for that, sir.  That is a very good suggestion.  It’s such a good suggestion that I’m going to tell Mrs. Columbo about it so we can start saving up for when the refrigerator or the hot water heater breaks down someday.  Is that the kind of thing you mean, sir?

Absolutely.  The same principle applies at home or at work.  That’s very good, Detective.

Thank you, sir.  That answers the question about saving money to buy new machines.  That really settles that question.  Yes, that issue’s definitely been handled very very well.

Great.

Yes.  I think I’ll head home and start saving up for our new refrigerator right now.  See you next time, sir.  Thank you again.

No problem, Detective.  My pleasure.  Take care.

[starts to leave.  is almost out the door of the meeting and lecture room.  just as he’s almost gone, he stops, turns, and raises the index finger of his right hand and says]  Excuse me, sir.  Just one more thing …

Yes.  What is it, Detective?

In all the excitement about saving money for new machines in factories and refrigerators and hot water heaters at home, I forgot to ask you what to do with that Depreciation value you calculated?

Well, isn’t that what we just answered?

Well, maybe I misunderstood something, sir, but I thought we answered whether to save some of the money on my 5-year TVA charts for buying new machines in the future.  We didn’t say anything about, “depreciation” there.

I guess you’re right.  But I said you could give Depreciation its own new line item or put it into your OE account.

Yes, but then I asked you why I would ever put a fictional number in with all my real physical money flow numbers and you answered a different question about saving money to buy replacement machines.  You answered, “what is one way to have money for new machines someday?”  Actually, sir, I’m realizing just now that I had thought about that before.  Other ways to get new machines, other than just saving up TVA from ongoing operations, are to get a loan, or get a lease, or get a lease with a buy-out option.  All of those options, when they produced real money flows, would show in my 5-year charts as real money flows, and not be a math fiction from a past real money flow to buy the old machine. So, the thing I still don’t know, sir, is what to do with the “depreciation” in my 5-year planning and decision view.

[thinking a while] Well … maybe … if you’re showing all the real money flows and physical events for the 5-year period … and if you’re setting aside TVA for buying replacement machines … or planning to finance new machines with loans or leases or grants or capital from corporate … and if you’re going to show all those real inflows and real outflows where they actually happen in the 5-year plan … maybe you don’t need to show depreciation, but I’m not sure why.  Everybody knows Depreciation creates cash to buy replacement machines.

Actually, sir, only selling something creates cash for the company unless they are going to get a loan, a grant, or sell shares of stock to stockholders.  On the other hand, your “depreciation” amount can show up in tax returns as a deduction and reduce taxes.  The reduction in taxes is a real money flow and I show that in the taxes line in my plan.  The tax effects of depreciation of assets purchased prior to my planning period are going to be pretty much the same for all the alternative plans I’m considering now.  If I leased the machines I acquired in prior periods, the lease payments will be part of the OE my TVA has to cover.  For future machine leases, lease payments go into OE again.  For future machine purchases, the cash has to come from somewhere (again, saving TVA, loan, grant, equity) and show as a purchase outflow in the plan.  The depreciation will affect taxes and show in the tax line of the plan.
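[aside: a minimal sketch, with made-up figures, of the distinction Columbo is drawing; in the 5-year decision view the machine purchase is a real outflow when it happens, and “depreciation” shows up only through the real tax reduction it causes, never as a money-flow row of its own]

    # made-up figures; real money flows for a machine bought in year 1 of a 5-year plan
    machine_cost, life_years, tax_rate = 100_000.0, 5, 0.30
    annual_depreciation = machine_cost / life_years

    for year in range(1, 6):
        purchase_outflow = -machine_cost if year == 1 else 0.0    # real cash out, once, when it happens
        tax_saving = annual_depreciation * tax_rate               # real cash effect, but only via the tax line
        print(year, purchase_outflow, tax_saving)
    # depreciation itself never appears as a money-flow row; only its tax effect does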

Whew.  Ok, that’s definitely all true.  I can see you don’t need to show “depreciation” on your charts, but I’m still not sure why.  Depreciation shows up on all my reports to corporate, to tax authorities, and to external GAAP reports.

Ah, there’s the problem, sir.  Those are reports and my chart is for decisions.  The reports are about what happened; my chart/table is about what we are considering making happen.  “Reports” have their own purposes, rules, and logic.  So do “decisions.”  When people take accounting figures and procedures created for one purpose and try to use them for other purposes — in the name of “efficiency” or “saving work” or “saving time” — they create a lot of potential confusion for themselves and everybody around them.  Each purpose for accounting — each question, each decision — must be thought through to find the correct financial elements and steps.

You know what, Detective?  You’re right.

Thank you, sir.  I appreciate that.  Hold on.  Did you hear that, sir?  I just said, “I appreciate that.”  Now isn’t that something, sir?  Here we are now just talking about “depreciation” and I find myself talking about the opposite thing, “appreciation.” How about that?  They are opposites, aren’t they, sir?

Well, I guess so, but … Yes … hold on … let me think about this … “appreciate” … increasing value … “you appreciate” … yes, I never thought about it, but our everyday use of the word, “appreciate,” is about the value we place on something just like “appreciation” and “depreciation” are about value of assets.  And, yes, they are opposites.

That’s what I thought.  Depreciation is about valuation, about lowering valuation, right, sir?

xxxxx  some extra stuff to use maybe … just keep here for now  xxxxx

Right, but …

I know factories run their machines and I know they wear out and have to be replaced.  In fact, I think I mentioned that the pump company in the chart will use the TVA it produces to buy new machines in the future when needed.

That’s right.  You can’t ignore the need to replace machines when they wear out.  But, oh, you said you didn’t ignore it.  What was it you said again?

Well, on that chart … detective columbo toc tva decision planning chart

Do you have a crayon, sir?

A what?

A crayon so I can make a simple discounted cash flow (DCF) analysis diagram that you can maybe use to help me understand net present value (NPV) calculations.  I could use a felt marker or dry erase pen, but, I think when it comes to more sophisticated financial discussions, a crayon works best.  ♥,/♥rf.lf.me

Oh, for heaven’s sake … it doesn’t … [rummaging around in a drawer in the desk by the far wall] … let me see … it doesn’t make any difference what kind of pen … i can’t find … wait … ok, here’s your crayon … what’s your point?

My point is, and I think you’d agree, sir, that crayons come in many colors and …

No no no … I meant, what’s your point about dcf and npv?

Oh.  Ok, sir.  Well, the first thing I want to say is that things like DCF and NPV aren’t as important as knowing that what you plan to do is going to produce your desired effect.

I’m not sure what you mean.  Aren’t measurements like ROI, NPV from DCF, and EVA indications of what return the company is getting for the owners on their capital?

Yes and that’s why they’re very good guidelines most of the time.  But, suppose you have to create a plan to save the business from going out of business, or a plan to seize some excellent new competitive position, but it would cause temporary dips in reported ROI, EVA, or NPV, would you decide not to survive or not to do the strategically-correct thing in order to try to maintain those return-on-capital and cost-of-capital measures?

I guess that’s right.

It is right.  So, although the TOC TVA planning and decision view, that emphasizes TVA and shows the real money flows, seems simplistic to some experts who think only in terms of the measurements they are expert in, it’s the natural way that the best minds already think — including the best strategists, entrepreneurs, general managers, CEOs, presidents, and such.  They are the ones who figure out what their organizations can do and get it done.

I never thought about it like that, but, once you say it, it’s obvious.  I’m so used to hearing and thinking that ROI is the ultimate bottom line measure.

It’s a very important measure, but the most important measure is getting the right things done which leads, on average, over time, to superior performance on the ROI, EVA, and NPV too.

So should we bother to discuss discounted cash flow analysis and net present value?

Absolutely.  For one thing, it’s what all the experts are thinking about, so you have to be conversant in it.  But they are also useful analytical views and results to have in mind.  NPV gives one sense for the nature of the magnitudes and timings of cash flows.  IRR gives another interesting view.  But getting the right things done is still the most important “measurement.”  You can also, if you like, for a lot of discussions, leave out the “discounting,” the process of applying the “hurdle rate” that represents the risk-adjusted cost of capital for projects involving investments, and just use the timelines and arrows of money inflows and outflows to give yourself and others a “picture” of your project, like this:

Click on image for larger view

You can divide the “I” line into two lines, Im for materials and Ip&e for plant-and-equipment.  You can also add a fifth line for “Cash”, as in “cash set aside for buying replacement or other new equipment.”

If, after using cause-effect thinking and constraints analysis to get good projections of the size and timing of the money inflows and outflows, and after showing them on a figure like this … stop … because you’ve done the important part of the work … you can see if you can get where you’re trying to go at all … after that, at a much lower level of priority, you can apply the hurdle rate to discount the cash flows to calculate the net present value (NPV) and/or the internal rate of return (IRR) of the cash flows in the life of the project.  But, again, if you’ve figured out your best play for your business, and shown yourself it’s viable financially, then finding what “internal rate of return” gives you npv equal to zero (the percentage rate of return when cash flows aren’t simple and even) is just a “nice to have” “interesting” thing for you and maybe also a necessary view your boss requires.
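[an aside for the programmers in the room — a minimal Python sketch, with made-up yearly flows and a made-up 15% hurdle rate, of the lower-priority arithmetic just described: discount the flows to get NPV, then search for the rate that makes NPV zero to get IRR.]

```python
# A minimal sketch, assuming yearly net money flows and a single hurdle rate.
# The flows list and the 15% rate are hypothetical illustration numbers.

def npv(rate, flows):
    """Net present value: flows[t] is the net money flow at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Bisection search for the rate where NPV crosses zero (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

flows = [-5000, 1200, 1800, 2200, 2400, 2500]   # year 0 outflow, then yearly inflows
print(round(npv(0.15, flows)))                   # NPV at a 15% hurdle rate
print(round(irr(flows) * 100, 1))                # IRR: the rate that makes NPV zero
```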

Interesting.  This is not a view I ever hear.

That’s because the usual measures are such good guidelines most of the time, on average, over time.  But it’s important to learn these things the right way and have the measures serve us vs. constrain us when we get to situations outside the range of validity of having the measures dominate and dictate the action.

Where did you learn this?

you don’t learn it anywhere.  you figure it out.  even if you hear it from me, you still don’t know it until you work it out for yourself as true.  what i learned was “cause and effect”, “range of validity” of all concepts including measurements, and purpose.  you can learn these things in a lot of places.  i learned parts and all of them in different places.  but the most efficient way to gain skill and be influenced in this way is to learn and use eli goldratt’s toc philosophy which is the best and most concise expression of the application of the methods of physics to other areas of life.  it’s about what we are trying to do (goal and measures), what we are changing from and to, and how to get there.  what i just said derives from that.  ,/e

Note 62-a

this was the original Note 62 … it turned into a verbalization that was helpful for remembering things i’d worked out in the 90s about the “depreciation” accounting concept and for figuring out some new things, but it didn’t flow real well … so it became

Note 62-a, a first pass that led to the real Note 62 (the 2nd Detective Columbo skit, the one about the “depreciation” concept) … : ) …

the original note:

Thinking about how to work the depreciation issue and whether and how to get the Detective Columbo approach to work it

[ caution:  skip this if not interested in struggling over whether and how to use Columbo for depreciation points and if not also interested in wandering around getting points out about depreciation from various angles.  i.e., not an orderly presentation of depreciation issues.

hm … wonder if i’m using the right tool for this job?  columbo’s clearly right for when something’s obvious to amateurs but unseen by experts, but this one’s going to require, not just the chart, but stating the options for replacing machines, the different ways past and replacement machines are paid for and valued and “depreciated”, different meanings for the word, “depreciation,” in everyday life, tax accounting, GAAP accounting, reports for corporate parents … the original skit’s setup was a little complex, describing the chart, but the punchline was simple … is it enough?  yes?  so why do more that makes things more work and more confusing? … hm … the simple message is … let’s see … how to organize this … let’s see what’s true about depreciation … first, it’s mostly not needed in decision analyses … one could argue it’s useful to know its effects on taxes (which affects tax flow a little) and on how corporate and GAAP reports look to the bosses, but only that first one is the kind of thing we look at in planning and decision analysis … that’s part of the confusion here … context … one context is financial planning … another is the intra-corporation politics … also there’s reporting vs. deciding … and whether to consider the political effects on reporting when deciding …

oh, right … as i’m figuring it out again, i’m noticing i’m arriving at things i thought and said before … i would start out saying, be aware of how quickly your machines are wearing out, know what your plans are for replacing them (buy new ones, buy used ones, pull a spare out of warehouse, use cash from ongoing operations, get a loan, get a lease with buy option), and show the actual money effects … end of story … that’s the cause-and-effect way … or toc tva way, whatever … then we say, but replacing machines may have nothing to do with wearing out … may just want/need upgrade, closer tolerances, different features … nothing to do with wearing out …

depreciation is the opposite of appreciation … appreciate means to value something at a higher level … depreciate means to value something at a lower level … in everyday language, “i appreciate that,” is actually saying, “i’m placing a high or higher value on that” … so depreciation and appreciation are about value, not necessarily about wearing out … if something in the market changes, and a new machine’s no longer useful, its value drops to next to nothing, but it’s not worn out …

maybe we can use Columbo and the guy in the Ozzfest t-shirt to make those issues work … it starts out pretty well … noticed i’d already used “appreciate” so put that to work …

♥cc.me♥ … the reason npv/dcf uses all those little arrows … shows timing better than a table/chart … related to the I, investment, staying level at 5000 across the 5-year planning/decision view … 62 and 62-a ♥,/♥rf.lf.me …]

Note 62 1/2

About big names who miss plays.

Like the two big names John Caspari mentioned in his essay discussed above.

Like the several generations of accounting, finance, and operations big names who missed the play on it being …

better to start using unit “direct/variable margin analysis”-like “profit margins” in a different way than “direct/variable margin analysis” uses them …

(that’s another way of saying what TOC means when it says, unit “throughput” money flow or “financial throughput” or my “true/throughput/TOC value-added, or TVA,” in a multi-year strategic, financial, and operational planning framework like the TOC TVA Financial Management System for Strategic, Product, and Improvements Planning and Control)

rather than to continue using the after-“fully-allocated product costs” (meaning including direct labor and all overheads, including factory overhead that includes non-cash “depreciation”) type of unit contribution “margins” for manufacturing company decisions.

Like missing the play on having unit sales price, minus unit real costs that really vary with units, to create a practical concept of “profit margin” or “contribution margin” that avoids the unnecessary extra work and potential mistakes of allocating largely-arbitrary levels of not-really-units-sensitive money elements like direct labor costs, factory overhead, and corporate overhead, and avoids putting largely-arbitrary levels of the not-real money item of “depreciation” into every unit contribution margin calculated for a decision.

[i’m trying to verify that depreciation goes into factory overhead and that factory overheads go into “full costs” along with direct labor and other overheads.  i think that’s right, but i’m not sure and not finding it anywhere yet.  ok, here we go at accountingcoach.com:

“Manufacturing overhead (also referred to as factory overhead, factory burden, and manufacturing support costs) refers to indirect factory-related costs that are incurred when a product is manufactured. Along with costs such as direct material and direct labor, the cost of manufacturing overhead must be assigned to each unit produced so that Inventory and Cost of Goods Sold are valued and reported according to generally accepted accounting principles (GAAP).”

Note that they’re doing what they’re doing due to the requirements of external GAAP (and also tax) reporting vs. decisions. The problem comes when companies use the numbers they create for reporting also for decisions.  TOC solves that problem by prescribing a separate set of books for decisions.

“Manufacturing overhead includes such things as the electricity used to operate the factory equipment, depreciation on the factory equipment and building, factory supplies and factory personnel (other than direct labor). How these costs are assigned to products has an impact on the measurement of an individual product’s profitability.”

Note the last sentence.  If overhead rates vary, and they’re arbitrary, individual products will seem more or less profitable even though changing an overhead allocation rate doesn’t change the rate at which an individual product generates cash for the company.]
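Here’s a tiny sketch of that last point, with made-up numbers (nothing from accountingcoach.com or any real company): moving the arbitrary overhead allocation rate changes a product’s reported “full cost” margin, but not the cash per unit the product actually generates.

```python
price = 100.0           # unit sales price (hypothetical)
materials = 40.0        # truly units-variable cost (mostly materials)
direct_labor = 12.0     # direct labor dollars assigned per unit
labor_hours = 2.0       # allocation base: direct labor hours per unit

for overhead_rate in (10.0, 25.0):       # two arbitrary overhead $/labor-hour rates
    full_cost = materials + direct_labor + labor_hours * overhead_rate
    reported_margin = price - full_cost  # moves when the allocation rate moves
    tva_per_unit = price - materials     # cash the unit actually generates: unchanged
    print(f"rate {overhead_rate}: reported margin {reported_margin}, TVA per unit {tva_per_unit}")
```

At the higher rate the product “looks” unprofitable on paper, yet every unit still brings the same real money into the company.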

Those are very big plays for big names to miss.

Hey, every big name in every field doesn’t have to invent every breakthrough in his or her own or everybody else’s field.  It’s enough that they made the contributions they made to get the big name in the first place.  If we later on, with the benefit of hindsight, can see a big name got something fundamentally wrong (i.e., not matching something natural in the environment that was important to the purpose), we can also usually look carefully to see why that assumption or decision or observation made sense at the time.  Even the Ptolemy concept of “sun revolves around the earth” makes complete sense if we just forget that we learn in school that “earth revolves around the sun” and just, with no other information, look at what happens each day.  Sun comes up over there.  Goes down over that other place.  And, in-between, makes this great big half-circle overhead.  Also, a big name doesn’t change from “somebody with an idea” to a “big name” unless a lot of people agree with him or her.  So, when I say, “big names can sometimes be dead wrong and lead entire generations astray,” I’m not being dismissive of big names.  We all stand on the shoulders of giants when it comes to thinking about anything in the modern world.  So Robert Anthony and Warren McFarland, deservedly big names for things they contributed in their fields and in their time, by standing on the shoulders of giants who came before them, didn’t also make the contribution Eli Goldratt, Bob Fox, and the TOC community made by seeing and telling the world that allocations of labor and overhead, on any basis, to form “fully-allocated product costs” for decisions, including pricing decisions, was a bad idea.  No big deal.  They did what they did.  And we did what we did.  Speaking personally, if it hadn’t been for Anthony’s very clear and very practical Essentials of Accounting, and Professor Chuck Christenson’s wise and practical perspectives to go along with it, I wouldn’t have been in position to see, beginning in 1989, that Goldratt and Fox were right about T-I-OE as the right way to roll up the numbers for decisions.  I don’t really know McFarland’s work, but I remember the sense around the school, HBS, that he was respected, oh, and by Bob Hayes, who was definitely a smart, gutsy, and practical big name in the manufacturing ops knowledge area.

Note 62 and 2/3

This is getting to be like the “Building 19, Building 19 1/2, Building 19 3/4” company name for stores somewhere — where was that?  Boston area?  somewhere in the eastern US — and a fun joke.

This is about the controller in the book, The Goal. He’s important because the things Alex Rogo and his pals in Unico were trying to do were right and very profitable and cash-generating for the company, but all the accounting procedures were saying it would be catastrophically wrong.  So the controller knew the numbers were sometimes wrong, was able to see that what Alex was trying to do was right, and helped Alex somehow, mainly, I guess, by supporting his doing it so the excellent results could speak for themselves. … by the way, that word, “controller”, sounds so harsh and negative … it just means the boss of the accountants of a manufacturing company … or the accountant in a very small company … the word comes from “financial controls” needed to keep a manufacturing team from spending too much, spending on the wrong things, doing the wrong things, going out of business, things like that … most of them are nice guys and gals and savvy and practical and wise … not fierce “controlling” personalities … the best ones are practical and savvy and wise and independent-thinking because, if they weren’t, they wouldn’t be able to “see” the physical reality of the cash-affecting good things the accounting numbers sometimes, not always, obscure … but you only need a few wrong decisions to cripple a company, or miss significant opportunities, and demoralize and demotivate people who can see the actions would be right …


——————————————————-
Note 62 3/4 (second pass through the re-thinking about “I”)

Ok, let’s try this again.

This is my second pass through working up new, more concise, faster, more clear, and more comprehensive ways to explain dealing with the “I” money symbol and the various realities it points to in recurring manufacturing company contexts ranging from shop floor to divisional to internal corporate and external tax and shareholder reports.  It builds on my vintage 1998 (from my book about TOC) Detective Columbo skit and chart labelled, “Figure 6.1, The TOC TVA Financial Management System for Strategic, Product, and Improvements Planning and Control.”  It may also draw on my vintage AFewDaysAgo second Detective Columbo skit about the “depreciation” concept.  It will also draw on what I’m starting to remember after the first pass about how, in the 90s, I was discussing the different ways of preparing, formatting, presenting, and using TOC-based financial figures for different purposes in different contexts within the factory, within the manufacturing division’s executive office, at group and corporate levels, and from external shareholder/stockAnalyst perspectives.  I decided to keep the old “Note 62 3/4 (the first pass)” handy below here on the page as I started a second pass in this “Note 62 3/4 (second pass).”

When I started this pass, I was thinking I could maybe, not for sure, but maybe on this second pass, make it crisp and clear like the nice clear, concise, and precise 8 or so statements that came out above on this page for the new Haystack-compatible database system’s Parts and Process Results tables, old Work Orders, and the new Perpetual Schedule-Based Decision Support data set (pSBDSds) main memory object.  There was a lot of churning around and then some revising that gave rise to those sweet little nuggets.  But, right away in the introductory sentence, I thought, hm …, this is probably going to require at least a third and maybe a fourth pass.  Why?  Because I found myself, as I was writing that first sentence, realizing (realizing again … i’ve been here a few times before, in mba school and then several times on my own just thinking it through during my TOC manufacturing and systems era from 1989-1998) realizing again that part of the challenge of getting this as clear, orderly, and concise as it can be is that what I (as in “me” this time, jeez, and pretty much everybody) often just say as “I”, as in ROI, and as in T-I-OE, or even my TVA-I-OE, actually refers to at least two different words (investment and inventory) and to a lot of different realities that are used from different perspectives for different purposes.  It’s like when you think through the issues of what “love” means.  Ever think about how many different realities of experience and expectation are communicated with that one seemingly-simple word?   What?  Inventory, investment, and love all in the same paragraph?  Hey, I love this stuff!  : )

Anyway, so the discussion has no chance of getting orderly, clear, and comprehensive without acknowledging at least some of the more common different meanings of the “I” money symbol.  It can be orderly, concise, and clear for useful specific purposes without creating at least a partial taxonomy and genealogy of the “I” money symbol’s words and meanings, but we’re reaching for something that can flow smoothly up and down and across the enterprise from operations to executive suite.

The Many Meanings of the Money Symbol Called, “I”

I don’t claim to know — or even care to know — all the possible meanings and uses people have for what I’m calling, the money symbol “I” and its two main words, investment (I) and inventory (I). However, I couldn’t help but become aware of several while attending MBA school in 1979-81 and then working with TOC in a lot of manufacturing company situations and thinking a lot about it during 1989-1998.

Do TOC Companies Need This?  Need?  Naah.  But It Would Be a Good Thing to Have Anyway.

The first thing I should say is companies didn’t need the conceptual and explanatory result I’m trying to produce here — or even my 1998 book’s pretty good Detective Columbo’s Figure 6.1, The TOC TVA Financial Management System for Strategic, Product, and Improvements Planning and Control — to use TOC with its general “science of anything” principles, “T-I-OE” and “throughput world scale-of-importance” and various operations management solutions in manufacturing, to get great results for the past 20 years or so.

So what’s the point of this work?  Like my book, it’s one person’s perspective on reminding us in the TOC community of who we are and where we’ve been and where we’re going, plus more “due diligence” for the as-yet unpersuaded about TOC, plus more misunderstanding-clearing and obstacle-clearing and policy/thinking constraint-busting toward having TOC become even more of a universally-accepted global standard.  Hm … universally-accepted global? … ah, words words words … they can both clarify and obscure … And it’s my perception that the only thing keeping TOC from becoming such a standard tool in every company’s and person’s toolkit is words.  I believe the reality of what TOC is, compared to the realities of what we try to do, makes it clearly something useful for everybody.  But I also believe the words people use to speak, write, hear, and listen about TOC, about what they’re trying to do, and about how they’re already doing it obscure the elegant match of TOC as tool for our tasks.  So stumbling into the “universally” vs. “global” phrase was timely, since what we’re doing here is a lot about the different meanings and uses of two words — “investment” and “inventory.”  When I get this to where I want it to be, it won’t so much be something TOC companies needed to succeed with TOC — they’ve succeeded enormously already — but it will help us all, including me, verbalize it all more efficiently for ourselves and also help the persuaded persuade the as-yet unpersuaded about the nature, uses, benefits, and inevitability of TOC as a, well, a tool most people would wish they had known about before if they had only known what it was.  How’s that for a long and winding sentence?

But even this conceptual and explanatory result is not needed, since Eli Goldratt and others in the TOC community already do a great job of explaining and using TOC.  Still, when something gets to be a candidate for being a tool in toolkits everywhere, it’s sometimes useful to have different people’s perspectives on how it all comes together.  So this will be one more of those.

Investment

Ok, that so far was pretty orderly.  But where to start from here?  With ROI?  Return on Investment?  With Eli Goldratt and Bob Fox’s famous simple, but powerful, TOC core examples — the “P & Q” exercises — that show a few of the most important cause-and-effect relationships affecting profit and ROI?

What TOC folks refer to as, “P&Q,” are constraints-based product mix and improvements-focusing exercises that conclude with effects on “ROI” (with no discussion offered or needed for that purpose about the meaning of “I” other than to state what the incremental “investment” assumption is for the reduction in the product processing times on the factory’s constraint resource).  I think it’s fair to say that most of the TOC successes come from having just a few concepts and images in enough people’s minds for a consensus, and, in my view, the “P&Q” exercise would be on that short list.  (I think P&Q is in The Race (1986) and in the Haystack Syndrome (1990). I thought I saw it the other day in the Apics TOC courseware.  It was I think in all the Goldratt Institute’s pre-“full-roadmap”-ThinkingProcess (after around 1991) production, executive decision-making, and Jonah courses in the early 90s.)  As a practical matter, a team in a factory can do just fine and create great results with TOC without what I’m producing here, but I wouldn’t suggest anybody using TOC in a factory get started without, not only doing, but being able to re-construct and teach P&Q.  Not that one has to actually teach it, but, when you can re-derive something for yourself, as if you were teaching it, without asking anybody else or referring to a book, you know you know it well enough to put it to work in your factory.
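For anyone who’d like to re-derive the core of it in code, here’s a minimal generic sketch of the product-mix logic P&Q teaches — rank products by TVA/throughput per minute on the constraint and fill the constraint’s available minutes in that order.  The product names, prices, materials costs, constraint minutes, and demands below are hypothetical, not the published P&Q figures.

```python
# A minimal sketch of constraint-based product-mix ranking (hypothetical numbers).
products = [
    # (name, unit price, unit materials cost, minutes on the constraint, weekly demand)
    ("X", 120.0, 50.0, 20.0, 80),
    ("Y",  95.0, 35.0, 12.0, 120),
]
constraint_minutes = 2400.0

# Priority comes from TVA per constraint minute, not TVA per unit
ranked = sorted(products, key=lambda p: (p[1] - p[2]) / p[3], reverse=True)

total_tva, minutes_left = 0.0, constraint_minutes
for name, price, materials, cmin, demand in ranked:
    units = min(demand, int(minutes_left // cmin))   # build what the market and the constraint allow
    total_tva += units * (price - materials)
    minutes_left -= units * cmin
    print(f"{name}: build {units} units")
print(f"TVA for the week: {total_tva:,.0f}")
```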

How about “Plant and Equipment” as “investment”? Or “Material inventory” as “investment”?  Both as something else called, “assets”?  With “assets” being the opposite of “liabilities”?

How is an “Investment” Different From an “Expense”?

It isn’t really.  And it is.

You’re welcome.

We’ll come back to this.

TOC’s “Inventory” Concept: From “Materials” to “Plant & Equipment” to “All of It”

In my TOC presentations and classes in the 90s, I started out creating little diagrams of T I OE — for throughput (a money item, not physical units), inventory, and operating expense — a lot like the one shown here.  It was pretty close to the way Eli and Bob were teaching “throughput world” “scale of importance” for manufacturing companies.

Click on image to make larger

After a while, I joined Eli and Bob in thinking and speaking of the “I” or “Inventory” in “T-I-OE” as being, not only “materials inventory,” but also factory “plant and equipment.”  I began to draw my TIOE diagrams with four lines — T, Im for materials, Ip&e for plant-and-equipment, OE.  Then to deal with reserving funds for replacing machines in the future, I’d show another line for cash accumulating.  Plants don’t always fund new machines from operating cash flows alone.  Sometimes they use loans or leases or other sources of payment, but adding a “cash” line was the most persuasive way to address the “depreciation” question and issue without taking the main discussion too far for too long down a path that was useful and valid and important, but was off-point when the purpose was brief TOC overview, or less-brief TOC production overview, or in-depth Haystack system scheduling and shop control 4-day implementation education.

Giving TOC’s Throughput 3 Additional Names — all “TVA”

Later, I noticed the increasing popularity in CFO, shareholder, and stock analyst circles of Stern-Stewart’s EVA concept.  EVA stood for, “economic value added.”  But I noticed that, unlike TOC’s “throughput,” it was not the same money flow as the “economic value added” focused on by macro-economists assessing the health of an economy.  Since TOC’s “throughput” was the “true (macro-economist) value-added,” I decided to rename it, “TVA” for “True Value Added” or “TOC Value Added” or “Throughput Value Added.”

So that’s TVA as in “true (macro-economic)” or “toc” or “throughput” value added (i leave it to others to choose one, two, or all three of the names for the T in TVA … or maybe Tom’s value added? … no, that’s funny, but that won’t work at all … not sufficiently generic and self-descriptive … or how about rob’s … what? … not a T?  ok.  how about, w? … what? OK, jeez …) to replace “throughput” or “financial throughput.”

I think Eli and Bob stick with the original “throughput” term for that particular money flow (1) so TOC would have a term with a clear definition that no one else would steal and confuse with alternate definitions and (2) so the rest of the TOC manufacturing management physics can always be built up clearly from the late 80s/early 90s image of the overall company system as a “black box” “money machine” producing “money” called “throughput” from money coming in one side (from sales), less money going out constantly during time periods (OE), less money going out on a units basis (mostly materials), and made possible by having some money just plain tied up inside (inventory).

EVA vs. TVA

Stern-Stewart’s EVA system was strong and well-implemented for what it was, but it was being explained in a company I was working in during 1993-95 as a system that incorporated the good part of TOC’s financial ideas and placed them into a framework that corrected TOC’s mistakes.  Both claims were untrue.

What made descriptions of EVA sound similar to descriptions of TOC — at least to people who didn’t fully-understand TOC, EVA, or both — were (1) its emphasis on growth, (2) its orientation to a “global” (system-wide) vs. “local” (departmental or functional or uncoordinated divisional) perspective and priority system, and (3) the view that it enabled better decisions to be made.

It was true that TOC delivered all those things, but TOC did all three of them quite a bit better than the EVA system for specific reasons.

(1) On the “orientation to growth” issue, EVA’s measurement structure encouraged “growth” in profits after direct labor and after divisional factory and staff overheads.  Focusing on this level of money flow can encourage divisional managers to take the easy non-growth cost-cutting path of laying off direct labor and staff to meet EVA objectives instead of growing the business via marketing, product and process development, sales, and service.  TOC’s logic tree thinking processes, throughput/TVA planning structures, measurement structure, TIOE “throughput world” importance scale, marketing solutions, project management solutions, and constraints utilization techniques provide powerful coherent mutually-reinforcing methods for growth in the money flow that funds direct labor and overheads as well as the money flow EVA emphasizes.  This is true growth orientation.  This encourages steady growth in the underlying business, not just in the profits.  It stresses a balanced approach to growing the money that prevents firing and trying to re-hire skilled production workers and divisional staff people.  Focus on EVA may allow a division to meet EVA objectives while contracting the business and giving rise to layoffs.  Focusing on TVA meets EVA objectives and consistently encourages underlying business growth.  TOC and TVA are the right leadership, the right focus, and the right methods.  EVA isn’t a bad thing, but it just measures one piece of the pie TOC creates.

The TOC community’s views about avoiding direct labor and staff layoffs by emphasizing and achieving growth in TVA sometimes gives rise to criticism that TOC is soft on reducing unnecessary overhead, over-staffing, and other waste.  Not true.  We believe it’s illogical to put a company and its people through expensive hire-and-fire cycles when there’s an option to somehow grow or at least maintain TVA/throughput.  And we believe, that, by using TOC, companies will more often more easily reach strong competitive market positions that enable growing TVA.  But, if a buggy whip company can’t find a way to re-deploy people and other assets to some new business, or if someone has made a mistake and grown staff overhead in an extravagant way, layoff decisions will need to be made.  But that’s layoffs, if necessary, as part of an intelligent thought process, not either hiding behind or being pushed around by some “bottom-line” vs. “TVA margin” financial or measurement plan.

(2) On the “system-wide (global)” vs. “functional-departmental-uncoordinatedDivisional” perspective and priority system issue:  TOC does this much better for both the standalone company and for the three-level division/group/corporate large corporation for the same reason.  Both TOC TVA and EVA provide emphasis on the overall enterprise, but EVA ignores what goes on within the divisions.  EVA makes no comment about the need to eliminate allocations of overhead for product costs, or focusing on constraints, or identifying and clearing policy constraints.  By allowing aggregation of arbitrary overhead allocations to products, EVA makes it difficult or impossible to make flexible 3-level (i.e., divisions, groups, and corporate) planning, capital allocation, measurement, and reporting models that keep TVA I (materials) and I (plant and equipment) and OE and Cash disaggregated for examining “what if” options.  By over-emphasizing — and consuming scarce management marketing and ops strategic entrepreneurial time and mindshare with — theoretical finance issues of Ibbotson-Sinquefield debt/equity-balanced and risk-adjusted capital charge deliberations, and under-emphasizing the entrepreneurial process of placing scarce capital where it’s needed and when it’s needed, EVA calculates elegant theoretical numbers, but cripples the ability of, or at least fails to encourage, the 3 levels of operating executives to work the business TOC logic trees at relevant levels of detail (like individual machines, technology trends and investments, R&D, specific new products, old or new markets, specific customers or classes of customers and their buyer behavior).  ,/jk&dCfo ❤ … It’s not a bad thing to apply modern finance theory, maybe — which is valid and helpful guidance most of the time, on average, over time, maybe.  But the TVA structures and TOC methods, when allowed to work, will produce results in the underlying business for EVA to measure.

(3)  On the issue of “enabling better decisions”:  there are two main reasons the TOC TVA financial system gave rise to better decisions than the circa 93-95 EVA system.  First, all the intra-division reasons mentioned in (1) above that give rise to growing the underlying business, not just measuring its increased profits that could come from cost-cutting instead of growth.  The second reason is TOC’s logic trees and financial methods are used at the corporate, group, and divisional levels for coordinated strategic, operational, and financial planning.  Scarce capital, raised at the corporate vs. divisional level to get better interest rates, is allocated where it makes strong entrepreneurial impact — even if it means lower EVA at various times for various divisions.

The summary so far is, yes, EVA sounds a little like TOC and TVA, but they are quite different.  TOC is a management philosophy, leadership plan, measurements, logical thinking process, and manufacturing solutions.  EVA is a way of calculating “costs of capital” and a way of applying them to divisional net profits.  TOC encourages building the individual businesses in a way that’s supported, coordinated by, and benefits the whole.  EVA just encourages individual divisional profits and measures them with up-to-date financial theoretical capital charges.

So EVA in no way replaces TOC, or makes TOC no longer useful or needed, or does anything TOC doesn’t do because people using TOC, after placing priority on generating TVA, can, if they like, measure the beneficial effects of the higher TVA in all the useful ways including ROI, EVA, DCF and NPV for projects, and so forth.

If a company uses EVA, can they also use TOC TVA?  Yes, the divisions should use TOC TVA to grow the businesses no matter what financial measurements they’re using.

If a company is using TOC and TVA at divisional, group, and corporate levels like Federal-Mogul was in the 90s, do they need an EVA-like system too?

Need?

I’m not sure if a company needs the Ibbotson-Sinquefield capital charge system applied at divisional, group, and corporate net profit levels.  If they’re turning in good numbers, it might not matter if they’re using modern finance theory for capital charges.  If their numbers aren’t good, it may be necessary to play the “well-managed company perception game” that presently, i think, involves talking a good game on risk-adjusted capital costs used internally to allocate capital and assess the efficiency of its use.  My problem is all of this finance talk justification I’m making is feeling a little hollow when the image of doing the right competitive thing with capital is there.  At this point, my hunch is the fancy talk about risk-adjusted capital charges is politically necessary window dressing to impress securities analysts and loan officers who have no entrepreneurial operations experience.  So it’s not required by law, but required as part of the balancing act of credibility in off times and transition times.  If wall street thinks well-managed companies do EVA, then companies should do EVA.  It’s not that tough to add compared to doing all the other things needed to have a strong business.  But that, I think, is the answer to the next question, which is: I’m wondering if a case can be made for working with the undiscounted cash flows and for leaving the 3 levels of TVA I I and OE disaggregated.  Have to think about this, but I think the answer is, yes.

Wow.  That’s interesting.  It might also be right.

What it says is: use TVA I, I, OE, Cash at all 3 corporate levels to show viability of a plan and … at first … stop … know it’s a viable plan … or in TOC strategic planning … know it’s three 3-corporate-level 5-year plans that hedge the likely key uncertainties in the 5 years … and stop … before bothering to overlay ROI, EVA, or any other aggregated measurement … we don’t have to be afraid of using ROI, EVA, or similar things, but we don’t have to place a high priority on them either … except to the extent dealing with them is politically necessary due to bosses, boards of directors, shareholders, security analysts, etc.

Very interesting.  And may also be right.  That’s a big deal.  But consistent with the other verbalizations, discussions, and conclusions on these pages.  ,/e’s.papa

Wrote this next part earlier.  Don’t want to toss or read it right now.  will just use strike-out font to halfway take it out:  The EVA system provided a consistent way to plan for and measure a large number of different kinds of corporate groups and divisions, manage capital flows from corporate to groups and divisions, and apply appropriate capital costs consistent with modern risk-adjusted capital charges.  I didn’t dig into the details of their theory of arriving at their capital charge rates, but it sounded like what I’d learned about Ibbotson-Sinquefield risk-adjusted debt-equity-balanced capital charges in my first-year finance course at Harvard Business School.  I don’t know what the company was doing before Stern-Stewart came in with their EVA system, but I remember being impressed with the fact that this logical basis was being installed throughout the huge corporation as a common language for discussing corporate’s expectations and assessment of group and divisional performance.  I was working at the time, and making quite a bit of progress, at getting the language of TOC made common, not only among general managers and financial controllers of divisions and their corporate CFOs and bosses, but also from shop floor to chairman’s office.

If I’m remembering correctly, “EVA” was a shareholder/owner-oriented concept that highlighted a money flow that was available to shareholders as an “extra” or “excess” return after interest on corporate debt and basic shareholder return were earned.  By contrast, TOC’s “throughput” focuses on increasing a more fundamental money flow that is the “value added” over the raw materials and other units-sensitive costs.  I noticed that TOC’s “contribution margin” or “value added” was more like the macro-economic meaning for “value added” that sees the money difference between raw material costs and sales revenues as money flowing into the economy in some form — wages, salaries, benefits, overhead expenses paid to other firms in the economy, government fees and taxes, and investor interest and dividend returns as the relevant “value added” to focus on.  Since TOC focused on the right “economic value added” and Stern-Stewart’s EVA was kind of an “investors-only” value-added, I decided to start referring to TOC’s financial money flow, “throughput,” using the terms, “true value added” and “TOC value added” and “throughput value added”, all of which substantive and self-explanatory and correct phrases just happened to abbreviate exquisitely as, “TVA.”  I thought it was gorgeous that TOC’s throughput not only solved a lot of problems in decision-making, but, when expressed as “TVA,” it highlighted its match with the macro-economists’ meaning — so TVA was a better match of reality and term than EVA … and “TVA,” compared to “throughput” or even “financial throughput” was much more palatable and attractive and persuasive to the CFO and stock analyst mind, folks who were on my current, future, and pre-requisites logic trees and needed to be on those of every TOC fan who wanted TOC to become even more of a standard tool in everybody’s toolkit around the world.
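To make the difference concrete, here’s a hedged, made-up comparison of the two money flows.  TVA/throughput is sales minus truly-variable (mostly materials) costs; EVA, in its textbook Stern-Stewart form, is after-tax operating profit minus a capital charge.  Every number below is hypothetical.

```python
# Hypothetical figures for one division, one year -- not any real company's numbers.
sales = 10_000_000.0
materials = 4_000_000.0        # truly units-variable costs
oe = 4_500_000.0               # operating expense: wages, salaries, overheads, etc.
tax_rate = 0.30
capital_employed = 6_000_000.0
capital_charge_rate = 0.11     # an assumed risk-adjusted cost of capital

tva = sales - materials                      # the value-added macro-economists look at
nopat = (tva - oe) * (1 - tax_rate)          # after-tax operating profit
eva = nopat - capital_employed * capital_charge_rate

print(f"TVA = {tva:,.0f}")   # 6,000,000: funds wages, overheads, taxes, and owner returns
print(f"EVA = {eva:,.0f}")   # 390,000: only the "extra" return left over for shareholders
```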

Different Categories of Companies, Different Needs

The discussion of EVA at that 3-level large corporation made me realize it’s probably useful to acknowledge and distinguish among at least three types of manufacturing company because — while they share many of the same opportunities, challenges, and solutions —  the issues of numbers of people, numbers of products, types of financing, distances, number of different states or countries with operations, sharing a pool of scarce capital, and private or public ownership can cause some of the same things to be done very differently and give rise to some important differences in what needs to be done.

Category 1.  Large publicly- or privately-owned multi-divisional corporations with shareholder/owners, a “corporate” level with a “corporate staff,” a “groups” level with usually small “group staffs,” and operating “divisions.”

Category 2.  Medium or large publicly- or privately-owned standalone company

Category 3.  Small privately-owned company

The owner of a very small privately-owned manufacturing company — unlike the various people who own and manage a large geographically- and administratively-distributed publicly-owned group of companies — can just know, trust, and be in frequent communication with all the key people, and can just walk around to get a sense for how things are going, and can just walk over to specific places and people to get all the information needed to deal with pretty much any situation.  This owner knows the reality of what’s happening in the factory and she or he and his or her accountant know how the accounting and tax reports prepared for bank lending officers, fellow family and friend owners of the little corporation, the friend and family board of directors, and the tax authorities are reflecting this reality.  That’s knowing and influencing what’s going on and reporting.  When it comes to decisions, they may be small and simple enough to do simple capacity, product, sales, and financial planning “on the back of an envelope” which these days means using a simple spreadsheet program.

There are differences between this very small Category 3 company and the larger and more complex Category 2, but those aren’t the differences on my mind right now.

The main differences on my mind right now, after having just thought again about the 3-level division/group/corporate example in the discussion above about EVA, are those between both Categories 2 and 3 companies and the much larger and much more complex Category 1 company.  Unlike the other two, Category 1 situations have the needs for (1) something called, “consolidations of accounts,” for external GAAP public shareholder reporting, external state-level tax reporting, and federal/national tax reporting — very possibly all three of those using specific rules of several or even many different countries, (2) consolidated 3-level planning, (3) sharing/allocating/prioritizing a scarce pool of corporately-acquired debt and equity capital among the groups and divisions, and (4) probably some other things relevant to this discussion I haven’t remembered yet.

These needs are why I was initially positive about Stern-Stewart’s overall system and smooth implementation services despite the fact that I disagreed with what was being said — not necessarily by Stern-Stewart themselves — about EVA vs. TOC.

It would be relatively easy for Stern-Stewart to modify whatever software they use (not sure if they use their own, other outside software, or guide client internal system development — maybe any of the above depending on the client situation) and do the same impressive job implementing an overall planning, consolidations, and reporting system that builds in the TOC/TVA disaggregated conceptual and computer system for planning and decisions.

Hm … It looks like a well-selected specific situation is emerging to represent a useful important generic case.  I’m realizing as I’m thinking and writing about this, that my respect for the Stern-Stewart company — despite my views about the more narrow issue of EVA as a measurement compared to TVA as a measurement and to TOC overall (there was a lot more to what Stern-Stewart was providing than just the idea and measurement they called, “EVA,” just as there is a lot more to TOC than just the TVA/throughput idea and measurement, although that itself is a lot) — and my working at getting more specific and clear in my own mind on what I liked about them, and what I disagreed on, have taken me into another thing that I knew in the 90s needed to be done for Haystack systems in one of those three needed TOC books, which is, as I was starting to do with my british pal in cincom marketing, spec out the upper-level “what if” module for various types of client situations.  Thinking about changes for stern-stewart, specifically, based on my admittedly incomplete knowledge of how they were implementing EVA in the situation I was close to and in others, is really an entry into the generic thought process of how any manufacturing systems software provider, including software and services systems integrators (like computer sciences CSC, EDS, cap gemini, oracle, the Big 2 or however many accounting firms’ systems spinout firms), or in-house systems makers should make their financial planning and reporting systems haystack compatible.

So, since I always had a good sense of Stern-Stewart, despite the fact that some of the people in one of their client firms were trying (unsuccessfully) to run rough-shod over me politically with wrong perceptions and arguments about EVA, TVA, and TOC, I’ll keep promoting them here the same way I always promoted CA and Cincom and eds (via jim s) and IBM (via Carol P) and SAP (via lynn w) and other packaged and custom systems providers I wanted to see take additional steps toward integrating TOC into their existing and new offerings.

Yeah.  As Karen A would say if she were here, that assignment was a rich learning experience — TOC with — and, unfortunately, sometimes vs. — JIT, TQM, lean, activity analysis, “non-value-added activity” removal, training (on everything except TOC), ABC, a not-bad form of non-cause-and-effect non-constraints-based non-TVA-focused process improvement, and now also EVA.  Perfect place to launch a TOC sig from, right?

Cool.

Conclusions So Far from the Thinking about the “I” Money Symbol, Leading to a Haystack-compatible “What If” Module Specification

So the second pass through thinking about the meanings and uses of the “I” money symbol — representing various realities pointed to by the term, “investment” and “inventory” — produces several useful conclusions.  Some of them repeat and re-emphasize prior points and some are new.

Interestingly, the important conclusions don’t even mention the “I” money symbol, the “inventory” concept, or even the idea of “investment.”

That seemed odd to me at first, but it highlights that it’s important whether a plan involved in a decision is viable in terms of the real money inflows, outflows, and their timing, and that it’s much less important — or sometimes not important at all — what names we give to those money flows.

1.  Having the financial resources to get the right things done to make a company healthy and successful takes priority over how it’s measured and accounted for.

When the right things get done, the money that results makes all the relevant measures meet requirements — if that was possible in the first place.

After I wrote that I realized it was another way of stating TOC’s concept that “the goal of a manufacturing company is to make more money (from sales) now and in the future.”

2.  The TOC way of focusing on cause-and-effect analysis of real money entities in planning decision scenarios (vs. reporting which follows its own rules) is the intrinsically and unavoidably correct approach to financial planning and is superior to any approach that uses financial concepts or definitions of any kind that take focus away from the size and timing of real money outflows and inflows associated with the plan or plans being considered.

3.  The attempt to re-use for decisions the financial concepts, definitions, and data created for purposes of reporting will often lead to having money inflow and outflow values and timing that are incorrect for specific decisions.

4.  Plans that create acceptable, or even attractive, results using concepts, definitions, and data in reporting may still harm the health of the business or cause failure to secure important competitive positions in the industry (examples … building to inventory to artificially inflate reported profits while consuming cash and destroying future flexibility … or increasing profits by excessive cost cutting without increasing TVA with smart marketing, product development, or sales actions).  On the other hand, plans that create strong businesses and secure important competitive industry positions will always give attractive results in the reporting.

5.  Haystack-compatible “What If” Module Specification — That means the “What If” Schedule-Based Decision Support (SBDS) module described in The Haystack Syndrome can be further specified in terms of giving planners views, for the several most important and recurring decision situations, of the real resulting money inflows and outflows and their timing without making distinctions about which money category the inputs come from (i.e., investment vs. expense, loans, corporate capital infusion, grant, inventory, operating expense, salary, benefits, advertising, purchases, and all the others).   In this, the TOC way of dealing with only real money flows and timing is similar to discounted cash flow (DCF) analysis before discount rates are applied to calculate net present value (NPV) or internal rate of return (IRR) values.

5a.  That is really just another way of saying what Haystack Syndrome and TOC have been saying at least since 1990, but in a way that will appear to financial systems users more like what they are accustomed to needing and wanting, and will appear to developers more readily-programmable and more likely to be needed and wanted by their customers.

5b.  We’ll discuss it more, but that statement 5 should be enough for manufacturing systems developers to create the “What If” Schedule-Based Decision Support (SBDS) modules described in The Haystack Syndrome that will give benefits to their clients.

5c.  Having said that, the first issues that come to mind are making judgments on levels of aggregation and timing of the real outgoing and incoming money flows.  In other words, the timing of when parts are ordered, received, and paid for vs. the timing of when sales revenues are collected is also real, but displaying that level of detail for decision planning takes a lot of work and adds no insight.  There’s the criterion for deciding all of them.  We show the level of money aggregation and timing aggregation that gives us a lot of insight and is easy to prepare and present.  I’m guessing the way TOC throughput/TVA is defined does that.  I’ll assume it for the moment.  Seems and feels right.  I’ll leave it as a note to item 5 of the spec.


6.  The “5 Timelines Per Entity” Specification.  A good starting image to develop from has 5 left-to-right five-year timelines — TVA, I(m), I(p&e), OE, and Cumulative Cash — for each entity in the corporation.  That’s one set of 5 timelines for “corporate” as in “parent corporation,” one set for each “group” in the corporation, and one set for each “division” in the corporation.  From there, it’s a decision about the length of the time periods — 5 one-year time periods, or 20 one-quarter time periods, or 60 one-month time periods.  What gets shown in each period, in the associated chart/table row or timeline, is “TVA for period”, “I(materials) change, up or down, in period”, “outflows for additional I(p&e) or inflows from sales of I(p&e)”, “OE for the period”, and “Cumulative Cash from time zero in the plan.”  Provide for toggling between both chart/table displays and timeline/arrows displays.  What’s not to like about that?
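As a starting point for developers, here’s a minimal sketch of item 6 in code.  The entity name, the period count, and all the figures are hypothetical, and the sign conventions (materials increases consume cash, P&E purchase outflows shown as negatives) are my assumptions, not part of the spec.

```python
from dataclasses import dataclass, field

@dataclass
class EntityTimelines:
    """One set of 5 timelines for one entity: corporate, a group, or a division."""
    name: str
    periods: int                                      # e.g. 5 years, 20 quarters, or 60 months
    tva: list = field(default_factory=list)           # TVA for each period
    i_materials: list = field(default_factory=list)   # change in materials I per period (+ = cash tied up)
    i_pe: list = field(default_factory=list)          # P&E money flow per period (- = purchase, + = sale)
    oe: list = field(default_factory=list)            # OE for each period

    def cumulative_cash(self):
        """Cumulative Cash from time zero: running sum of each period's net money flow."""
        cash, out = 0.0, []
        for t in range(self.periods):
            cash += self.tva[t] - self.i_materials[t] + self.i_pe[t] - self.oe[t]
            out.append(cash)
        return out

division = EntityTimelines(
    name="Division A", periods=5,
    tva=[3000, 3400, 3900, 4300, 4700],
    i_materials=[200, 100, 100, 50, 50],
    i_pe=[-1500, 0, 0, -800, 0],
    oe=[2200, 2300, 2400, 2500, 2600],
)
for t, cash in enumerate(division.cumulative_cash(), start=1):
    print(f"period {t}: cumulative cash {cash:,.0f}")
```

A chart/table display and a timeline/arrows display are then just two renderings of the same five rows per entity.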

That’s the basic “what if” product.

The basic “measurement” is not ROI or EVA, but “do we have a plan here that gets us where we need or want to go?”  That’s yes, or no.  Once we have a yes, the system should also calculate corporate ROI and EVA to see what it looks like, just out of curiosity.

Or conduct TOC-style scenario-based strategic planning with logic trees dealing with all factors, and the basic measure is, “ok, if the assumptions in this one of our 3 or more scenarios of what will happen outside our control happen — laws, industry pricing in certain markets, new competitors — does this plan do what we want it to do and is it financially viable?”  Again, yes, or no.  Repeat for however many scenarios are used to find a plan, a series of actions, that keeps the company healthy no matter what reasonable range of things happens in the environment outside the company’s control.

Product mix at division levels is schedule-tested (schedule-based decision support) in either simple or dynamic buffering ways, but that’s the Scheduling Module not the “What If” Module.

Nice to have extras.  Developers can add logic tree support in various ways.  These become obvious if thinking about any planning activity in detail.

I think if my pal, tom mannmeusel, were here, he’d say this nails it.  ,/  20jan2011-10pm-ish, inv sys I started 7-jan around noon, 47+65=112p

#done

what? not very Socratic?  hey, i gave you guys 12 years to get this stuff first … : )♥ … #socraticMethod

xxx   some other thoughts … just keep them here with strikethrough font for now …  The best “global”/company-wide measures (TVA) are right pretty much all the time.  The better “global”/company-wide measures (ROI, EVA, NPV, IRR, and payback period) give good guidance on average and over time, but may discourage or — if a management allows point 1 above to be violated — block doing the right things for survival (on the down side) or comfortably and securely thriving (on the up side).

[hm … this next point is right, but isn’t really on point for what i’m doing right now … since it’s right, i’ll leave it here … Also, while on the subject of measurements, “local”/departmental/productLine measures (machine or individual “hours working setting up or working on parts” divided by “hours on the job,” which is something erroneously called “efficiencies,” or product line contribution margins at various levels) can give insight, but need to be viewed in a cause-and-effect way and at lower importance than “global”/company-wide measures, especially TVA/throughput.]


Note 62 3/4 (first pass through the re-thinking about “I”)

This is my first pass through working up new, more concise, faster, clearer, and more comprehensive ways to explain dealing with the “I” money symbol and the various realities it points to in recurring manufacturing company contexts ranging from shop floor to divisional to internal corporate and external tax and shareholder reports.  I decided to keep it handy here as I started a second pass in “Note 62 3/4 (second pass)” above.

———– —————— —–

[this note starts out ok, but gets tough to follow as it changes from exposition to clusters of drafting/verbalizing/exploring … the usual … needs revising … the upside is i think i made a lot of progress on crystallizing some ideas that are helpful for thinking about and dealing with “I” for investment (plant and equipment, and materials inventories) for both decisions and reporting]

Valuing Assets for “I” or any other reason

The old machine in the book, The Goal, that got brought back into use to help make best use of the bottleneck constraint by off-loading some work, had been “fully-depreciated” on the company’s accounting books long ago.  In other words, on paper, it had zero value.  Yet, when brought back into operation, it helped the company make huge increases in profits and cash flow.

It no longer showed up in the I as in ROI, or on the balance sheet as an “asset.”  But it was making a huge contribution to ROI.

So what is “I”, anyway?  That’s the right question and it has many answers depending on all sorts of factors.

It’s a very interesting topic, valuing assets for “investment” for decision and reporting purposes.  … different issues in different uses … in reporting, for example, there’s matching basis with prior reports (if figures don’t match up with prior reports, managers and other employees can lie and cheat and literally steal from owners by changing “basis” from period to period; that’s one of the reasons, maybe the primary reason, why GAAP and other accounting rules exist.  That’s not a bad thing.  That’s a good thing.  It’s the attempt to take accounting designed to prevent stealing, to help owners and managers see what’s going on, and to serve other reporting purposes, and reuse it for decisions, that causes most of the problems from accounting.)

One thing that’s often helpful is viewing each decision as an incremental case and showing the incremental money outflows and incremental incoming money flows that will actually happen if the decision is made compared to if the decision isn’t made.  That works.
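For instance, a tiny sketch of that incremental view, with invented numbers and a made-up helper name, just to show the shape of it:

```python
# Hedged sketch: the "incremental case" view of a decision is just the
# period-by-period difference in real money flows between "do it" and
# "don't do it".  Numbers and names are invented for illustration.
def incremental_view(cash_if_we_do, cash_if_we_dont):
    return [do - dont for do, dont in zip(cash_if_we_do, cash_if_we_dont)]

# e.g. buying a small machine: cash out now, added TVA in later periods
print(incremental_view([-50, 40, 40, 40], [0, 0, 0, 0]))   # [-50, 40, 40, 40]
```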

However, I’m remembering one of the criticisms (partly incorrect, partly correct) that TOC receives from accounting and finance people who only look at narrow slices of TOC: that TOC is incremental thinking.  So it’s a TOC evaporating cloud.  “incremental thinking” is both good and bad.  how can that be?  same reason it’s sometimes a good thing and sometimes a bad thing to do anything.  range of validity.

when using “incremental cases” to select an overall business plan, there’s the perceived problem of “what’s the base case?”  there’s an answer.  i’ll explain.  but first, like most of the rest of this, since it’s been a while, i have to figure it out again … : )

the answer to the base case problem is easy and it’s this … [later, i came up with a simpler way to think of the base case for planning]  the base case is not making or selling or shipping any products, so sales are zero, unit-derived totally-variable costs are also zero, so TVA is zero.  OE is whatever it is for all the people on the payroll and all other expenditures not units-driven.  don’t get stuck in the modern activity-based analysis trap of some people’s capacity getting consumed on a units basis.  keep it simple (and correct).  if you’re not going to hire them and fire them (i.e., alter expenses) then their part of OE is part of OE.

That just leaves the issue of “I”, “investment”, as in ROI, the issue that’s almost always discussed as if it’s an “oh, of course” and almost always hardly understood at all.  Stop here.  Take a breath.  Get up and walk around.  Stretch a little.  Jumping jacks.  A few push ups.  Ok.

Dealing with the “I” that comes from past investment decisions, in a way that’s logical for forward-looking decisions, is no picnic.  Dealing with I that happens at time zero or in the future in your planning or decision view is easy (just show actual money flows).  But just dealing with time zero and future actual investment money flows raises questions about the percentage you get when you calculate ROI.  Why is the I that comes from investments made before time zero not a picnic?  Because there’s the temptation to re-use already-available figures for “asset values” from the accounting books, in which some assets are fully-depreciated (showing zero value, though they’re used every day), or they could be shown at cost, or at “replacement value”, and the money for them could have been spent recently or a long time ago.  What do we want I to be?

To get the main point about allocation made in my Detective Columbo planning chart — “Figure 6.1, The TOC TVA Financial Management System for Strategic, Product, and Improvements Planning and Control” — I just assumed the 5,000 figure (in 1000s, so that’s $5 million) for all five years.  That gave reasonable percentage ROI and EVA-related numbers.  I didn’t want to assume all fully-depreciated plant and equipment assets, valued at close to zero, leaving only inventories, and giving 349% ROI in my example.  I didn’t want to assume huge over-investment numbers that gave 0.34% ROI.  In effect, what my Detective Columbo chart does is assume all plant and equipment assets plus inventories are properly valued at 5,000.  The plant and equipment assets could be any combination of acquired for free, partially- or fully-depreciated, valued at cost or market or replacement, but, on some basis, reasonably valued at 5,000.

Having said that, need to think about it more.  I’m still not sure yet about that part of I that comes from prior investments.  … I can tell from the way this feels that I don’t yet have the right combination [update: that was before; the revision i just made above nails it now, w/ ,/eg] of starting point, frame on the question, concepts and sequence for dealing with “I” on other than a full incremental (incremental TVA, incr OE, incr NP, and incremental I) basis.  I’m not quite clear yet on the basis of full TVA, full OE, which gives full NP, and then what about I?  full I?
that can be a lot of different things depending on whether assets are accounted for at cost, replacement value, breakup sales value … not likely that one, since “going concern” has to be assumed or nothing makes sense when bringing past I into time zero of the decision picture … well, i haven’t quite nailed it yet, but i’ll leave the ramblings since they will be helpful and encouraging to others who haven’t thought about it yet or nailed it yet either …

ok … i think i got the right way to think about it … no matter what the past investment, be it high or low or medium … no matter how long ago it was made … no matter how it’s accounted for to bring a “correct” (in somebody’s view) “I” into time zero of the planning/decision picture … in other words, no matter what the initial I before new investments … the “base case” will always be (1) having made those investments in the past and walking away from them, which means (2) zero sales, zero parts and raw materials purchases, firing all the people, and spending no money for OE.  So the implicit “base case” that makes a planning scenario an “incremental case” is the “I” made in the past and that’s all.  Zero ROI, no matter what I is.  The “incremental” case is add back all the OE and then add back the sales and material expenditures (and other totally variable costs).  In that case, decreasing OE or increasing TVA/throughput increases ROI no matter what “I” is.
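A small numeric sketch of that conclusion (figures and function names invented for illustration): whatever value is carried for the past “I”, the walk-away base case is zero ROI, and any plan that adds back OE and grows TVA raises ROI.

```python
# Invented figures.  The implicit base case is the past "I" with zero TVA and
# zero OE, so ROI is zero whatever "I" is; the incremental case adds back OE,
# sales, and totally variable costs, and more TVA (or less OE) raises ROI
# no matter what basis is used to value the prior investments.
def roi(tva, oe, i):
    return (tva - oe) / i        # net profit over investment

for past_i in (1000, 5000, 20000):            # prior investments on any valuation basis
    base = roi(tva=0, oe=0, i=past_i)         # walk-away base case: always 0
    plan = roi(tva=6000, oe=4000, i=past_i)   # the plan, as an incremental case
    more_tva = roi(tva=7000, oe=4000, i=past_i)
    print(past_i, base, round(plan, 2), round(more_tva, 2))
```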

and that’s why the TOC community has been able to go for over 20 years gaining more and more fans at all levels of financial sophistication, and more and more successes all over the place in different past-investment situations, without really dealing with or getting hung up on the I part.  No matter what the actual I from past investments is, it’s always good to do all the good things TOC uses to increase TVA for whatever resources are available and whatever the market situation is, whether OE is also decreased or not (sometimes reducing OE is good, if cause-effect analysis says the company can be leaner and better right-sized, but very often an over-focus on reducing OE destroys effective focus on getting nice big increases in TVA/throughput).

So that’s it.  Nailed.  At least the “basic planning scenario as being an incremental case to some base case” issue.

That only leaves getting a faster and more persuasive entry and exit to the …

“relative ROI” vs. “absolute ROI” — hm … in a way, that last set of points was what we could call “relative ROI” (which is encouraging increases in TVA and Net Profit on any level of “I”, since higher ROI is better than lower ROI, no matter what the “right” level of “I” is) … meanwhile, some of the incorrect criticism of TOC financial concepts is that TOC doesn’t deal with what we might call “absolute ROI” (the perspective of planning for “adequate” or “target” or “desired” or “required” “returns” on “capital”).  every company using TOC has these among their objectives and measurements.  when these are too low, they appear as “undesirable effects” on current reality logic trees, and they always appear among the “desirable effects” on higher-level future reality logic trees as plan elements.  they also appear on prerequisite and transition implementation logic trees.  so these “return on capital” measures are like any other factors in short- and long-term strategic, market, product, and operational planning, measurement, and control using TOC … the Detective Columbo-style presentation of plan financial effects focuses on real money flows, the effects of which can be viewed through any of the internal or external corporate reporting ROI/EVA/other, GAAP, or tax reporting lenses … the basic idea is to use all aspects of TOC to create a healthy business with growing TVA … usually, if the business is made healthy, and if TVA is growing and constraints are being used well, measures like ROI, EVA, and such look real good too …

next issue …

why all this talk about base case and incremental case?  especially since it draws criticism from some finance experts about toc having too much of an incremental focus (it doesn’t … toc strategic planning and the tva financial system view (detective columbo chart above and skit in my book) are as comprehensive and long-term as any approach … but that’s just a counter-claim, not yet proof in a form that persuades the opposition … i have a hunch the resolution is to show that all planning and decisions are “incremental” … it’s just the “base case” and “incremental cases” that differ for each type of planning and decision, and the size of the increments varies from small to medium to the difference between not being in business and being in business with some plan [wrote this after what follows … in other words, what was a hunch when i started this indented comment is now a conclusion … great!])  afterthought … as i was writing “incremental” over and over again, knowing intuitively, from a lot of use of the idea, what it means and why it’s right to think in those terms, it occurred to me that it might not be so obvious to everybody … in fact, i wasn’t sure why i was so sure it was right … thinking it over a bit … if there’s a “decision” to be made, that means, by definition, there’s more than one way to go, and that there are two different streams of positive and negative effects arising from the different ways, right?  otherwise, what’s to “decide”?  in a case where there’s a basic plan or “base case” which is, “don’t do anything different than we’ve been doing”, the decision, the “incremental case,” is the case that creates the big or small “increment” of differences … here we find the words increment and incremental are used two different ways … often “increment” and “incremental” are used to mean a small difference vs. a larger difference … but, in this decision context, the “increment” is the “difference” between the two decisions, which can be small or big …

so, in these terms, there are several types of decisions.  there’s (1) “change nothing (base case) vs. change something (incremental case)” decisions, for example “keep doing what we’re doing” vs. “add a machine, some people, a product line, and some new customers”; or (2) a choice between two different changes, which is one base case and two incremental cases, for example, “keep doing like we’re doing” vs. “add new product X” or “add new product Y”; or (3) the decision whether to approve or accept or act on a basic business plan (where … potential confusion on use of words, as always, in this case “base” and “basic”, but context should make it clear) … the decision whether to approve a basic business plan or not, which could be viewed as a base case of “just walk away from our prior investment and no longer buy materials, hire people, pay expenses, and sell products” vs. “do this plan.” … getting closer …
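here’s a quick sketch of decision type (2), one base case and two incremental cases, with invented figures, just to show how the “increment” is simply the difference between cases:

```python
# Invented figures: one base case, two incremental cases (decision type 2).
base_case  = {"tva": 5000, "oe": 4000}   # keep doing what we're doing
add_prod_x = {"tva": 5600, "oe": 4100}   # incremental case 1
add_prod_y = {"tva": 5400, "oe": 4000}   # incremental case 2

def increment(case, base):
    """The 'increment' is just the difference between the two decisions."""
    return {k: case[k] - base[k] for k in base}

for name, case in (("add product X", add_prod_x), ("add product Y", add_prod_y)):
    d = increment(case, base_case)
    print(name, "delta TVA:", d["tva"], "delta OE:", d["oe"], "delta NP:", d["tva"] - d["oe"])
```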

when people criticize for using incremental thinking, the other problems they think they are pointing to are things like not generating enough cash, short term and long term, to cover overhead, and to somehow get money to replace worn out or obsoleted plant and equipment (that’s what the depreciation concept is supposed to, not solve, but help with), or to pay for other investments that were made to get into the business (that’s what the amortization concept is for, for things like paying big bucks to own a brand name, just a name and reputation and customer loyalty asset, not a physical asset, or to buy a copyright, again, an idea, a law, a right, not a physical asset, but, in either case, if the business plan doesn’t generate enough cash, …)  … so, if we demonstrate we can and do deal with these issues and objections in our TOC TVA I OE – based process (we can and do), we’ll know we’re not missing something important …

dealing with the “covering overhead” objection is easy.  the first detective columbo skit does that.  done deal.  if we have to deal with group and corporate overhead too, well, that just goes in a line item next to divisional OE to get covered by TVA (without allocating to products, of course)
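a minimal sketch of that line-item treatment (invented numbers; group/corporate overhead sits beside divisional OE and gets covered by TVA, never allocated to products):

```python
# Invented numbers: group/corporate overhead as its own line item next to
# divisional OE, covered by TVA, never allocated down to products.
division_tva       = 6000
division_oe        = 3500
group_corporate_oh = 1200   # separate line item, not spread across products

print(division_tva - division_oe - group_corporate_oh)   # net profit: 1300
```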

the second, depreciation, is handled on this page

the last problem … [this part isn’t really coming together and may need to come out ] the problem of “I”, investment other than incremental investment money flow in the incremental case, is the one that solves nicely for decisions, but doesn’t satisfy all the objections … that’s partly because some things just aren’t possible and no amount of accounting mumbo jumbo, even very sophisticated and learned expert accounting mumbo jumbo, will help … accounting won’t help you and me fly (without an airplane) … accounting won’t help if somebody grossly overpaid for a business or new equipment or a brand or a copyright that can’t be used to generate sales … hm … is this the right angle to start with? …

a lot of this has to do with the “going concern concept” that’s necessary to have accounting that makes any sense at all … accounting does make sense, by the way … the toc community isn’t against all accounting, or even all cost accounting, but against illogical uses of accounting that lead to unnecessary bad effects … the thing about accounting is that, for every decision, every report, every question and answer, the organization of the financial facts can be, and usually is, different … people who want to reuse one accounting element for multiple purposes (some can be reused, some can’t) don’t want to accept that, but every smart accountant, controller, cfo, and security analyst knows this … the views an employee inherits when joining a company are supposedly thought through and kept relevant by the thought process of accounting …

incidentally, harvard business school doesn’t teach rules of accounting … it teaches purposes of accounting and the concepts used to attempt to get accounting procedures and rules to match up with purposes … some schools teach what the rules are … others teach principles, since the underlying reality can change and require new applications of principles to accomplish the purpose … that’s both internal (management accounting) and financial accounting (external, GAAP) …  the case studies at harvard let the student see when a very precise-looking accounting design is wrong for what a business has become … it once gave good info but, after something changed, no longer … the cases invited re-applying unchanging principles to different circumstances to have meaningful accounting again … what they missed, in the 80s, was the blind spot created by allocation of overheads in standard cost systems …

this is wandering a bit … not really … these are the issues … just first verbalizing/drafting … there’s just a lot there to find and organize … wonder if, at this point, a reasonably concise statement is possible … let’s try this …

npv and dcf and irr handle some purposes … roi is good for some … ignoring them and removing allocations and dealing only with incremental future physical money flows is good for most things we do for most practical decisions … reporting, however, is another story, a LOT of other stories … and most of them are not what manufacturing people need to deal with except for perspective …

zzz – note for beth: but i had some wrong opinions about the fortune cookie guy ❤ … rac would like the hole in the floor idea at nuggets intl : ) ❤ …

xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

…………..  Sorting out the Notes on this page …………………….

Notes 50-52 – Some observations and some timeline stuff about Just In Time arising from verbalizing about what I called, “The Great TOC vs. JIT Showdown in the Old Midwest,” an example from 1991 of how, in the 90s, people who didn’t have the time or business agenda to understand the results of early projects got confused about, and/or deliberately created confusion about, the inherent superiority of the TOC vs. the JIT way of managing a factory.

while i’m here … i don’t think i finished that story to say how it turned out … what happened was the word went around that the factory implementing JIT “won” — not because JIT didn’t have all its built-in computer-phobic, over-simplification, inflexibility, and over-capacity drawbacks — but because the pre-release version of the TOC Haystack system software presented too many systems modelling, systems integration, and systems implementation issues for the local factory staff to handle on their own, even with some education and consulting help.  ITT AC Pump and Rockwell/Meritor/Newark were an entirely different situation in terms of corporate, group, and local IT resources.  The need for the additional IT support at the time was partly because of the state of development of the TOC Haystack software and partly due to the lack of database system interface support at the time (i.e., the specs developed on these three blog pages).  There was also some getting stuck waiting through long run-times for what should have been quicker iterative decision processes.  For example, a DRUM rescheduling for a batch move, batch split, setup consolidation, etc. in the Exploit (step 1) phase of scheduling.  Or in “subordination,” after settling on a DRUM batch sequence for a constraint that was best for the company at that time.  Some of this was probably due in part to the speed and power of the memory and cpu at the time, 1991, a different world in performance of computer systems than today.  I don’t remember, but data modelling decisions can affect these things.  It was also probably due to the early stage of the software, and maybe also due to Eli Goldratt being stubborn about whether to allow the user to make judgments on certain things, like whether to run a full subordination pass before declaring the second obvious next constraint.  Not sure.  But that was then.  Everything’s different now.  Usually, not always, when Eli was stubborn, there was some principle, maybe already verbalized, maybe not yet verbalized, that was there as valuable to hold onto to keep things anchored in Mother Nature.  Or it was a negative branch or cloud that still needed to be more finely sliced.  In this case, I think my little essay on these pages somewhere about “Haystack system process monitors” resolves all those problems and opportunities. ,/frankfortPlanner&tom&stubborneli.vornlocker,garys

Note 53 – Became a well-deserved tribute to Bob Vornlocker and Gary Smith and Gene Makl, the guys who made the first successful Haystack system implementation happen in April 1991 at ITT AC Pump.

Note 54 – Mother Nature.

Note 55 – Father Time.  Tribute to Peter Langford.

Notes 56-58 – Some vintage 1990s TOC email discussions and an essay posted by my pal, John Caspari, on his website.  They led to thinking about “full costing” in relation to pricing decisions which, in turn, raised issues that led to the rest of the notes that followed on the page, Notes 59-62.

Note 59 – 6 Homework assignments for all TOC people – 3 on allocation, 3 on variance analysis

Note 59-a – Note to Note 59.  Seeking a concise way to make a few points related to the homework assignments in Note 59.  Turned into the internationally well-known and acclaimed multi-year case study of the Nuggets Incorporated manufacturing company.

Note 59-b – Note to Note 59-a.  Digression to remember/reconstruct/figureOutAgain how wrong old-style “direct labor basis” allocation for “full costing” and planning worked, and new-style (and also wrong) activity-based costing (ABC) “activity basis” allocation for “full costing” and planning worked, vs. how TOC (not wrong) does it right.

“Right” meaning TOC companies (1) do not try to use the same numbers for both logical internal planning decisions and for legally-required external GAAP and tax reporting, since external reports have their own logic and rules that conflict with the logic of internal decision-making, (2) use no allocations of OE to products to form allocation-based product costs for decisions, and (3) use, unless otherwise required, simple direct labor allocations for required external reporting, i.e., no activity-based costing (ABC) at all.  This as opposed to trying to use either direct labor-based or activity-based allocations for product costs for both external reporting and internal decisions.
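As a hedged sketch of what point (3) can look like in practice (my own illustrative figures and function names, not anything from the notes above): a plain direct-labor-basis allocation kept only for required external reporting, alongside the TVA figure actually used for internal decisions, with no OE allocated to products.

```python
# Illustrative figures only.  A simple direct-labor-basis allocation kept
# solely for required external reporting, alongside the TVA figure actually
# used for internal decisions (no OE allocated to products).
overhead_pool      = 100_000.0
total_dl_hours     = 2_000.0
oh_rate_per_dl_hr  = overhead_pool / total_dl_hours     # reporting-only rate

def reported_full_cost(material, dl_hours, dl_rate=30.0):
    # external-reporting number only; never used for internal decisions
    return material + dl_hours * dl_rate + dl_hours * oh_rate_per_dl_hr

def tva_per_unit(price, totally_variable_cost):
    # the number used for internal decisions
    return price - totally_variable_cost

print(reported_full_cost(material=40.0, dl_hours=0.5))          # 80.0, for GAAP/tax reports
print(tva_per_unit(price=120.0, totally_variable_cost=45.0))    # 75.0, for decisions
```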

Note 60 – Analysis of end-of-period “variance analysis.”  I didn’t get around to finishing this.  The main point I had in mind to re-discover with examples was that there’s a real and useful part of “variance analysis,” and an unreal and wasteful and unnecessarily troublesome part, and that the bad part that arises at the end of the accounting/planning period comes mostly from the bad decision made at the beginning of the period to use allocation-based product costs in the first place.

Note 61 – The inside story often told (that might also be true) about how two kinds of cost accounting — (1) “allocation-based full product cost accounting” for internal decisions and (2) the infamous “inventory profits” effect made possible by GAAP external reporting rules for cost-based inventory valuation — combined to cause the “hollowing out” and implosion of the once-great International Harvester manufacturing company.

Note 62 – 2nd Detective Columbo skit – the “depreciation” concept

[jan 26 8:17pm]

xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Jan 17, 2011 – 9:36 pm – Time for a new page (although The Return of Detective Columbo note 62 above just keeps wanting to grow even after the new page appeared).  Ladies and gentlemen, Inventory Systems III.  [that was fine, but, as of 1/26 12:14 am, virtually all of the work was still going into this page, mainly into Note 59-a that became the exciting and dynamic continuing saga, The Nuggets Incorporated Case Study]
