Artificial Intelligence (AI), Simulations, and Experts…from a LinkedIn conversation

Artificial Intelligence, Simulations, and Experts

I’m trying to get my head around how these all go together. For the sake of conversation, I’m posting a few statements (which may or may not be true) to get the conversation going. Any thoughts? (There are also ‘expert systems’ to consider.)

An artificially intelligent system is based on the knowledge of experts up to the time the program is created. Programs take a long time to write.
An AI that takes longer to rewrite (modify) than the underlying system takes to change will never be optimal. Experts can modify the system because (a toy sketch follows the numbered list below):

1) Experts have a much broader range of knowledge and skill sets than an AI.
2) Experts never cease to learn how to improve systems even if the system doesn’t change. AI doesn’t learn.
3) Experts quickly become aware when the system changes and when what they’re doing needs to be modified; not so with AI.
4) Experts use theory and experience to quickly modify rules and algorithms in practice; not so with AI.
5) Experts are the source of changing rules. The complete information and context are rarely transferred into AI.
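
To make this concrete, here is a toy Python sketch (entirely hypothetical, not based on any real product) of a rule-based system whose knowledge is frozen at creation time; when the real-world system changes, only a human expert can update the rules:

# A toy rule-based "expert system" (hypothetical example).
# Its rules were captured from experts when the program was written;
# nothing is learned afterward, so a human must modify it by hand.

RULES_WRITTEN = "2012-01-15"  # knowledge cutoff: nothing learned after this date

OR_SCHEDULING_RULES = {       # expert estimates of case length, in minutes
    "cataract": 30,
    "appendectomy": 60,
    "hip_replacement": 120,
}

def estimate_case_minutes(procedure):
    """Return the expert-derived estimate, or fail on anything unforeseen."""
    try:
        return OR_SCHEDULING_RULES[procedure]
    except KeyError:
        # The program cannot hypothesize a new rule; an expert must add one.
        raise ValueError(f"no rule for {procedure!r}; rules unchanged since {RULES_WRITTEN}")

print(estimate_case_minutes("appendectomy"))  # 60
# estimate_case_minutes("robotic_prostatectomy")  # ValueError: no rule exists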

A) Simulation is sometimes passed off as AI and has at least the same insufficiencies. Real AI systems are extremely difficult to create.
B) Simulation should be used as a tool, especially to help experts improve systems. (corollary: use AI as a tool)
C) Simulation ideally can be used in context, during real-time activities, by experts to clarify options and improve processes (a minimal simulation sketch follows this list).
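
As a taste of (B) and (C), here is a minimal Monte Carlo sketch of simulation as an expert’s tool; every number in it (the lognormal case lengths, the 25-minute turnover, the 8-hour day) is an illustrative assumption, not real OR data:

import random

random.seed(1)

def simulated_or_day(n_cases=3):
    # Expert-supplied guess: lognormal case lengths averaging ~95 minutes.
    cases = [random.lognormvariate(4.5, 0.35) for _ in range(n_cases)]
    turnover = 25 * (n_cases - 1)  # assumed minutes between cases
    return sum(cases) + turnover   # total room time, in minutes

days = [simulated_or_day() for _ in range(10_000)]
overrun = sum(day > 480 for day in days) / len(days)
print(f"Estimated chance the room runs past 8 hours: {overrun:.1%}")

An expert can rerun this under different assumptions (a fourth case, shorter turnovers) to clarify options, which is exactly the tool usage (B) describes.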

Statistics help experts make decisions.
Experts decide which context is appropriate for a statistic.
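
A tiny sketch of that last point, with made-up case lengths: the mean and the median of the same data answer different questions, and it takes an expert to decide which one fits the decision at hand:

import statistics

case_minutes = [35, 40, 42, 45, 48, 50, 55, 60, 240]  # one outlier case

print(statistics.mean(case_minutes))    # ~68.3: useful for booking total block time
print(statistics.median(case_minutes))  # 48: useful for describing a "typical" case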

Conclusion:
The better the expert, the more useful statistics and simulation become for improving systems.

7 days ago

15 comments

Arie Versluis • Brian,
This is an impressive and interesting range of statements.

Let me start with your conclusion. It gave me insight into my own limitations: I did not manage to develop a logical line of reasoning that leads from your statements to your conclusion 😦
When I look at your conclusion as a statement, I would like to make the following comments:
– I know a lot of (medical and other) experts who have only a very limited understanding of statistics. For these experts, statistics have hardly any added value.
– A good simulation requires the application of a good deal of statistics and reliable data (statistical distributions). Not every expert in a given field has the knowledge or experience to evaluate these conditions. The risk then is that a beautiful presentation of results will be interpreted as the truth.
– Experts may very well know how systems might be improved. In my experience, this is different from being capable of actually implementing those improvements.

My reasoning is not always (or should I say, mostly not) based on logical thinking alone, but is for a large part subconscious. At this level I have a feeling for what you want to say in your conclusion, and I can fully agree with that.

Artificial Intelligence does not offer the capability of subconscious thinking. A lot of experts can and do think intuitively or subconsciously (an experienced doctor who sees a patient knows what the problem is at first sight) and will then use formalized knowledge to verify this first impression.

Each of your statements deserves more attention and comment, but that might bring me over the upper limit on the number of characters allowed in a comment 🙂
If you would like to hear these comments, please let me know.

Brian Gregory, MD, MBA • Thanks Arie. Yes, I’d like to hear your musings on the matter. Please email them to brian@ortimes.org, or post a few more comments (others may be interested).

7 days ago

Wayne Fischer • Well, Brian, I’ve been checking in on your post and am not surprised that only one person has replied so far. 🙂

Based on my experience, and much published science, you give waayyy too much credit to the “experts.” Any system of import today is much too complicated for an expert, using his education, experience, intuition – whatever – to comprehend all its interactions, non-linearities, and dimensions…and most systems are “self-adapting” to some extent. I need only point to the complexity (chaos?) of healthcare delivery systems to validate that point. Research has shown that the best human minds can only comprehend, at most, 7 variables!

I disagree with your premise that systems change faster (or that experts can change them faster) than we can model / simulate / optimize them – the opposite has been thoroughly demonstrated and published many times over in many diverse areas. Humans resist change – we all know that – even in the face of overwhelming evidence of the need…it usually takes a crisis (that’s why the phrase “burning platform” arose).

Artificial intelligence was oversold early on, but expert systems *did* have many great successes back in the late 80s and early 90s (research DuPont’s use).

My contention is diametrically opposite yours: The *only* hope of understanding and significantly improving our systems *is* by modeling and simulation…*but* with the experts working “hand-in-glove” with the modelers. 🙂

4 days ago

Brian Gregory, MD, MBA • Thanks, Wayne.

I’m of the opinion that my statements are correct in certain fields/disciplines, and not in others. I was hoping for some disagreement and examples; what took you so long?

We are not diametrically opposed. I’ve spent a lot of time creating models and simulations which I use as tools to understand processes and improve the models. It’s that old ‘iterative’ thing that I talked about once.

I’ve also spent considerable time designing and creating methods of seeing the important interrelationships in context. It’s difficult. Ya need to fill the model with good data (your expertise); but you also need to know what data, and which statistic from that data, is useful, which implies theories and specialized concepts (both the expert and the statistician need input on this).

So…an expert can be a modeler, or at least should be involved with the modeling. We agree on that, as we agree that ‘expert’ systems (and I’ll include ‘expert tools’ as a similar concept) are great things.

We also live in slightly different worlds. Mine (the OR) is filled with terrible OR schedulers, lots of physicians who don’t understand basic statistics, insufficient data collection for lots of process improvement, and politics that decide how the processes of the OR run.

You know how sophisticated clinicians can be with the data (think pie charts). One could claim that hospitals are slow to change because the ‘experts’ are lacking in expertise in many areas, and hence the modelers can’t model without good input and algorithms. (ever hear clinicians discuss finance and the stock market?) Otherwise, why don’t we have great AI systems that run hospitals by now?

There has been a bit more discussion, but it’s been by direct email. Some other people are trying to get a more holistic feeling of how these statistics, simulation, experts, and AI models interact…and how their significance should be weighed in different situations.

As for the human mind holding 7 variables…that may relate more to the conscious than the subconscious mind. Billions of synapses and trillions of path combinations for neural pulses are in our little brains, and intuition may actually be the result of a lot more than 7 variables.

4 days ago

Brian Gregory, MD, MBA • I guess what I’m saying is that all of these need to work together…what can be done, and in which circumstances, to achieve that? That’s the future.

4 days ago

Brian Gregory, MD, MBA • Just got back from Yahoo home with a few statements/questions for comment:

AI can be great at data mining and running pre-conceived simulations based on that data.

AI can deduce which of the parameters it has captured influence which of the others, and create equations to show those relationships and the covariance of any relationships (a sketch follows).
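
Here is a sketch of what that screening might look like (synthetic data; the variable names and the planted relationship are assumptions for illustration): compute a correlation matrix over the captured parameters and let a human examine any strong off-diagonal entry:

import numpy as np

rng = np.random.default_rng(0)
n = 500
temperature = rng.normal(70, 5, n)                  # unrelated parameter
staffing = rng.normal(10, 2, n)
delays = 20 - 0.8 * staffing + rng.normal(0, 1, n)  # planted relationship

print(np.round(np.corrcoef([temperature, staffing, delays]), 2))
# The strong staffing/delays entry stands out; temperature shows ~0.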

Could AI have created the Theory of Relativity?
Supposedly, data had to be collected, and the means to collect that data conceived and created, to support the theory. There would have been no previous data or simulation routine for an AI to use to create E = mc².

Could AI have created the equation for electricity, Ohm’s law: V = IR?
I suppose that, given good collected data on voltages, currents, and resistances (along with lots of other data that could have been candidates for the equation ex ante), an AI working through data mining could eventually come up with an equation relating just those three: V, I, R (a sketch of that kind of mining follows). How would it have run simulations to support its newly created equation?
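
As a concrete (and entirely synthetic) illustration of that mining step, the sketch below fits log V against log I, log R, and a distractor variable; the fitted exponents come out near 1, 1, and 0, recovering V ≈ I·R. Whether the AI could then design the experiments to test the equation is the open question:

import numpy as np

rng = np.random.default_rng(42)
n = 1000
current = rng.uniform(0.1, 5.0, n)       # I, amperes
resistance = rng.uniform(1.0, 100.0, n)  # R, ohms
ambient = rng.uniform(15, 30, n)         # a distractor to be ruled out
voltage = current * resistance * np.exp(rng.normal(0, 0.01, n))  # Ohm's law + noise

X = np.column_stack([np.log(current), np.log(resistance), np.log(ambient), np.ones(n)])
exponents, *_ = np.linalg.lstsq(X, np.log(voltage), rcond=None)
print(np.round(exponents, 3))  # ~[1. 1. 0. 0.]: voltage = current * resistance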

What if AI’s initial simulation routine is wrong? Can AI create its own simulation routine that makes sense?
Do you need to ask the right questions before your simulation gets the optimal results?

Corollary:
If a system does not improve to the expected degree after the use of AI and simulations, are you asking the wrong questions? Using the wrong simulation paradigm/algorithm?

Does that mean that you should reassess your simulation and which data you’ve collected for the data mining and analysis?
(you don’t need failed AI to reevaluate your simulation suppositions)

3 days ago

Wayne Fischer • Brian, part of your reply to me supports my contention about the abilities of “experts:”

“We also live in slightly different worlds. Mine (the OR) is filled with terrible OR schedulers, lots of physicians who don’t understand basic statistics, insufficient data collection for lots of process improvement, and politics that decide how the processes of the OR run.

“You know how sophisticated clinicians can be with the data (think pie charts). One could claim that hospitals are slow to change because the ‘experts’ are lacking in expertise in many areas, and hence the modelers can’t model without good input and algorithms. (ever hear clinicians discuss finance and the stock market?) Otherwise, why don’t we have great AI systems that run hospitals by now?”

And I disagree with your claim about intuition:

“Billions of synapses and trillions of path combinations for neural pulses are in our little brains, and intuition may actually be the result of a lot more than 7 variables.”

In my experience, precisely because the human mind cannot understand the many interconnections and interactions of any system of complexity, “intuition” gets it wrong. Many, many times I’ve worked with the experts who argued for a certain course of action, and after we generated appropriate data, built and validated a model, we found they were totally wrong.

As for AI, I’m not quite sure what you’re including in that phrase. Much can be accomplished with straightforward modeling paradigms such as Discrete-Event Simulation, System Dynamics, and Agent-Based Modeling…even Statistical Process Control charts have brought a level of understanding to systems that the “experts” did not comprehend.

2 days ago

Brian Gregory, MD, MBA • Wayne,

We’re approaching ‘experts’ in different ways. I think of an expert as an hypothesis generator.

Let’s look at this in terms of black boxes and computer programming.

Box A:
A computer program that has been designed to have five possible solutions to any problem. It can ‘hypothesize’ one of those approaches to explaining or solving any problem or situation that it encounters.

Box 1:
Neanderthal man from the past who believed that any event not immediately understood was caused by fairies, leprechauns, gods or goddesses.

Box B:
Watson, from Jeopardy!, which has terabytes of data and an hypothesis-generating engine (whatever that is), so that it can create questions and beat the reigning human winners on that TV show.

Box 2:
Sherlock Holmes. He notices everything, runs tests, and is incredibly analytical and logical.

Box A will obviously give correct answers to only a few questions (a broken clock has the right time twice a day).

Neanderthal man will also have difficulty thinking outside the box — magical creatures aren’t the answer to everything.

Watson was specifically programmed for Jeopardy and did extremely well answering those types of questions, but its hypothesis generator and terabytes of data were not focused on running an OR, anesthesia, or surgical decisions. Watson won the contest, but had lost trial contests. The margin between Watson’s score and the contestants’ was no greater than the margins between those contestants and many of their adversaries in prior matches.

Sherlock Holmes is never beaten (well…maybe Moriarty was his match).

My point is that a talented ‘expert’ is a great hypothesis generator within a specific field. The broader the general knowledge, and the greater the specific knowledge, the better the hypothesis.

Computers can work very well with the premises (the bases for hypotheses) according to the logic given them.

Statistics programs have great logic built on statistical premises (proofs). But their work is mainly to support hypotheses created by other sources (a possible exception being data mining).

An ‘expert’ (in my definition) is a great hypothesis generator who seeks validation through logical consistency of theory, validation through simulation, and then validation through reality.

That last one -reality- is a tough one. Once again, iteration. If the expert’s intuition (hypothesis) is not validated by reality, it could be a problem with the simulation…which could be a problem with the simulator’s logic or data supplied…which could be a problem with the raw observations. However, the erroneous output in the simulation could be very helpful in determining the nature of the faulty analysis. This brings us to control charts.

Control charts are great! They’re at the interface of human hypothesis and statistics. The general guidelines with control charts are to let the ‘human’ realize what is amiss and hypothesize (from experience or whatever abilities) to diagnose and fix the problem.

Control charts can be simple (the number of dents in a piece of sheet metal) or tailored to a very specific detail of a complicated process that an expert knows is the cause of further problems down the line, even though not immediately apparent (a variation in the temperature of a reaction, the amount of coke in steel production, etc.).
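
For the dents example, a minimal c-chart sketch (the counts are made up): the arithmetic flags the odd sheet, but it is the human who hypothesizes why:

import math

dents_per_sheet = [3, 2, 4, 3, 5, 2, 3, 4, 12, 3, 2, 4]

c_bar = sum(dents_per_sheet) / len(dents_per_sheet)  # center line
ucl = c_bar + 3 * math.sqrt(c_bar)                   # upper control limit
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))         # lower control limit

for sheet, count in enumerate(dents_per_sheet, start=1):
    flag = "  <-- out of control: over to the expert" if not lcl <= count <= ucl else ""
    print(f"sheet {sheet:2d}: {count} dents{flag}")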

An ‘expert’ is never absolutely sure of anything. An expert is a great adapter to information.

Computers and simulations, however, can be misinterpreted by people as having absolutely correct answers to everything (not the fault of the computer).

2 days ago

Brian Gregory, MD, MBA • clarification:
A wise expert can have strong opinions, but is never 100% certain.

2 days ago

Robert Gordon • Lost my comment! The AI that runs LinkedIn comments is not expert enough to control for my stray clicks. Anyway, with great precision I had said roughly the following:
I read Brian and Wayne as being in more than substantial agreement, disagreeing from habit.

Human expert knowledge (EK) is essentially open, while AI is essentially closed.
Knowledge is necessary but not sufficient for operational production of outcomes.
Operational EK (OEK) can be (typically is) improved in and by being reduced to AI.
AI is not necessarily an alternative to EK — OAI is often prosthetic to OEK.

1 day ago

Arie Versluis • @Brian,
I’d like to comment on your clarification of an expert. I prefer to rephrase it:
– an expert can have strong opinions (not necessarily the right ones)
– a wise expert always has an open mind to continuously learn more about her/his field of expertise and is able to explain her/his knowledge in understandable words to a layman

1 day ago

Brian Gregory, MD, MBA • @Arie
And of course, having a strong opinion by no means imbues someone with expert status (unless we define it as such). What makes an expert? I’m not going to touch that question with a 10-foot pole.

1 day ago

Brian Gregory, MD, MBA • Thank you everyone, for helping me get my head around how all these (AI, experts, simulation) go together. I now have a stronger (and different) opinion on these matters, but by no means claim to be an expert.

About Brian D Gregory MD, MBA

Board Certified Anesthesiologist for 30 years. TOC design and implementation for 30 years. MBA from U of Georgia '90: Finance, Data Management, Risk Management. Practiced in multiple US states and in Saudi Arabia at KFSH&RC and KFMC. Taught residents in two locations. Worked with CRNAs for 20 years.
This entry was posted in experts, healthcare reform, simulation.
