AI and the future of work

December 14, 2023 | By Professor Huaxia Rui

Last year, the advent of ChatGPT raised new questions about what Artificial Intelligence (AI) means for human labor. Workers who once felt secure in their jobs began wondering if they would soon go the way of the telegraph operator or the carriage driver. 

In a working paper, my co-authors and I create a visual framework to think about the evolving relationship between AI and jobs. We then use the launch of ChatGPT as a shock to test an idea we call the inflection point conjecture.

A conceptual framework

Before diving in, let’s address three common misunderstandings. 

First, intelligence is not the same as consciousness. While we can define human or artificial intelligence in various job contexts, the same cannot be said for consciousness. In fact, whether consciousness even exists remains debatable. 

Second, there are really two forms of human intelligence: one based on deduction and the other based on induction, much like System 2 (slow thinking) and System 1 (fast thinking) in psychologist Daniel Kahneman’s terms. We can think of deduction as a causal inference process based on logic and premises, and induction as a computational process of achieving generalization from data under certain distribution assumptions. Hence, we need to distinguish between Statistical AI and Causal AI, where the former, better known as machine learning, obtains knowledge by detecting statistical regularities in data. Statistical AI gained momentum late last century, thanks to significant progress in statistical and computational learning theories and, of course, to the dramatic increase in computing power and the availability of vast quantities of data. Current AI technologies are largely based on Statistical AI. Despite its limitations in reasoning, Statistical AI has enjoyed enormous success over the past decade or so, and it will most likely be the form of AI that revolutionizes the way we live and work in the near future.

Third, current AI technologies are task-specific, not task-generic. Artificial general intelligence (AGI) that can learn any task is probably still decades away, although some have argued that GPT-4’s capabilities show some early signs of AGI. 

We limit our discussions to task-specific Statistical AI and will refer to it as AI from now on.

The power of AI for a given task depends on four factors.

Task learnability—how difficult it is for an AI to learn to complete a task as well as a human worker does. From the perspective of an AI, a task is essentially a function mapping task inputs to desirable task outputs or, more generally, to a distribution of task outputs. The learnability of the task is determined by how complex this mapping is and how difficult it is to learn it from data using computational algorithms. While some tasks are highly learnable because they are so routine, others may require vast amounts of data and/or computational resources for the learning to succeed. In fact, there may even exist tasks that are simply not learnable no matter how much data we have. As a theoretical example, consider the practical impossibility of learning the private key in a public-key cryptosystem, even though one can generate an arbitrary number of labeled instances, i.e., pairs of plaintext and encrypted messages.
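
To make the cryptosystem example concrete, here is a minimal Python sketch using textbook RSA parameters (far too small to be secure, chosen purely for illustration). Labeled instances are arbitrarily cheap to generate, yet the mapping an AI would need to learn, the private exponent, remains out of reach at realistic key sizes:

```python
# Toy RSA illustration: labeled data is abundant, but the task is unlearnable
# in practice. Parameters are the classic textbook example, not a real key.
import random

p, q = 61, 53          # toy primes
n, e = p * q, 17       # public key (n, e) = (3233, 17)
d = 2753               # private exponent: the "function" a learner would need

def encrypt(m: int) -> int:
    """The task's input -> output mapping, observable in unlimited quantity."""
    return pow(m, e, n)

# Generate as many labeled instances (plaintext, ciphertext) as we like...
pairs = [(m, encrypt(m)) for m in random.sample(range(2, n), 10)]
print(pairs)

# ...yet with realistic key sizes (e.g., 2048-bit n), no volume of such pairs
# makes recovering d computationally feasible: task learnability is effectively nil.
```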

We can break a task’s learnability down into its statistical complexity S_f and its computational complexity C_f. Visually, we may represent a task as a point on a task plane whose two coordinates are the statistical and computational complexities of the task. Plotting AI performance (e.g., relative to human performance) for all tasks in a three-dimensional space, which we refer to as the task intelligence space, we obtain the current intelligence surface, or CIS for short, which represents the overall intelligence level of current AI technologies. The top left panel of Figure 1 illustrates this concept.

Figure 1
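
For readers without the figure at hand, the short numpy/matplotlib sketch below renders a toy CIS of my own devising. The exponential surface shape is purely an assumption, chosen only so that AI performance falls as either complexity rises; nothing in the paper specifies this functional form:

```python
# Illustrative task intelligence space: each task is a point (S_f, C_f) on the
# task plane, and the CIS assigns an AI performance level to every point.
import numpy as np
import matplotlib.pyplot as plt

s = np.linspace(0, 1, 50)              # statistical complexity axis
c = np.linspace(0, 1, 50)              # computational complexity axis
S, C = np.meshgrid(s, c)
CIS = np.exp(-2 * S) * np.exp(-2 * C)  # toy surface: harder tasks, lower performance

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(S, C, CIS, alpha=0.6)
ax.set_xlabel("statistical complexity $S_f$")
ax.set_ylabel("computational complexity $C_f$")
ax.set_zlabel("AI performance (CIS)")
plt.show()
```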

The two sources of task learnability imply two types of resources needed for AI to successfully learn the task, which lead us to the next two factors.

Data availability—The more data points available to train an AI, the higher the CIS. Whether it is data about driving conditions and vehicle control to train an autonomous vehicle, or documents in different languages to train a translation device, the availability of sufficient amounts of labeled data is of paramount importance for AI to approximate human intelligence. This may seem obvious given the two types of resources required to train AI, but its significance in practice can still be strikingly large. For example, the ImageNet project, launched in 2009 and containing more than 14 million annotated images across more than 20,000 categories, is of historical importance in the development of AI, especially for vision tasks; Dr. Fei-Fei Li, the founder of ImageNet, is recognized as the godmother of AI at least in part for establishing it. Because the importance of data availability for a task depends on the task’s degree of statistical complexity, as illustrated in the top right panel of Figure 1, we can also understand the significance of ImageNet for vision tasks by noting the high statistical complexity of image data.

Computation speed—The faster the computation, the higher the CIS. Similarly, the importance of computation speed for a task depends on the task’s degree of computational complexity, as shown in the bottom left panel of Figure 1. The recent rise of the graphics processing unit, or GPU, demonstrates the importance of this factor.

Learning techniques—Unlike the first factor, which is an inherent property of a task, or the second and third factors, which are resources, this factor is all about the actual learning, and it is where unexpected progress is made thanks to human ingenuity. It encompasses a variety of techniques, which can be broadly categorized into two types: a better hypothesis class or a better learning algorithm. The successes of convolutional neural networks for computer vision and of the transformer architecture for natural language processing are examples of better hypothesis classes; regularization and normalization techniques are examples of better learning algorithms. If there is an occupation that will never be replaced by task-specific Statistical AI, we bet on the researchers and engineers who innovate in learning techniques. The bottom right panel of Figure 1 illustrates the impact of improvements in learning techniques, whose magnitude is not necessarily related to task learnability.
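
The two levers show up in even the simplest toolkit. The sketch below is a toy scikit-learn example of my own rather than anything from the paper: it first swaps in a richer hypothesis class, then adjusts the learning algorithm (here, regularization strength) within a fixed class:

```python
# Two ways learning techniques improve performance, per the article's dichotomy:
# a better hypothesis class vs. a better learning algorithm.
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=1000, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Lever 1: a richer hypothesis class (nonlinear trees vs. a linear boundary).
linear = LogisticRegression().fit(X_tr, y_tr)
forest = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Lever 2: a change to the learning algorithm within the same linear class,
# here stronger regularization (smaller C).
regularized = LogisticRegression(C=0.01).fit(X_tr, y_tr)

for name, model in [("linear", linear), ("forest", forest), ("regularized", regularized)]:
    print(name, round(model.score(X_te, y_te), 3))
```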

In summary, we can understand AI performance through the lens of four factors, as illustrated in Figure 2.
 

Figure 2

For a given task, whether AI performance is satisfactory depends on what we mean by satisfactory. To make this concrete, imagine another surface, referred to as the minimal intelligence surface, which represents the minimal level of AI performance that we humans consider satisfactory. If the CIS is below the minimal intelligence surface on a task, AI performance on that task is not yet good enough and the task remains a human task. But if the CIS is above the minimal intelligence surface on a task, the task can be left to AI.

Three phases of the AI-jobs relationship

We consider an occupation as a set of tasks. Depending on the relative position of the CIS and the minimal intelligence surface, we can play out three different scenarios.

Phase 1: Decoupled

This is the phase in which human workers do not engage with AI in doing their jobs. Graphically, the CIS is below the minimal intelligence surface on the region corresponding to the occupation’s task set, as illustrated in the left panel of Figure 3, where the occupation is represented by six red dots. None of the tasks can yet be satisfactorily completed by AI. This phase will likely last a long time for occupations with data availability issues.

Figure 3


Phase 2: Honeymoon

This is the phase in which human workers and AI benefit from each other. Graphically, the CIS is above the minimal intelligence surface on some tasks of an occupation but below it on others. In other words, these jobs still have to be done by human workers, but AI can help by satisfactorily completing some of the required tasks. In turn, by working side by side with human workers, the AI benefits from the new data those workers generate. In the left panel of Figure 3, we illustrate this phase by representing the occupation with six dots: three green dots for the tasks an AI can do, and three red dots for the tasks only a human can do. Human workers in such an occupation will use AI to complement their work, benefiting from the productivity boost that comes from offloading some tasks. Ironically, this may also accelerate their own replacement.

Phase 3: Substitution

In this phase, AI can perform as well as an average human worker but at a much smaller, even negligible, marginal cost. Graphically, the CIS is completely above the minimal intelligence surface on the region corresponding to the occupation’s task set. In the left panel of Figure 3, we illustrate this phase by representing the occupation with only green dots. At this point, the occupation is at risk of becoming obsolete: because the marginal cost of AI is often negligible compared to that of humans, it is more efficient for these jobs to be completed by AI rather than by humans.
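
Reduced to its simplest form, the three phases follow from task-by-task comparisons of the two surfaces. The sketch below is a minimal illustration of that logic, with made-up CIS and threshold scores rather than anything estimated in the paper:

```python
# Phase logic: compare current AI performance (CIS) against the satisfaction
# threshold (minimal intelligence surface) for every task of an occupation.
def occupation_phase(tasks: dict[str, tuple[float, float]]) -> str:
    """tasks maps task name -> (cis, mis); a task is AI-capable when cis >= mis."""
    ai_capable = [cis >= mis for cis, mis in tasks.values()]
    if all(ai_capable):
        return "substitution"
    if any(ai_capable):
        return "honeymoon"
    return "decoupled"

# Illustrative scores only; real CIS/MIS values are not observable directly.
translation = {"draft": (0.90, 0.80), "edit": (0.85, 0.80), "localize": (0.82, 0.80)}
web_dev = {"boilerplate": (0.90, 0.80), "architecture": (0.50, 0.80), "client_specs": (0.40, 0.80)}

print(occupation_phase(translation))  # substitution
print(occupation_phase(web_dev))      # honeymoon
```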

While the minimal intelligence surface is largely static, the CIS shifts upward over time: even though task learnability is an inherent task property, the other three factors progress over time, improving AI performance. Hence, we can envision most occupations, initially decoupled from AI, gradually entering the honeymoon phase and, for many, eventually moving into the substitution phase. On the other hand, because AI adoption takes time and different organizations have different levels of AI proficiency, the same occupation can simultaneously be in different phases across organizations or regions. We illustrate this point in the right panel of Figure 3.

The Inflection Point

Based on the conceptual framework, we further build and analyze an economic model to show the existence of an inflection point for each occupation. Before AI performance crosses the inflection point, human workers always benefit from improvements in AI; after the inflection point, human workers become worse off whenever AI gets better. This model insight offers a way to test our thinking using data. Consider the occupations of translation and web development. Existing evidence suggests that AI has likely crossed the inflection point for translation, but not for web development. Based on the inflection point conjecture, we hypothesized that the launch of ChatGPT has likely benefited web developers but hurt translators. These effects should be discernible in data because the launch of ChatGPT by OpenAI a year ago significantly shocked the CIS, affecting many occupations. Indeed, anecdotal evidence and our own experiences suggest that ChatGPT has increased AI performance for translation and for programming in general. There are even academic discussions of whether ChatGPT, especially the version powered by GPT-4, has shown early signs of AGI, which is OpenAI’s stated mission.

To test this, my co-authors and I conducted an empirical study of how the ChatGPT launch affected translators and web developers on a large online freelance platform. Consistent with our hypotheses, we find that translators are negatively affected by the launch in terms of both the number of accepted jobs and the earnings from those jobs. In contrast, web developers are positively affected by the same shock.
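
A natural way to formalize such a before-and-after comparison across two occupations is a difference-in-differences regression. The sketch below is my own illustration of that general approach, not the paper’s actual specification; the data file and column names are hypothetical placeholders:

```python
# Hedged difference-in-differences sketch: translators vs. web developers,
# before and after the ChatGPT launch (November 30, 2022).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("freelancer_monthly.csv")  # hypothetical panel: one row per worker-month
df["post"] = (pd.to_datetime(df["month"]) >= "2022-12-01").astype(int)
df["translator"] = (df["occupation"] == "translation").astype(int)

# The translator:post interaction captures the differential effect of the
# launch on translators relative to web developers.
model = smf.ols("accepted_jobs ~ translator * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["worker_id"]}
)
print(model.summary())
```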

By nature, some occupations will be slower to enter the substitution phase. 

Occupations that require a high level of emotional intelligence will be slower to enter the substitution phase. At a daycare center, for example, machines may replace human caregivers in changing diapers and preparing bottles, but they will be poor at replicating human empathy and compassion. Humans are born with a neural network that can quickly learn to detect and react to human emotions; that learning probably began tens of millions of years ago and has become ingrained in our hardware. Machines, in contrast, lack that long evolutionary past and must learn from scratch, if they can learn at all. At a more fundamental level, this may be rooted in the computational complexity of learning to “feel.”

Occupations that require unexpected or unusual thinking will also be slower to enter the substitution phase, or even the honeymoon phase. Humans sometimes come up with original ideas seemingly out of nowhere, without following any pattern. What is more intriguing is that we may not be able to explain how we arrived at an idea. While fascinating for humans, this poses significant challenges for AI because there simply isn’t enough data to learn from. To exaggerate a bit, there is only one Mozart, not one Mozart a year.

What’s next

The relationship between AI and humans is already generating heated public debate because of its profound implications and its potential to disrupt the fabric of our society. At this moment, I still believe there is a future for human workers, not only because of the many limitations of current AI technologies, but also because of our limited understanding of ourselves. Until the moment we finally understand what it means to be human and the nature of the human spark, we have a role to play in the cosmic drama.

Huaxia Rui

Huaxia Rui is the Xerox Professor of Computers & Information Systems at Simon Business School. 

Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.  

 


Generative AI, competition, and antitrust

July 19, 2023 | By Jeanine Miklós-Thal
These days, everyone is talking about AI, especially generative AI, which can create text or images. Discussions range from concerns about AI causing human extinction to more immediate questions about the potentially disruptive effects of generative AI on higher education.

As a competition economist, I ask myself whether competition in the industries that produce AI is healthy and working in ways that serve consumers and society.

The generative AI industry consists of several layers. So-called foundation models form the upstream layer. A foundation model is a large machine learning model that is trained on broad data at scale and can be adapted to many different downstream tasks; GPT-3.5 is one example. The downstream layer of the industry consists of AI applications for specific tasks. These applications arise in a wide range of industries, including healthcare, energy, finance, education, social media, law, and agriculture. Examples of applications built upon foundation models include the ChatGPT chatbot and GitHub Copilot, which helps developers write software code.

The upstream layer of the industry is currently dominated by two players: OpenAI, in partnership with Microsoft, and Google DeepMind. While Microsoft and Alphabet/Google have been giants of the tech industry for many years, OpenAI was founded in 2015 as a start-up backed by tech heavyweights like Elon Musk and Sam Altman. Given the small number of large players, the AI foundation model market can currently be considered highly concentrated. There are good economic reasons for this high level of concentration: developing a foundation model requires immense amounts of data, extensive cloud computing infrastructure, and an army of data engineers. Only a select few firms have the resources needed to build foundation models, and replicating the required investments may be socially inefficient. It should also be noted that the three main players in cloud computing—AWS, Microsoft Azure, and Google Cloud—have a strategic advantage in foundation models, given the importance of cloud infrastructure for training and deploying large-scale AI models.

The downstream layer of the industry consists of applications and tools tailored to specific tasks. Some applications, like ChatGPT, are developed by the foundation model owners themselves; others by tech start-ups or by non-tech firms seeking to improve efficiency or solve problems in traditional industries. This is an exciting and dynamic layer, with plenty of innovation and new-firm entry.

Is the market concentration in foundation models something to worry about?

One worry is that the owners of foundation models may exploit their market power in the classic way, by setting high prices. For instance, the licensing fees charged to downstream application developers may be higher than they would be in a more competitive market, which could lead to fewer applications being developed and higher final prices charged to end buyers. Market power would slow down the development and adoption of AI applications in this case.

Another worry is that the owners of foundation models may exclude or discriminate against downstream firms viewed as (potential) competitors in certain applications, with the goal of extending their market power in foundation models into other markets. Antitrust regulators should be on the lookout for contracts that are aimed at leveraging market power in the upstream market to monopolize downstream application markets, which is an illegal practice under existing antitrust laws.

Finally, market power is also likely to influence the direction of innovation, as emphasized by Daron Acemoglu and Simon Johnson in their recent book “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.” This would be a source of worry if market power leads firms to focus less on developing socially desirable applications, e.g., in healthcare or energy, and more on applications that may harm society, e.g., applications that spread misinformation or social media applications with high addiction potential. A related and important question is whether the applications being developed will replace or augment human labor, and how market power in AI models plays into this.

What about competition in the downstream applications market?

The availability of foundation models has the potential to facilitate entry and thereby promote competition among application developers. Consider an entrepreneur who wants to offer a service that helps grooms and brides write their wedding vows. Prior to the availability of foundation models for generative AI, the entrepreneur would have had to build their own machine learning model, which would have required significant investments in data, computing, and engineering talent. Now that a foundation model like GPT is available, the entrepreneur can build on an out-of-the-box solution, which makes entry significantly easier and less costly. And indeed, several competing firms (notably ToastWiz and Joy) have begun to offer AI-assisted wedding vow writing tools over the past year.
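
To see how thin the entry barrier has become, here is a minimal sketch of such a downstream application built on the OpenAI API. The function, prompt, and model choice are my own illustrative assumptions; how ToastWiz or Joy actually implement their tools is not public:

```python
# A downstream entrant building on a foundation model instead of training its
# own: a toy wedding-vow helper on the OpenAI Python SDK (v1-style client).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_vows(partner_name: str, details: str) -> str:
    """Hypothetical helper: turn a few personal details into draft vows."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You help people write heartfelt wedding vows."},
            {"role": "user", "content": f"Write vows for {partner_name}. Details: {details}"},
        ],
    )
    return response.choices[0].message.content

print(draft_vows("Alex", "We met hiking; we share an inside joke about trail mix."))
```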

Generative AI may also foster new competition in markets that have been dominated by a single firm for many years. For instance, Microsoft may challenge Google’s long-standing dominance in online search (and, with it, Google’s leadership in online advertising) thanks to the integration of ChatGPT into Bing. Google’s dominance in search may also be challenged by new entrants like You.com, a search engine built on AI that was founded in 2020. It remains to be seen whether one of these search engines will replace Google as the industry leader, whether we will witness sustained competition among multiple search engines, or whether Google will maintain its leadership position. Or perhaps online search as we currently know it will be replaced by something different altogether.

In summary, although there are economic reasons for the current market concentration in AI foundation models, this concentration raises several legitimate worries. At the same time, the availability of foundation models has the potential to facilitate entry and foster competition in AI applications across a vast range of old and new industries. Importantly, for these benefits to be fully realized, third-party application developers must be given access to foundation models.

From an antitrust policy perspective, I think the priority of regulators should be to ensure that competition in newly emerging AI-related markets is based on the merits of the products and services provided. Firms should not be able to leverage their market power in existing markets to obtain power in newly emerging markets. Let me conclude by saying that I think that antitrust policy is only a small part of the puzzle here and that a more complete suite of policy tools will likely be needed to address some of the societal issues raised by AI, such as the spread of misinformation.

Note: It is important to acknowledge the inherent difficulty of accurately forecasting the future in technology markets, particularly in the rapidly evolving field of AI. The dynamics of competition, market concentration, and the potential impacts of AI innovation are subject to ongoing changes and complexities, making it challenging to predict precise outcomes and implications.

Jeanine Miklós-Thal

Jeanine Miklós-Thal is a professor in the Economics & Management and Marketing groups at Simon Business School. Her research spans industrial organization, marketing, and personnel economics. 


Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.

 


Ravindra Mantena

Clinical Professor
Janice M. and Joseph T. Willett Professor of Business Administration for Teaching and Service
Area(s) of Expertise
Computers and Information Systems
Bio

Professor Mantena currently serves as the Faculty Director for the MBA programs at Simon. In this role, he oversees Simon's Full-time, Professional, and Executive MBA programs. He teaches various analytics and digital strategy courses in these programs. His research interests are in the economics of digital and information-rich products. Prior to his academic career, Mantena worked as a sales manager for a consumer goods multinational firm and founded an aquaculture company in India.

Courses
Data Analytics
Business Modeling
Introduction to Business Analytics
Managerial Data Analysis
Managerial Decision Analysis
Managing Digital Products & Platforms
Probability & Descriptive Analytics
Research Interests
His research explores how the increasing digitization of products and services alters competition, strategy and market structure. He studies issues of pricing, product design and strategy for digital products and services. In addition, he also has research interests in measuring decision performance, revenue management and information economics.
Teaching Interests

Quantitative Modeling, Business Analytics, Management of Information Technology and Digital Product Strategy

Publications
Impact of Product and Platform Level Sampling on the Sale of Online Video Courses
Market Share Contracts in B2B Procurement Settings with Heterogeneous User Preferences. Production and Operations Management, 31(3), 2022, 1290-1308.
Reflections on Teaching Online. SSRN.
Leadership Training in an MBA Program Using Peer-Led Team Learning. American Journal of Business Education (AJBE), 6(2), 2013.
Co-opetition Between Differentiated Platforms in Two-Sided Markets. Journal of Management Information Systems, 29(2), 2012.
Literature survey: Mathematical models in the analysis of durable goods with emphasis on information systems and operations management issues. Decision Support Systems, 53(2), 2012.
Competition and Strategic Partnership between Intermediary Platforms in the Presence of Heterogeneous Technologies. Institute of Electrical and Electronics Engineers (IEEE), 2012.
Literature Survey: Mathematical Models in the Analysis of Durable Goods with Applications to IS Research. Institute of Electrical and Electronics Engineers (IEEE), 2011.
Platform-based information goods: The economics of exclusivity. Decision Support Systems, 50(1), 2010.
CIST 2009: Conference on Information Systems and Technology, 2009.
Diagnosing decision quality. Decision Support Systems, 45(1), 2008.
Exclusive Licensing in the Video Gaming Industry, 2007.
Converging Digital Technologies: An Opportunity or a Threat? AMCIS 2002 Proceedings, 2002.
Market Expansion or Margin Erosion: The Double-Edged Sword of Digital Convergence. ICIS 2002 Proceedings, 2002.
On technology markets that tip: Increasing returns, competition, and discontinuous shifts in consumer valuation, 1999.
3-333C Carol Simon Hall
585.275.1079

Mitchell Lovett

Senior Associate Dean, Education & Innovation
Benjamin Forman Professor of Marketing
Area(s) of Expertise
Marketing
Bio

Professor Lovett is the Senior Associate Dean of Education and Innovation. He is also a leading scholar and teacher as the Benjamin Forman Professor of Marketing. He joined the Simon Business School in 2008 after earning his PhD in marketing from Duke University. In his administrative role, he has been instrumental in launching the AI Initiative, a cross-disciplinary effort to integrate AI into Simon’s business education. He also helped develop the Online Masters in Business Analytics and Applied AI, a cutting-edge program that prepares students for the rapidly evolving data- and AI-driven economy.

His research interests span a wide range of topics in marketing, such as advertising, branding, word-of-mouth, political marketing, consumer and firm learning, retailing, and conjoint analysis. He applies and develops empirical methods to study marketing phenomena and to inform marketing decisions. His research has been published in top journals and has garnered recognitions such as the Marketing Science Institute’s Young Scholars and Scholars distinctions and a William F. O’Dell Award finalist nomination for long-term impact. His research has also attracted national media attention, with citations in outlets such as the New York Times, Forbes, and Ad Age.

He is frequently invited to speak at academic and industry conferences and events. He also advises PhD students and is an award-winning teacher whose courses include marketing research, marketing strategy, analytics design and applications, advertising strategy, consumer behavior, and PhD seminars in quantitative marketing.

Courses
Advanced Marketing Strategy
Core Research Topics In Quantitative Marketing
Analytics Design & Application
Research Interests

Professor Lovett's research interests include quantitative marketing, retail strategy, targeted advertising, advertising content and schedule choices, online and offline word-of-mouth, branding, social media listening, and consumer learning. One stream of his research focuses on applying and developing empirical methods for political marketing. Current projects in this stream study the dynamics behind why candidates go negative in their political advertising, how candidates can improve the targeting of political ads, and the role of advertising versus social media in influencing voter sentiment. A second stream examines entertainment products and how consumers learn about them as they decide whether to continue engaging. Another current stream examines how advertising and brand characteristics influence word-of-mouth online and offline, and how these two channels differ. Professor Lovett's research has been published in Marketing Science and the Journal of Marketing Research, has received research grants and awards, including the Marketing Science Institute and Institute for the Study of Business Markets' Research Grant Silver Medalist Award, and has garnered national media attention in publications such as Ad Age and Marketing News.

Teaching Interests

Professor Lovett has taught Advanced Marketing Strategy, Marketing Research, Marketing Strategy, Advertising Strategy, Consumer Behavior, and PhD Seminars in Quantitative Marketing.

Publications
Learning to set prices. Journal of Marketing Research, 59(2), 2022.
Disentangling the Effects of Ad Tone on Voter Turnout and Candidate Choice in Presidential Elections. Management Science, 2021.
Empirical Research on Political Marketing: A Selected Review. Consumer Needs and Solutions, 6(3), 2019.
Can Your Advertising Really Buy Earned Impressions? The Effect of Brand Advertising on Word of Mouth. Quantitative Marketing and Economics (Springer), 17(3), 2019.
Product Launches with New Attributes: A Conjoint-Consumer Panel Technique for Estimating Demand. Journal of Marketing Research, 56(5), 2019.
Mobile Diaries Benchmark Against Metered Measurements: An Empirical Investigation. International Journal of Research in Marketing, 35(2), 2018.
The Role of Paid, Earned, and Owned Media in Building Entertainment Brands: Reminding, Informing, and Enhancing Enjoyment. Marketing Science, 35(1), 2016.
Targeting Political Advertising on Television. Quarterly Journal of Political Science, 10(3), 2015.
A Dataset on Brands and their Characteristics. Marketing Science, 33(4), 2014.
On Brands and Word of Mouth. Journal of Marketing Research, 50(August), 2013.
Optimal Admission and Scholarship Decisions: Choosing Customized Marketing Offers to Attract A Desirable Mix of Customers. Marketing Science, 31(4), 2012.
Marketing and Politics: Models, Behavior and Policy Implications. Marketing Letters, 23(2), 2012.
Seeds of Negativity: Knowledge and Money. Marketing Science, 30(3), 2011.
3-208 Carol Simon Hall
585.276.4020

Daniel Keating

Clinical Assistant Professor
Area(s) of Expertise
General Business Administration
Bio

Dan Keating is a Clinical Assistant Professor and Faculty Director of Academic Support. He teaches courses in analytics, general business, communication, and applied AI in the MBA, Masters, and Undergraduate programs. Teaching is a mid-life career change for Dan: he had a 25+ year career in regional and global technology organizations such as Oracle, Qlik, and smaller marketing and analytics firms. His clients included Apple, SAP, Dell, Abbott Labs, Merck, JP Morgan Chase, and others. In addition to teaching, Dan leads the Instructional Technology and Innovation (ITI) team at Simon Business School, which works with faculty to implement innovative instructional technologies to drive pedagogical success for all students. His teaching is highly rated by students and has earned multiple teaching awards. He has deep experience serving on community and commercial boards and as an elected official in his town.

Courses
Business Modeling with Excel
Financial Statement Analysis-Lab
New Product Strategy
Supervised Teaching CIS 211
Data-Driven Decision Making
Professional Communication
Teaching Interests

Analytics in Excel. Managerial Communications. Analytics Design. Product Management. Marketing. Technology.

3-160 F Carol Simon Hall