What I Got Right (and Wrong) About AI in 2018
A retrospective, a scorecard, and setting up a debate
In 2018, I wrote a long piece about AI and its potential to disrupt labor markets, reshape industries, and pose risks that the “AI myth debunkers” were too quick to dismiss. I never published it. Life got in the way, and by the time I returned to it, the moment had passed—or so I thought.
Seven years later, the moment hasn’t passed. It’s arrived.
What follows is a retrospective: a prediction-by-prediction assessment of what I got right, what I got wrong, and what it means now. The scorecard is organized into four themes: how AI technology would develop, what would happen to labor markets broadly, which specific industries would be affected, and how policy and inequality would shape the response.
For each prediction, a verdict:
✓ Clearly right
↗ Directionally right, wrong on mechanism or timing
◐ Too early to call
✗ Wrong or overstated
SECTION 1: Technology Trajectory
Much of the 2018 “debunker” discourse rested on assumptions about what AI couldn’t do—or wouldn’t do anytime soon. But AI had already surprised its own practitioners, and there was no reason to assume the surprises were over.
Prediction 1.1: The pace of AI progress
“The current pace of AI progress is surprising even the AI research community.”
Verdict: ✓ Clearly right
In 2018, the recent shocks had been AlphaGo defeating Lee Sedol and the emergence of GANs. The next surprise turned out to be large language models (LLMs)—statistical pattern-matching on text that produced capabilities nobody fully anticipated. GPT-3 arrived in 2020. ChatGPT broke every adoption record in late 2022, reaching 100 million users in two months. By 2024, multimodal models were writing code, analyzing images, and passing professional licensing exams like the bar. The research community has spent the last several years in continuous recalibration. The surprise hasn’t stopped.
Prediction 1.2: AI writing
“In several years, I wouldn’t be surprised if an AI could write this article!”
Verdict: ✓ Clearly right (and then some)
This was half a joke in 2018. It took about four years before ChatGPT could produce passable prose, and six before the tools became genuinely useful for long-form writing. The mechanism was unexpected—not a dedicated natural language system, but a general-purpose model trained on internet text. The core intuition, though, was sound: the written word wasn’t safe from automation.
Prediction 1.3: Creativity isn’t uniquely human
“The debunkers also employ the argument that creativity is a uniquely human trait, and that creative work cannot ever fully be replaced... even in the AI we currently have, which will no doubt be viewed as rudimentary, 25 years hence, we have examples of machine creativity, including music and art composition.”
Verdict: ✓ Clearly right
In 2018, AI-generated art was a curiosity—neural style transfer, DeepDream hallucinations, experimental music. By 2022, DALL-E and Midjourney had made image generation accessible to anyone. By 2023, AI-generated songs were going viral and getting pulled from streaming platforms. The fake Drake/Weeknd track “Heart on My Sleeve” was a watershed: millions of streams before the labels intervened. The philosophical debate about whether machines can “truly” be creative continues. The economic debate is settled. [My friend, colleague and erstwhile podcast co-host, Marco Annunziata of the Just Think Substack disagrees with me… more to come on this.]
Prediction 1.4: The cobot phase is transitional
“Debunkers are quick to note that in many cases, the combination of a human radiologist and AI is the most effective diagnostician. This feels like cold comfort, because in the long term, as AIs continue to improve and learn from the best humans, our role in the ‘teams’ will be reduced to equipment manager. For us to believe that human-robot teams will always be better than all-robot teams we must believe that there is really something about human capability and intelligence that is irreplaceable or not replicable and that bots will never catch up.”
Verdict: ↗ Directionally right, still unfolding
Human-AI collaboration is currently the dominant paradigm—coding assistants, AI-augmented diagnosis, LLMs as writing partners. But the trajectory is visible. GitHub Copilot started as autocomplete; newer agent-based systems take on entire coding tasks. Radiologists are still employed, but the ratio of AI contribution to human contribution keeps shifting. Whether “equipment manager” is the destination or just a waypoint remains an open question.
Prediction 1.5: AI as enabler AND performer
“Electricity is the best example of a technological breakthrough that was pervasive and clearly led to wealth creation and substantial job growth. I posit that electric power is unusual and maybe unique as a disruptive technology, because it is an enabler of activity, not the activity itself... Of course AI is an enabling technology, too, but unlike electricity, it can also perform actions and functions.”
Verdict: ✓ Framing holds up
This distinction remains underappreciated. Electricity powers tools; AI is the tool. Electricity enabled humans to do more; AI can do things instead of humans. The optimistic analogies to electrification miss this asymmetry. Every example cited in 2018—autonomous transit, cashier-less checkout, AI writing—has advanced since then.
Prediction 1.6: AGI timelines
“The range of opinions on how fast AGI/HLMI can become a reality is vast, from a few decades to ‘by 2100’ to ‘never.’ This may seem so distant that it’s not worth worrying about, but I certainly plan to be alive in the next few decades, and my kids will likely be alive in 2100.”
Verdict: ◐ Too early to call
The 2018 piece acknowledged uncertainty rather than making a falsifiable prediction. What’s changed is that the distribution of expert opinion has shifted dramatically toward “sooner.” According to 80,000 Hours’ review of expert forecasts, AI company leaders now estimate a 25% chance of AGI this year(!); Metaculus forecasters put 25% probability by 2027; and even the more conservative superforecaster groups have shortened their timelines significantly. Whether current LLM architectures can scale to AGI or whether new breakthroughs are required is actively debated. The acceleration of the discourse itself was not anticipated.
SECTION 2: Labor Market Dynamics
The debunkers’ strongest argument was historical: technology has always displaced workers, and new jobs have always emerged. But this time might be different—not because anyone had a crystal ball, but because the nature of this disruption seemed to break the pattern.
Prediction 2.1: Historical models may not apply
“Of course economic growth creates demand for labor in a normal labor market, but in one without any scarcity of labor, the existing models of economic relationships may no longer be relevant... We have to be more honest about the weaknesses of our predictive tools under a game-changed economy.”
Verdict: ↗ Directionally right, still contested
Economists remain divided. The “lump of labor fallacy” crowd still argues that new jobs will emerge. But the confidence has dimmed. The speed of LLM adoption and the breadth of tasks now automatable have made the “this time is different” argument more respectable than it was in 2018. The models haven’t broken down at scale yet, but we’re running the experiment in real time. [This is another area where Marco thinks I’m off base. We’re setting up a good debate…]
Prediction 2.2: The animal labor analogy
“We no longer design products or processes that require animal labor because we have machines that are better at mechanical work. How long before we stop designing production processes that require human labor?”
Verdict: ◐ Too early to call, but increasingly relevant
This was meant as a provocation, not a prediction. But it’s aged into something more serious. When companies design workflows now, AI capability is a first-order consideration. Why hire a human transcriptionist when Whisper exists? Why hire a junior analyst when an LLM can do the first pass? The question isn’t whether humans will be eliminated, but whether new systems will be designed with humans as optional.
Prediction 2.3: Simultaneous displacement across sectors
“The ability to switch jobs or industries will be constrained not just by lack of skill as in the past, but by the simultaneous shrinking of opportunities in every sector. When tax lawyers start looking for work, so will the portfolio managers. When carpenters start looking for work, so will the welders. And when warehouse shelvers start looking for work, so will the pizza delivery guys.”
Verdict: ↗ Directionally right, timing uncertain
Mass displacement hasn’t arrived yet, but the pressure is visible across white-collar work in a way that 2018 discourse didn’t anticipate. Legal research, financial analysis, copywriting, customer service, coding—all are seeing AI adoption. The blue-collar displacement emphasized in 2018 (welders, warehouse workers) has actually been slower than white-collar exposure. The breadth was right; the sequence was inverted.
Early data suggests the displacement is real. A Stanford study found employment among software developers aged 22-25 fell nearly 20% between 2022 and 2025—coinciding precisely with the rise of AI coding tools. Freelance copywriters report work drying up almost overnight. “The drop from 2022 to 2023 was bad,” one veteran copywriter told Blood in the Machine. “The drop from 2023 to 2024 was catastrophic.” [Here again, Marco reads the data differently.]
Prediction 2.4: Scale, speed, and starting point
“There are three reasons why economic history is only a partial guide, and they are scale, speed, and starting point.”
Verdict: ✓ Framework holds up
Scale: AI affects cognitive and physical labor simultaneously, across virtually every industry. Speed: the pace of capability improvement outstrips worker adaptation. Starting point: high existing inequality means less social resilience. Each leg of the argument has strengthened since 2018. The main weakness of the framework is that “starting point” is more about distributional consequences than about the nature of disruption itself—it explains why the transition will be painful, not why it will be different.
Prediction 2.5: Displacement timeline
“Bain predicts that 20-25% of the U.S. labor force could be displaced within 10-20 years; for perspective, that is 2-3 times more rapid than the displacement of agricultural workers during the first half of the 20th century.”
Verdict: ◐ Too early to call
Seven years in, mass unemployment hasn’t materialized, though labor market churn has increased and certain job categories (content writing, customer support, some paralegal work) are visibly contracting. The full assessment requires waiting until 2028-2038. The prediction may have been wrong, or we may be in the early phase of a curve that steepens. [Marco is more blunt, arguing that the odds are building massively against rapid and large job displacement: “Seven years in, we’ve kept creating more jobs. Soon we’ll be expecting a quarter of the workforce to be displaced within one year.”]
Prediction 2.6: Wage suppression
“In addition to job loss, workers will be affected by wage suppression. That is, in markets where human labor is valued less, and where labor supply is abundant, holders of capital have little incentive to raise wages.”
Verdict: ↗ Directionally right or ◐ Too early to call, complicated by other factors
Real wages grew post-COVID, complicating the narrative. But the mechanism—AI as a tool that increases employer leverage—is visible in labor negotiations. The SAG-AFTRA and WGA strikes were explicitly about AI displacement and compensation. Freelance rates for writing, illustration, and translation have dropped as AI alternatives proliferate. The aggregate statistics don’t yet show the effect, and in fact overall wages are rising; however, the lived experience of some knowledge workers suggests that the effect is at least present in patches.
SECTION 3: Specific Industries
Predictions about “AI” in the abstract are easy. Predictions about specific industries are harder—and more useful. The 2018 piece walked through sector after sector, arguing that nowhere was safe. Some of those predictions have landed; others are still in flight.
Prediction 3.1: Transportation
“The imminent loss of transportation workers, especially truck-drivers, is currently one of the most talked-about issue in this area. If you are getting paid to drive, or provide services to people whose income is derived from driving, you may want to switch jobs, soon. The technology for fully autonomous driving is nearly here; indeed some AI experts contend it is already here.”
Verdict: ↗ Directionally right, timeline too aggressive
Autonomous trucking has progressed—Waymo, Aurora, and others are running limited commercial operations. But “imminent” was too strong. Regulatory hurdles, edge cases, liability questions, and infrastructure requirements have slowed deployment. The technology is better than 2018, but societal adoption has lagged the capability curve. Truck drivers still have their jobs. The timeline was too aggressive.
That said, autonomous transit has arrived faster in ride-hailing than trucking. Waymo served over 4 million fully autonomous rides in 2024 alone, bringing its total to over 5 million rides. The company now provides 150,000+ trips per week across Phoenix, San Francisco, Los Angeles, and Austin. The robot taxi future is here—just not for long-haul trucking, yet. In my mind, long-haul trucking remains a huge source of pent-up demand for automation, and when the regulatory dam breaks, we’ll see a very rapid shift.
Prediction 3.2: Warehousing
“Amazon has already deployed robots to move boxes around their warehouses. As robot dexterity improves, and as robot and drone coordination improves, warehouse workers will be replaced quickly.”
Verdict: ↗ Directionally right, slower than predicted
Amazon’s warehouses are more automated than in 2018, and the company continues to invest heavily in robotics. But they still employ over a million workers. “Quickly” hasn’t happened. The manipulation problem—robots picking and packing arbitrary objects—turned out to be harder than locomotion. Physical-world automation has lagged information-world automation.
Prediction 3.3: Mining
“Anglo American, Plc, a mining company that currently employs 87,000 people, expects that within a decade, some of its mines will operate without human labor.”
Verdict: ✗ Wrong or overstated
Autonomous haul trucks and drilling systems are now deployed in multiple mines. Rio Tinto operates autonomous trains and trucks in Australia. But “fully autonomous mines” without human labor remain aspirational. Automation has increased significantly; the complete removal of humans from mine sites hasn’t happened within the decade Anglo American projected.
Prediction 3.4: Medicine and radiology
“For some illnesses and radiological scans, the AIs are already better than all but the very best radiologists.”
Verdict: ✓ Clearly right
This was true in 2018 and is more true now. AI diagnostic tools have received FDA clearances. Studies continue to show AI matching or exceeding human performance on specific imaging tasks. The important caveat is that radiologists haven’t been displaced—the tools have been integrated into workflows rather than replacing the professionals. The “cobot” pattern is holding here, for now. Whether the medical “guild” is powerful enough to hold the robots at bay indefinitely remains to be seen.
Prediction 3.5: Entertainment and robot actors
“We already have AIs writing basic weather reports and composing simple songs. We already have the ability to use life-realistic CGI in lieu of actors, and life-realistic voice is improving rapidly. We will soon have major motion pictures with robot actors.”
Verdict: ✓ Clearly right
AI-generated background actors have appeared in Marvel productions. Voice cloning (Respeecher) has been used for major characters including Luke Skywalker. “Next Stop Paris” (2024) is billed as the first fully AI-generated film. Synthetic influencers and virtual K-pop groups have real commercial traction. The SAG-AFTRA strike was substantially about this issue. Robin Wright’s trajectory—from “The Congress” as cautionary fiction to “Here” as production reality—captures the shift precisely. (And, if you haven’t seen The Congress, do. It’s a prescient and weird 2013 film based on a 1971 novel by Stanislaw Lem.)
Prediction 3.6: Professional services
“There are already robo-lawyers and accountants for simple matters.”
Verdict: ✓ Clearly right, and accelerating
This was a modest claim in 2018—tools like LegalZoom and TurboTax had automated simple tasks for years. What’s changed is the scope. LLMs can now draft contracts, summarize case law, prepare tax filings, and handle client intake. Law firms are integrating AI into research workflows. The Big Four accounting firms have all announced major AI initiatives. Junior associates and paralegals are the most exposed—exactly the entry-level positions that train the next generation of professionals. [Marco points out that there is not yet evidence that LLMs can replace senior professionals, a problem that I have also pondered. Without the grunt work that most junior white-collar workers undertake, we will not produce the sophisticated, broadly knowledgeable, and strategic thinkers needed for leadership positions.]
Prediction 3.7: Manufacturing and reinforcement learning
“One current research area is training AI’s to perform physical tasks in virtual representations, using a technique called reinforcement learning. This technique rewards the AI for results and penalizes it for mistakes. Training these robots will soon be cheaper than training humans, and once they learn, they don’t make mistakes.”
Verdict: ◐ Too early to call; timeline slower than expected
Sim-to-real transfer has improved dramatically. Companies like Covariant, Dexterity, and Figure AI are deploying learned manipulation in real settings. But “cheaper than training humans” isn’t yet universally true, and the “don’t make mistakes” framing was too optimistic. Robots still fail on edge cases. Physical-world deployment remains harder than the virtual training successes suggested. I suggested that this functionality would happen “soon,” but seven years in, it’s still “around the corner.”
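For readers who want the mechanics behind the quoted description—rewards for results, penalties for mistakes—here is a minimal sketch of tabular Q-learning on an invented toy task (an agent learning to walk a five-cell corridor to a goal). The environment, reward values, and hyperparameters are illustrative assumptions, not anything from a real robotics pipeline:

```python
# Minimal reinforcement learning sketch: tabular Q-learning on a toy corridor.
# The agent starts at cell 0 and must learn that stepping right reaches the
# goal (cell 4). Reward: +1 at the goal, a small penalty for every other step.
import random

N_STATES = 5          # cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def train(episodes=500, seed=0):
    random.seed(seed)
    # Q[state][action_index]: learned value of each action in each state
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state = 0
        while state != N_STATES - 1:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
            if random.random() < EPSILON:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda i: Q[state][i])
            nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
            # Reward for results, penalty for "mistakes" (wasted steps)
            r = 1.0 if nxt == N_STATES - 1 else -0.01
            # Q-learning update: move toward reward plus discounted future value
            Q[state][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[state][a])
            state = nxt
    return Q

Q = train()
# After training, "right" should be the preferred action in every non-goal cell
policy = ["right" if Q[s][1] > Q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

The same reward-and-penalty loop, scaled up to simulated physics and neural networks instead of a lookup table, is what the sim-to-real work described above builds on.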
SECTION 4: Policy, Inequality, and Risk
Even if AI creates aggregate wealth, the distribution of that wealth is a choice, not a law of nature. The 2018 piece argued that our starting point—high inequality, weak labor power, short-term political incentives—would make the transition harder than optimists assumed. It also addressed the most extreme risk scenarios, not because they were likely, but because dismissing them as “myth” was dangerous.
Prediction 4.1: Inequality exacerbates vulnerability
“Piketty and Saez ... found that inequality in US is at a level not seen since 1900; in Europe income inequality is as high as in 1940. The state of economic inequality is now as bad as it has been in decades, if not ever... our starting point of inequality, ironically and sadly, means that the tide of job loss and wage suppression will be harder to hold back.”
Verdict: ✓ Clearly right, and worsening
This is a contentious issue. Wealth concentration has increased since 2018, partly driven by the success of tech firms—including AI companies. According to Federal Reserve data, the top 0.1% held just under 12% of wealth at the end of 2018 and in the most recent data available hold nearly 14.5% of wealth; the top 1% now hold nearly 32% of all wealth, up from 30% at the end of 2018. The political economy of redistribution has gotten harder, not easier. The consolidation of AI capability in a few well-capitalized firms reinforces the dynamic. However, I will concede that my rating is biased by my world view; the debate on inequality has been harsh, with attacks on and defenses of Piketty and Saez’s work coming from multiple corners. Roge Karma of The Atlantic offers a sober assessment (in my view) of the contours of the debate, which can be found here. Stock valuations of tech (AI heavy) firms are one of the reasons for the rich getting richer; the throughline to vulnerability to job loss is emerging, I believe, if not yet evident. [This is perhaps one area where Marco and I disagree most. Since he’s an actual economist and I’m a pretender, I don’t relish this part of the debate, but we’ll have it anyway.]
Prediction 4.2: Philanthropy must lead
“Given the lack of incentive for the private sector to address these issues head-on, and the low likelihood that elected officials have political incentive or wherewithal to address these issues, it falls to civil society, in particular philanthropic organizations to step up.”
Verdict: ↗ Partially realized
AI safety and alignment research has been substantially funded by philanthropy—Open Philanthropy, the Survival and Flourishing Fund, and others. But the focus has been more on existential risk than on labor market transition. The “AI for Good” space has grown, but it remains fragmented and small relative to commercial AI investment. Governments have begun to engage (EU AI Act, US executive orders), but the structural incentive problems remain.
Prediction 4.3: The prerequisites for “Skynet”-like dangerous AI
“Several elements need to happen for this nightmare to occur, including: (1) robots with fully functional limbs and digits, (2) robots that can coordinate action amongst themselves, (3) robots whose objective function is at odds with human welfare, (4) robots with Artificial General Intelligence (AGI), or Human Level Machine Intelligence (HLMI)... and (5) robots without an off button or ‘kill switch.’”
“On a technical level, problems 1 and 2 (physical coordination and multi-agent coordination) are being worked on now, and advances are made at an astonishing pace.”
Verdict: ✓ Progress faster than expected on several fronts
The framing in 2018 was cautious: not predicting catastrophe but arguing that dismissing the risk as “myth” was premature. The five prerequisites were meant as a checklist of what would need to be true for the nightmare scenarios to become plausible.
Seven years later, progress on that checklist was faster than anticipated—particularly on the dimensions that don’t require AGI.
Physical coordination (prerequisite 1): Boston Dynamics robots now perform parkour, recover from shoves, and navigate complex terrain. The “jaw-dropping” videos of 2018 look quaint compared to current demonstrations.
Multi-agent coordination (prerequisite 2): This is where the darkest developments have occurred. Autonomous drone swarms are no longer research projects; they’re deployed weapons. The Ukraine conflict has been a live laboratory for AI-assisted targeting, coordinated drone strikes, and autonomous systems operating with minimal human oversight. Drones identify targets, coordinate attack patterns, and execute missions in ways that were theoretical in 2018. The RoboCup soccer teams have a military cousin now.
Misaligned objectives (prerequisite 3): The naive debunker response was “why would we ever build robots programmed to harm us?” The 2018 piece answered: “Of course robots will be programmed to harm SOME people.” That’s no longer hypothetical. Autonomous weapons systems exist. The question of whether the “objective function” of a military AI is “at odds with human welfare” depends entirely on which humans you’re asking about.
AGI (prerequisite 4): Still not here, but as noted above, timelines have compressed dramatically. Geoffrey Hinton, who won the 2024 Nobel Prize for his foundational work on neural networks, now estimates a 10-20% chance AI causes human extinction within 30 years. Yoshua Bengio, who chairs the International Scientific Report on AI Safety, warns that “many leading researchers now estimate the timeline to AGI could be as short as a few years or a decade.”
Kill switches (prerequisite 5): The 2018 piece noted that “we don’t have a particularly good record of enforcing only benevolent uses of technology” and that “rogue AI research” should be assumed to be underway. The current landscape of open-source models, proliferating capabilities, and inconsistent international governance suggests this concern was warranted.
The Skynet scenario—sentient robots turning on humanity—still seems remote. But the scenario that doesn’t require sentience—autonomous weapons systems, coordinated by AI, deployed by states or non-state actors with objectives hostile to some group of humans—has moved from science fiction to battlefield reality. The debunkers said this was a myth. It wasn’t.
What the Predictions Missed
The mechanism, not the target. The 2018 piece correctly predicted that both cognitive and physical labor would be exposed—“jobs that require brain and those that require brawn.” Radiologists and truck drivers appeared in the same sentence for a reason. What wasn’t anticipated was how it would happen. The expectation was that robotics and reinforcement learning would drive displacement across both domains roughly in parallel. Instead, large language models disrupted cognitive work years before physical automation caught up. The lawyers, accountants, and writers were always in the crosshairs; the surprise was that the bullet arrived via text prediction, not task-specific AI.
The sequencing. Related to the above: physical-world automation has been slower than information-world automation. Warehouses still employ humans; radiology departments are integrating AI but not firing doctors. Meanwhile, freelance writers, illustrators, and translators are already seeing rates collapse. The blue-collar displacement that dominated 2018 discourse hasn’t materialized at scale. The white-collar displacement has arrived faster and through an unexpected door.
The speed of public adoption. ChatGPT went from zero to 100 million users in two months. The expectation was that AI would diffuse through enterprise software and industrial applications. Instead, consumer-facing tools drove awareness and adoption. The public caught up faster than expected.
The types of dangers and ill effects. The 2018 piece warned about the danger of shrugging off risks: Skynet scenarios are unlikely and far-fetched, but elements of them are quite possible; jobs may be created, but we may not be ready for widespread job losses; and so on. But as Marco notes, I neglected to discuss or predict some of the more malignant effects of AI that are already upon us: “We have all missed the unintended ways in which AI would be dangerous … deepfakes, the exacerbation of the adverse health effects of social media, chatbots pushing people to suicide, etc.” Indeed, all technologies have unintended consequences, usually both positive and negative. For a technology as pervasive and transformative as AI, the consequences will be far-reaching in ways we will inevitably fail to estimate well. (Also, anytime I can quote Marco being more pessimistic than me, I have to jump at the chance.)
What It Adds Up To
Seven years is long enough to take score, even if the game isn’t over.
On speed, I was partially right. The pace of AI advancement has consistently outrun expert forecasts. LLMs blindsided most of the field—nobody in 2018 was predicting that a text-prediction engine would pass the bar exam. ChatGPT’s adoption curve broke records. And the timeline compression on AGI is remarkable: serious researchers now debate years, not decades. The debunkers said we had plenty of time. We had less than they thought.
On scale, I was directionally right, though the full picture hasn’t arrived. The breadth of exposure is there—cognitive work and physical work, creative and routine, white collar and blue collar, all in the crosshairs. What I didn’t anticipate was the sequence. The coders building these tools are among the first to feel the squeeze. Junior developers and freelance writers weren’t supposed to be the canaries in the coal mine. But here we are.
On the big question—whether this technological revolution is like the others—I think I’m being proven right. The debunkers’ strongest argument was historical analogy: technology always displaces workers, new jobs always emerge, it works out in the long run. But AI doesn’t fit the pattern. It’s not “just” an enabler like electricity; it’s also a performer. It’s not narrow like the loom or the tractor; it’s general. The provocation I offered in 2018—that we stopped designing processes around animal labor once machines got good enough, and might do the same with human labor—looks less like provocation every year.
Some surprises were reassuring. Physical automation has been slower than I expected; warehouses and mines still employ humans. Some surprises were alarming. Autonomous drone swarms went from research project to battlefield weapon faster than anyone anticipated. Geoffrey Hinton quit Google to warn about extinction risk. These weren’t in my 2018 draft, but they rhyme with its concerns.
The honest summary: I got the shape of the problem mostly right while getting plenty of details wrong. The mechanisms surprised me; the trajectory didn’t. And the debunkers who said the fears were overblown, that history would repeat, that we could relax? I’m not relaxed; are you?
What comes next is genuinely uncertain. The displacement that hasn’t arrived at scale may still come—or may not. The “cobot” phase may last decades, or it may be a pit stop. AGI may arrive in five years or fifty. Anyone who claims to know is selling something.
About that 2018 prediction: “In several years, I wouldn’t be surprised if an AI could write this article!” Reader, it did. Or at least it helped—the research, the drafts, the structure, the fact-checking. I’d be lying if I said I wrote this alone; Claude.AI was my co-author. Turns out I was right about that one too. I’m just not sure how to feel about it…
Oh yeah, of course ChatGPT created this image for me, in about 10 seconds.
Marco Annunziata was a great reviewer and hard grader, sometimes calling BS on this article. Where AI is taking us is a favorite topic of his as well, and we don’t always see eye to eye. So we’ve agreed to debate some of the topics here, and perhaps a few others as well. We’ll post it here and in Just Think. Stay tuned!


