About Bennat Berger

Bennat Berger is an entrepreneur, investor, and tech writer based in New York City. He is a co-founder and Principal at Novel Property Ventures, a real estate firm that specializes in amassing and managing multifamily residential units in New York City. He is also a founding partner at the investment firm Novel Private Equity, where he oversees investments across a diverse range of interests, from experiential retail to entertainment to supermarket technologies.

We Can’t Afford to Sell Out on AI Ethics

Today, we use AI with the expectation that it will make us better than we are — faster, more efficient, more competitive, more accurate. Businesses in nearly every industry apply artificial intelligence tools to achieve goals that, only a decade or two ago, we would have derided as moonshot dreams. But as we incorporate AI into our decision-making processes, we can never forget that the same technology that magnifies our capabilities can just as plainly expose our flaws.

“Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters — from who designs it to who sits on the company boards and which ethical perspectives are included,” AI researcher Kate Crawford once wrote for the New York Times. “Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.”

The need for greater inclusivity and ethics-centric research in AI development is well-established — which is why it was so shocking to read about Google’s seemingly senseless firing of AI ethicist Timnit Gebru.

For years, Gebru has been an influential voice in AI research and inclusivity. She cofounded the Black in AI affinity group and speaks as an advocate for diversity in the tech industry. In 2018, she co-wrote an oft-cited investigation into how gender bias influenced Google’s Image Search results. The team Gebru built at Google encompassed several notable researchers and was one of the most diverse working in the AI sector.

“I can’t imagine anybody else who would be safer than me,” Gebru shared in an interview with the Washington Post. “I was super visible. I’m well known in the research community, but also the regulatory space. I have a lot of grass-roots support — and this is what happened.”

So what, exactly, happened?

In November of 2020, Gebru and her team completed a research paper that examined the potential risks inherent in large language-processing models, which can be used to discern basic meaning from text and, in some cases, create new and convincing copy.

Gebru and her team found three major areas of concern. The first was environmental: relying on large language models could lead to a significant increase in energy consumption and, by extension, our carbon footprint.

The second related to unintended bias; because large language models require massive amounts of data mined from the Internet, “racist, sexist, and otherwise abusive language” could accidentally be included during the training process. Lastly, Gebru’s team pointed out that as large language models become more adept at mimicking language, they could be used to manufacture dangerously convincing misinformation online.

The paper was exhaustively cited and peer-reviewed by over thirty large-language-model experts, bias researchers, critics, and model users. So it came as a shock when Gebru’s team received instructions from HR to either retract the paper or remove the researchers’ names from the submission. Gebru addressed the feedback and asked for an explanation of why retraction was necessary. She received no response other than vague, anonymous feedback and further instructions to retract the paper. Again, Gebru addressed the feedback — but to no avail. She was informed that she had a week to rescind her work.

The back and forth was exhausting for Gebru, who had spent months struggling to improve diversity and advocate for the underrepresented at Google. (Black women make up only 1.9 percent of Google’s workforce.) To be silenced while furthering research on AI ethics and the potential consequences of bias in machine learning felt deeply ironic.

Frustrated, she sent an email detailing her experience to an internal listserv, Google Brain Women and Allies. Shortly thereafter, she was dismissed from Google for “conduct not befitting a Google manager.” Amid the fallout, Google AI head Jeff Dean claimed that the paper “didn’t meet our bar for publication” and “ignored too much relevant research” that undermined the risks she outlined — a shocking accusation, given the paper’s breadth of research.

To Gebru, Google’s reaction felt like corporate censorship.

“[Jeff’s email] talks about how our research [paper on large language models] had gaps, it was missing some literature,” she told MIT’s Technology Review. “[The email doesn’t] sound like they’re talking to people who are experts in their area. This is not peer review. This is not reviewer #2 telling you, ‘Hey, there’s this missing citation.’ This is a group of people, who we don’t know, who are high up because they’ve been at Google for a long time, who, for some unknown reason and process that I’ve never seen ever, were given the power to shut down this research.”

“You’re not going to have papers that make the company happy all the time and don’t point out problems,” Gebru concluded in another interview for Wired. “That’s antithetical to what it means to be that kind of researcher.”

We know that diversity and ethics-centric research are crucial to the development of truly effective and unbiased AI technologies. In this context, firing Gebru — a Black, female researcher with extensive accolades for her work in AI ethics — for doing her job is senseless. There is no option but to view Google’s actions as corporate censorship.

For context — in 2018, Google developed BERT, a large language model, and used it to improve its search results. Last year, the company made headlines by developing techniques that allowed it to train a 1.6-trillion-parameter language model four times as quickly as was previously possible. Large language models offer a lucrative avenue of exploration for Google; having them questioned by an in-house research team could be embarrassing at best, and limiting at worst.

In an ideal world, Google would have incorporated Gebru’s research findings into its actions and sought ways to mitigate the risks she identified. Instead, its leaders attempted to compel her to revise the paper to include cherry-picked “positive” research and downplay her findings. Think about that for a moment — that kind of interference is roughly analogous to a pharmaceutical company asking researchers to fudge the statistics on a new drug’s side effects. Such intervention is not only unethical; it opens the door to real harm.

Then, when that interference failed, Google leadership worked to silence and discredit Gebru. As one writer for Wired concludes, that decision proves that “however sincere a company like Google’s promises may seem — corporate-funded research can never be divorced from the realities of power, and the flows of revenue and capital.”

Gebru is an undeniably strong person, an authority in her field with a robust support network. She had the force of personality to stand her ground against Google. But what if someone who wasn’t quite as well-respected, supported, and brave stood in her position? How much valuable research could be quashed due to corporate politicking? It’s a frightening thought.

The Gebru fallout tells us in no uncertain terms that we need to give real consideration to how much editorial control tech companies should have over research, even when they employ the researchers who produce it. If left unchecked, corporate censorship could usher in the worst iteration of AI: one that writes our biases large, harms the already underserved, and dismisses fairness in favor of profit.

This article was originally published on BeingHuman.ai

April 16th, 2021 | Technology

Is Unequal Access to Data Undermining Your Company’s Success?

Big data has far and away transcended its status as a technology buzzword. It has become a full-fledged infrastructural norm; countless business leaders have embraced its potential to deliver enhanced insight, trend discovery, and other key contributions to their annual goals. This embrace has created the need for multifaceted implementation strategies which, ideally, aim to use data as a binding agent for all company sectors, ultimately streamlining internal operations.

However, despite their ambition and openness to change, many of these leaders fail to recognize that their implementation strategies are flawed — namely in terms of distribution and accessibility. In turn, these inconsistencies can foster a culture of inequality and opacity, working against the strategy and undermining the very success it strives to achieve.

To curb these setbacks, organizations must be proactive in expanding data knowledge and utilization equally across their different departments.

Diagnosing the problem

To establish a stable data landscape, business leaders need to identify both the internal problem at hand and its broader implications. Limited data distribution should be viewed not only as a threat to corporate functions, but also a potential slight to certain divisions of the organization’s workforce.

The pitfalls of such disparities have already been carefully observed at a societal level, even before the pandemic took them to new heights, and to introduce them to the workplace is to court slow-burning disarray. Inner turmoil can quickly lead to poor external performance, causing companies to fall behind competitors that are more internally cohesive.

With the proper mindset in place, leaders can turn their attention to a variety of strategies to nip their data problem in the bud, and this begins with pinpointing where deficiencies lie. For instance, are employees reliant upon a “suboptimal mix of cloud-based technology and on-premise enterprise systems,” where the collective workforce is hamstrung by patchy, insufficient access — as nearly two-thirds of companies report — or is quality access simply limited to specific parts of the company? Organizations will also need to assess workers’ level of “data illiteracy,” a condition that arises when data-driven decision-making is limited to select departments and teams.

Leaders must also recognize that the issue can spread beyond data access alone, impacting a company’s confidence in investing and technological innovation. For example, if a company hands non-IT departments the bulk of its data technology investment authority, those departments may procure tools outside IT’s purview, leaving IT workers feeling disenfranchised and contributing to a general sense of confusion and mismatched priorities. These so-called “shadow systems” are not sustainable because they confuse expectations and leave some workers ill-equipped to address problems that would otherwise be in their wheelhouse.

Creating long-term success

With remote work enduring as a new norm, emphasis on data and technology is arguably at an all-time high, and the need for a tight digital ship has followed suit. Solutions to the problems above should therefore be handled with diligence, and it is important to remember that seemingly cut-and-dried remedies are anything but; simply making data access more widespread is not the full answer. Instead, to create lasting success, broader systemic change should be favored over a temporary band-aid.

By focusing on total reinvention, leaders will be able to properly address each micro-issue contributing to the macro flaw. These focal points could include better, more equal funding to multiple departments, stronger team integration to optimally disperse data knowledge and learning opportunities, and reallocation of investing dollars to reflect future innovation rather than retroactive level setting.

These efforts can also be applied to the introduction (or updating) of relevant technology aimed at an improved data-driven work cycle. Access to AI and automation tools, for instance, should be evenly distributed to all applicable departments — with training provided for those unversed in how to use these resources. Success in each of these areas will be contingent upon properly communicated expectations.

Regardless of where change is most needed, a general rule of thumb is to isolate growth areas that require a rapid return and use them as a jumping-off point. The current system should be audited based on its existing depth and reach, and any salvageable aspects can be leveraged during the construction of a stronger, more efficient successor. New infrastructure must also remain compatible with the business’s technological and financial capabilities.

This type of large-scale change may seem daunting, even unreachable, but it has become an objective necessity as COVID continues to rewrite the rulebook for businesses worldwide. That said, the challenge can be met head-on with a blend of forward-thinking, unfailing commitment, and, above all, constant transparency and attention to detail.

This article was originally published on Business2Community

February 13th, 2021 | Business, Technology

Will Cryptocurrency Ever Enter the Mainstream for Businesses?

At this time in 2018, the very idea of bitcoin becoming an accepted currency among major corporations would have seemed far-fetched. Now, the idea still attracts some raised eyebrows — but it isn’t as immediately dismissed.

Cryptocurrency had its first spotlight moment in 2018. It was the modern era’s equivalent of the gold rush; all around the country, adventurous bitcoin dabblers found themselves raking in thousands — sometimes tens of thousands — of dollars after investing comparatively paltry sums.

“Bitcoin and, subsequently, a proliferation of other cryptocurrencies had become an object of global fascination, amid prophecies of societal upheaval and reform, but mainly on the promise of instant wealth,” journalist Nick Paumgarten wrote of the time for The New Yorker. “A peer-to-peer money system that cut out banks and governments had made it possible, and fashionable, to get rich by sticking it to the Man.”

But that promise of high returns soon began to buckle. In January of 2018, the total market capitalization of cryptocurrencies peaked at $800 billion, skyrocketing from the mere $18 billion reported the year before. It didn’t take long for the market to plunge; by the end of the year, it had lost three-quarters of its value and stood at just $200 billion.

The cryptocurrency bubble had popped. But unlike other markets, it seemed as though the sheer intensity of the crash would bring the boom-and-bust cycle to a grinding (and permanent) halt. Headlines told stories of investors who had poured their life savings, insurance payouts, and loans into the new market only to see the lion’s share of their hard-earned and borrowed money trickle away.

“What the average Joe hears is how friends lost fortunes,” Alex Kruger, a former banker and current cryptocurrency trader, told reporters for the New York Times. “Irrational exuberance leads to financial overhang and slows progress.”

The response from corporate interests was, at the time, similarly cold. One writer for the Financial Times noted that even well-regarded cryptocurrency enthusiasts were “met with a cold shoulder by US regulators” when they attempted to open exchange-traded funds for Bitcoin and encourage wider adoption.

But in recent months, hints of another crypto boom have begun to circulate. Recent reporting from Forbes’ Ron Shevlin indicates that trading of Bitcoin, Ethereum and other major cryptocurrencies increased sharply at the open of 2020, peaked in February, and remained at high levels through the first half of 2020. Roughly 15 percent of American adults now own cryptocurrency — and notably, half of them invested in the sector for the first time this year.

Corporate America, for its part, hasn’t paid the recovering cryptocurrency market much attention. But, as of October, two major financial firms have diverged from their peers to invest in the opportunity they believe the sector offers. Their names: PayPal and Square.

Early in the month, Square announced that it had purchased a total of 4,709 bitcoins at a cost of roughly $50 million, or one percent of the company’s total assets.

“We believe that bitcoin has the potential to be a more ubiquitous currency in the future,” Square’s Chief Financial Officer, Amrita Ahuja, shared in a press release. “As it grows in adoption, we intend to learn and participate in a disciplined way. For a company that is building products based on a more inclusive future, this investment is a step on that journey.”

PayPal took another route in embracing cryptocurrency. Rather than purchase bitcoin, it launched a cryptocurrency service that will allow customers to buy, hold, and sell digital currency on its site and associated applications. PayPal’s President and CEO, Dan Schulman, explained that the company’s decision to create its crypto platform was based on the idea that the “efficiency, speed and resilience of cryptocurrencies” could “give people financial inclusion and access advantages.” Moreover, he said, the eventual shift to such digital currencies was “inevitable.”

But what would this “inevitable” future mean for businesses? If you were to ask Gavin Brown, the co-founder and director at the venture capital firm Blockchain Capital Limited, the answer would be a fundamental change in trade currency.

“In an era where companies such as McDonald’s have a higher credit rating than countries such as Ireland, the notion that multinational firms may issue their own currencies and request that their customers purchase with them is not that outlandish,” CNBC journalist Eustance Huang wrote, paraphrasing Brown’s perspective in a 2019 article. “What we’re probably likely to see is … almost like [corporate] groups or alliances coming round around mainstream currencies.”

There are certainly a few benefits to using cryptocurrency in business. Bitcoin and other similar currencies facilitate secure, speedy transactions that offer chargeback protection — because cryptocurrency doesn’t support debt or loans, payments conveyed via bitcoin are final and can’t be fraudulently reversed. Bitcoin’s decentralized nature also allows businesses to reach international buyers who may not have previously been able to access their goods or services.

Cryptocurrency offers increased accessibility; however, it isn’t without its drawbacks.

At present, cryptocurrencies are not stable, insured, or regulated. This lack of clear support from federal bodies makes for tremendous market volatility and puts investors at a high risk of losing their — or their clients’ — fortunes. Most businesses will not want to roll the dice on a currency they can’t rely on.

So, will bitcoin see another, more long-lived, heyday in corporate America? The answer is unclear.

While there might be another cryptocurrency boom on the horizon, it will be a while before bitcoin and its competing currencies come into regular corporate trade. The degree of usage will most likely depend on what we see in the cryptocurrency market over the next few months to a year. Will we see another dramatic boom-bust cycle? Will investors flock to or flee cryptocurrency? Will matters stabilize or devolve once more into wild speculation?

If the market stabilizes and provides more consistent (if less lucrative) returns, we can expect businesses to enter a period of cautious experimentation. PayPal and Square’s investments have lent bitcoin some credibility. Still, it remains to be seen whether — now that they have been given tacit industry “permission” — other corporate interests will begin making investments in bitcoin and/or using it in trade.

If cryptocurrency does take off in the corporate sector, it seems likely that federal authorities will begin regulating the market. If we were to reach this point, we would be in a world where cryptocurrency has established an (albeit preliminary) place for itself as a credible form of business currency.

However, even this scenario rests on a lot of ifs. It would appear best for companies and institutional investors to approach cryptocurrency conservatively and see how the above hypothetical plays out. Bitcoin may eventually lose its novelty status in big business — but there’s no sense in major corporate players charging forward while its stability remains unclear.

This article was originally published on Medium

January 29th, 2021 | Business, Current Events, Technology

CEOs: AI Is Not A Magic Wand

The technology holds great promise, no question — but deployment must be done strategically, and with the understanding that a business likely won’t see gains on its first attempt to integrate it.

If you achieve the improbable often enough, even the impossible stops feeling quite so out of reach.

Over the last several decades, artificial intelligence has permeated almost every American business sector. Its proponents position AI as the tech-savvy executive leader’s magic wand — a tool that can wave away inefficiency and spark new solutions in a pinch. Its apparent success has winched up our suspension of disbelief to ever-loftier heights; now, even if AI tools aren’t a perfect fix to a given challenge, we expect them to provide some significant benefit to our problem-solving efforts.

This false vision of AI’s capability as a one-size-fits-all tool is deeply problematic, but it’s not hard to see where the misunderstanding started. AI tools have accomplished a great deal across a shockingly wide variety of industries.

In pharma, AI helps researchers home in on promising new drugs; in sustainable agriculture, it can be used to optimize water and waste management; and in marketing, AI chatbots have revolutionized the norms of customer service interactions and made it easier than ever for customers to find straightforward answers to their questions quickly.

Market research provides similar backing for AI’s versatility and value. In 2018, PwC released a report which noted that the value derived from the impact of AI on consumer behavior (e.g., through product personalization or greater efficiency) could top $9.1 trillion by 2030.

McKinsey researchers similarly note that 63 percent of executives whose companies have adopted AI say that the change has “provided an uptick in revenue in the business areas where it is used,” with respondents from high performers nearly three times likelier than those from other companies to report revenue gains of more than 10 percent. Forty-four percent say that the use of AI has reduced costs.

Findings like these paint a vision of AI as having an almost universal, plug-and-play ability to improve business outcomes. We’ve become so used to AI being a “fix” that our tendency to be strategic about how we deploy such tools has waned.

Earlier this year, a joint study conducted by the Boston Consulting Group and MIT Sloan Management Review found that only 11 percent of the firms that have deployed artificial intelligence see a “sizable” return on their investments.

This is alarming, given the sheer volume of money being poured into AI. Take the healthcare industry as an example; in 2019, surveyed healthcare executives estimated that their organizations would invest an average of $39.7 million over the following five years. To not receive a substantial return on that money would be disappointing, to say the very least.

As reported by Wired, the MIT/BCG report “is one of the first to explore whether companies are benefiting from AI. Its sobering finding offers a dose of realism amid recent AI hype. The report also offers some clues as to why some companies are profiting from AI and others appear to be pouring money down the drain.”

What, then, is the main culprit? According to researchers, it seems to be a lack of strategic direction during the implementation process.

“The people that are really getting value are stepping back and letting the machine tell them what they can do differently,” Sam Ransbotham, a professor at Boston College who co-authored the report, commented. “The gist is not blindly applying AI.”

The study’s researchers found that the most successful companies used their early experiences with AI tools — good or ill — to improve their business practices and better-orient artificial intelligence within their operations. Of those who took this approach, 73 percent said that they saw returns on their investments. Companies who paired their learning mindset with efforts to improve their algorithms also tended to see better returns than those who took a plug-and-play approach.

“The idea that either humans or machines are going to be superior, that’s the same sort of fallacious thinking,” Ransbotham told reporters.

Scientific American writers Griffin McCutcheon, John Malloy, Caitlyn Hall, and Nivedita Mahesh put Ransbotham’s point another way in an article titled — tellingly — “AI Isn’t the Solution to All of Our Problems.” They write:

“The belief that AI is a cure-all tool that will magically deliver solutions if only you can collect enough data is misleading and ultimately dangerous as it prevents other effective solutions from being implemented earlier or even explored. Instead, we need to both build AI responsibly and understand where it can be reasonably applied.”

In other words: We need to stop viewing AI as a fix-it tool and start treating it as a consultant to collaborate with over months or years. While there’s little doubt that artificial intelligence can help business leaders cultivate profit and improve their business, their deployment of the technology must be done strategically — and with the understanding that the business probably won’t see the gains it hopes for on its first attempt to integrate AI.

If business leaders genuinely intend to make the most of the opportunity that artificial intelligence presents, they should be prepared to workshop. Adopt a flexible, experimental, and strategic mindset. Be ready to adjust your business operations to address any inefficiencies or opportunities the technology may spotlight — and, by that same token, take the initiative to continually hone your algorithms for greater accuracy. AI can provide guidance and inspiration, but it won’t offer outright answers.

Businesses are investing millions — often tens of millions — in AI technology. Why not take the time to learn how to use it properly?

This article was originally published on ChiefExecutive.net

January 23rd, 2021 | Business, Technology

Autonomous Cars: A Smart Cities Answer to COVID-Proof Transit?

Of all the circumstances that we might have imagined kickstarting America’s smart city aspirations, a pandemic surely wasn’t on our list. And yet, our anxieties over disease transmission might just be the fuel that propels us towards a future in which autonomous cars become the urban norm.

A huge setback for public transit

For the last several months, the COVID-19 pandemic has compelled us to change our perspectives to suit a newly disease-aware world. We’ve adapted our day-to-day routine to suit social distancing recommendations and become leery of crowded, high-traffic areas. Our faith in public transit, in particular, has been shaken so profoundly that it very nearly demands an innovative fix. Time magazine recently described COVID-19’s impact on public transit as “apocalyptic.”

“[Buses] that once carried anywhere from about 50 to 100 passengers have been limited to between 12 and 18 to prevent overcrowding in response to coronavirus […] Seattle transit riders have described budgeting as much as an extra hour per trip to account for the reduced capacity, eating into their time at work, school or with family,” Time’s Alejandro de la Garza wrote in July.

Sometimes, riders’ anxieties compel them to leave the bus before their stop; one woman who de la Garza interviewed described exiting several stops early with her seven-year-old son after the driver allowed a crowd of people to board at once.

“It’s very trying,” the source, Brittany Williams, shared. “I’ll put it in those terms.”

How can we keep public transit viable?

The obvious answer to the overcrowding and slow-transit problems would be to add more buses — but such a move doesn’t seem economically feasible with the current decline in public transit use. In July, the Transit App reported a 58 percent year-over-year reduction in travelers within Williams’ home city of Seattle.

Numbers are worse in Washington, D.C., with a 66 percent decline in Metrobus use and a 90 percent drop in Metrorail traffic. The losses experienced in New York City are among the worst, with the Transit App noting a 95 percent drop in the spring and a still-alarming 84 percent reduction in late summer.

Pandemic fears have limited traveling, which in turn has limited fares to a trickle and all but eliminated cities’ abilities to add to their public transit fleets. According to a recent McKinsey report, 52 percent of American respondents travel less than they did before COVID-19. Many who do travel opt for a private vehicle over bus or train trips. A full third of surveyed consumers say that they “value constant access to a private vehicle more than before COVID-19.”

To risk stating the obvious: not everyone can buy or store a private car, nor should they even if they could. Replacing public transit with individual vehicles would be environmentally disastrous and would dramatically exacerbate existing traffic and parking problems. Moreover, reports indicate that purchasing intent has dropped with the economic downturn; people don’t want to buy new cars when their incomes are uncertain.

An opening for autonomous cars

But I would argue that city-dwellers don’t necessarily need private cars — they just need a mode of transport that offers the isolated, sterilized feel of personal vehicles with the cost-efficiency and dependability that characterizes good public transit. Ridesharing services like Uber and Lyft have set the groundwork for this, but aren’t a perfect fit. They’re expensive, focused on one person at a time, and naturally pose a virus-spread risk to passengers and drivers alike. But what if there were no drivers, only a limited number of masked and isolated passengers traveling pre-defined, regular routes?

Years ago, architect Peter Calthorpe painted a vision of California cities with autonomous cars that was very nearly this, writing: “Down the center of El Camino, on dedicated, tree-lined lanes, [would be] autonomous shuttle vans. They’d arrive every few minutes, pass each other at will, and rarely stop, because an app would group passengers by destination.”

There’s a window of opportunity to reshape consumer perception of autonomous cars within a public-transit context. Instead of anxiously fleeing buses inundated with close-seated crowds, mothers like Brittany Williams could order an autonomous ride and sit, as per a COVID-optimized version of Calthorpe’s vision, either alone or with one or two distanced others. Between routes, these cars could be sanitized and sent off to pick up new passengers. Such an approach would establish self-driving vehicles not as a one-person luxury, but as a new and COVID-thoughtful form of public transportation.

The sustainability and convenience benefits of adding a self-driving shuttle service to public transit are considerable: lessening the need for private cars, mitigating traffic gridlock, and improving passenger convenience. Autonomous shuttles could shoulder at least some of the burden carried by other public transit services and reduce the need for additional (if half-filled) buses and trains.

While it is true that Uber and Lyft have been talking about developing autonomous cars and next-gen taxi services for years to no avail, we are now closer than ever before to achieving viable autonomous driving technology. Earlier this year, the GM-backed driverless car startup Cruise received a permit from the California DMV that would allow the company to test driverless cars without safety drivers, albeit only on specific roads.

This represents a significant step forward in the deployment of autonomous cars and, if successful, could lead to the first fully autonomous vehicles on public roads. It is worth noting that despite delays, Cruise hopes to launch a self-driving taxi service soon; its fourth-generation autonomous car features automatic doors, rear-seat airbags, and, notably, no steering wheel.

If Cruise can manage to accomplish this, it stands to reason that autonomous shuttles are not all that far away. If anything, cities might have more opportunities to partner with self-driving startups and incorporate autonomous shuttles into municipal transit. Given that pandemic-prompted anxieties will likely persist until (if not well beyond) the emergence of a mass-produced vaccine, it seems likely that the window of opportunity for piquing consumer interest in socially-distanced autonomous transit could extend out over years.

Of course, there are a few clear speed bumps in the way.

For one, there is still a pervasive stigma around the perceived safety of autonomous cars. Uber memorably halted its experiments in 2018, when one of its experimental vehicles struck and killed pedestrian Elaine Herzberg in Tempe, Arizona.

At the time, there were rumors that the company planned to divest itself of its self-driving interests entirely; however, the company has begun to restart its efforts on a significantly smaller scale in recent months. Cruise — and any other autonomous car startup that takes on the challenge — will need to assure the public of its products’ safety before it can achieve widespread acceptance.

Another major issue will be cost.

With public transit in such dire straits, obtaining the funds for a partnership between self-driving car startups and municipal transit may prove difficult in the short term unless local governments are convinced both of the public’s need for autonomous shuttles and of the revenue that meeting that need could generate. Proponents will need to launch a media campaign to raise public awareness and bolster backing for adding autonomous shuttles to municipal transit.

If we can get beyond some of these initial hurdles, we can kickstart a smart, sustainable, and COVID-aware urban transit system. As with the early days of online shopping, consumer perceptions of autonomous driving could quickly shift from laughable luxury to must-have public service, especially under pandemic conditions.

Originally published on TriplePundit.com

November 13th, 2020 | Current Events, Technology, Urban Planning

Amazon Proves That a Competitive Culture Beats an Anti-Competitive Policy, Every Time

Once more, titans of industry have fallen under censure for perceived monopolization and the abuse of their considerable power. But this time, their names aren’t Carnegie, Rockefeller, or Vanderbilt, but Bezos, Zuckerberg, Pichai, and Cook.

In recent weeks, all four have faced hard questions about perceived corporate misbehavior. The concerns directed towards each corporate icon may differ according to the specifics of their company’s actions, but all ask the same essential question: Can massive tech companies keep themselves from intimidating or using the small businesses that increasingly rely on their platforms to survive?

In late July, the House Judiciary Committee convened a hearing to address the matter. The event marked the culmination of an extensive antitrust investigation that encompassed over a million corporate documents and hundreds of hours of personnel interviews. One reporter for the Verge characterized the hearing as “one of the biggest tech oversight moments in recent years.” Representative David Cicilline, chair of the Subcommittee on Antitrust, Commercial and Administrative Law, made the subcommittee’s belief in the importance of the hearing clear at its outset.

“Because these companies are so central to our modern life, their business practices and decisions have an outsized effect on our economy and our democracy,” Cicilline said. “Any single action by any one of these companies can affect hundreds of millions of us in profound and lasting ways.”

Cicilline further argued that each of the four tech companies under investigation — Amazon, Facebook, Google, and Apple — controls a crucial channel of distribution, such as an app store or ad venue, and uses monopolistic methods to buy up or otherwise block potential competitors. He also noted that the companies all either favor their own branded products or create pricing schemes that undermine third-party brands’ ability to compete.

As you might have already guessed, each case has a wealth of associated information and considerations. Recapping them, let alone providing commentary, would be challenging at best. So, instead, I want to consider the question of whether or not a business can be both a market ecosystem and fair competitor through the context of one business: Amazon.

Amazon came under fire earlier this year, when the Wall Street Journal released a stunning report revealing that the e-retailer had used data from its third-party sellers — data that was believed to be proprietary — to inform the development and sale of competing private-label products.

This revelation sent shockwaves through the business community, despite the fact that it wasn’t entirely unanticipated; according to reporting from the Verge, the European Union’s main antitrust body claimed that it was “investigating whether Amazon is abusing its dual role as a seller of its own products and a marketplace operator and whether the company is gaining a competitive advantage from data it gathers on third-party sellers” in 2019.

Amazon has pushed back on these concerns, claiming that it has policies that forbid private-label personnel from obtaining specific seller data. However, the Wall Street Journal’s interviews of former and current employees found that the rule was inconsistently enforced and overlooked so often that the use of third-party, proprietary data was openly discussed in product development meetings.

“We knew we shouldn’t,” one former employee said while recounting a pattern of using seller data to launch and bolster Amazon products. “But at the same time, we are making Amazon branded products, and we want them to sell.”

And therein lies the core of the problem. Amazon is a company that maintains a laser focus on success — even to the point that its employees are willing to circumvent policy for its sake. But we can’t blame the employees, not entirely. The tech industry has long been known for its move-fast-and-break-things attitude, and Amazon more than most; the e-retailer’s obsession with achievement is near-legendary.

In 2015, New York Times reporters Jodi Kantor and David Streitfeld published an exposé that painted Amazon’s culture as one specifically designed for intense, high-output, and unforgiving efficiency.

“Every aspect of the Amazon system amplifies the others to motivate and discipline the company’s marketers, engineers and finance specialists: the leadership principles; rigorous, continuing feedback on performance; and the competition among peers who fear missing a potential problem or improvement and race to answer an email before anyone else,” Kantor and Streitfeld described.

“The culture stoked their willingness to erode work-life boundaries, castigate themselves for shortcomings (being ‘vocally self-critical’ is included in the description of the leadership principles) and try to impress a company that can often feel like an insatiable taskmaster.”

The article even noted that Amazon holds yearly firing sessions (dubbed “cullings” in the exposé) to shed those who don’t perform up to its notoriously high standards. Illness, parenthood, and even family loss — none were considered excuses for lapses in performance.

Given the stressful environment and achievement-at-all-costs mentality, is it any surprise that employees would sneak around a barely-enforced policy to obtain data that will help their projects succeed? I would say no.

In a culture that positions cutthroat competitiveness as a professional survival mechanism, an anticompetitive policy is little more than flimsy caution tape: readily seen, easily circumvented, and meant more to provide plausible deniability than to prevent anyone from breaking the rules.

And, of course, we have to acknowledge the point that a company that periodically culls its staff for the sake of efficiency wouldn’t mind pushing blame onto a worker who happens to get caught. Bezos already did so in his hearing. He testified, “What I can tell you is we have a policy against using seller-specific data to aid our private label business but I can’t guarantee that policy has never been violated.”

Another hearing exchange between Cicilline and Bezos is particularly telling.

Cicilline asks, “Isn’t it an inherent conflict of interest for Amazon to produce and sell products that compete directly with third party sellers, particularly when you, Amazon, set the rules of the game?”

Bezos responds: “The consumer is the one making the decisions.”

But how is that an appropriate response, when the data Amazon collects gives the e-retailer an unfair advantage in designing and marketing products built to outstrip the competition? It remains to be seen whether legislators will ultimately choose to spin off Amazon’s marketplace from its AmazonBasics line, but Amazon has proven beyond a doubt that it is naive to believe that a company built with a crush-the-competition mentality should be trusted with safeguarding smaller, more vulnerable competitors’ proprietary data.

Company culture beats policy, every time.

Originally published on Medium

October 19th, 2020 | Business, Culture, Current Events

Quarantine Can be a Pressure Cooker for Inspiration

Life in quarantine feels like an odd suspension of real life; a time in which the world grinds to an indefinite, boring, and under-achieving halt.

When you live in New York, it only ever takes a glance out the window to remind yourself that the country is in a state of emergency. The streets are oddly silent; the few who brave the open air wear makeshift masks and veer in fearful six-foot detours around other pedestrians. Flip on the news, and you get a cacophony of stories that riff on the same two questions — What’s happening with COVID-19? When will it end? — in a continuous loop. Sheltering at home somehow generates as much exhaustion as it does restless cabin fever.

This period of pause is the pits for everyone — but especially so for aspiring entrepreneurs. Currently, more than 40 states have implemented shelter-in-place mandates, isolating over 316 million people in their homes. Businesses of all stripes have shuttered their doors or attempted to shift their day-to-day work into a new, remote normal. It’s a unique and stressful time to be in business; according to a recent study by PwC, nearly three-quarters of surveyed American CFOs are “greatly concerned” about COVID-19’s impact on their operations. There could theoretically be a worse time to start a business, but the current pandemic would be hard to top.

But here’s the thing. Amidst all of this business stress and well-deserved economic concern, there is room for hope. While there’s no doubt that the COVID-19 crisis has burdened us with challenges, it has also compelled tech-forward entrepreneurs across countless industries to pivot into a frantic period of innovation.

To borrow a quote from Entrepreneur writer Hamza Mudassir, “Black swan events, such as economic recessions and pandemics, change the trajectory of governments, economies, and businesses — altering the course of history.” The coronavirus will likely do the same — perhaps, even, for the better.

In recent weeks, we’ve seen an explosion of digital teleworking solutions, teaching tools, therapy and stress-relief apps, and retail solutions that, I believe, will benefit us even when the pandemic finally recedes. These offerings are only available because forward-thinking entrepreneurs took the initiative to see beyond the immediate crisis and give consumers what they need. They continued problem-solving even in a world in lockdown.

I’m not necessarily saying that now is the time to build your business — quite the opposite. But there are steps that aspiring entrepreneurs can take to keep their entrepreneurial dreams alive and prepare for when consumers and business leaders alike can finally step into the open air.

“This will be a before moment and an after moment for the world,” OpenAI CEO Sam Altman recently told CNBC. “There’s incredible innovation coming.”

Here’s what you can do to get ready for that ‘after’ moment.

Reflect on Your Business Idea

If there’s one fact that we know for sure, it’s that society will feel the repercussions of COVID-19 long after the virus itself fades.

“We’re going to have to work through this quarantine state of mind even when the physical quarantine has lifted,” Sheva Rajaee, founder of the Center for Anxiety and OCD in Irvine, California, recently told reporters for Vox.

Despite our assertions that we’ll make up for lost time and treasure in-person interactions once shelter-in-place restrictions lift, it seems likely that our current fears of infection and interpersonal contact will persist even as we transition back into ordinary life. Aspiring entrepreneurs need to look at their business ideas and consider whether they could be retooled to better suit the needs of a consumer base that increasingly treasures at-home services. Alternatively, entrepreneurs may want to consider how their business could be pivoted to lessen their reliance on in-person contact and maximize their use of digital channels.

Build Your Connections

Entrepreneurship is an inherently lonely profession. While a friend, a colleague, or a partner may sympathize with your anxiety or celebrate your wins, they can never fully understand the nerve-wracking thrill that comes part and parcel with building a business. During hard times like these, that kind of loneliness can feel crippling; it lowers morale, reduces productivity, and dampens creativity.

But, if you can build a network of people who truly understand the struggle from firsthand experience, you’ll be better equipped to face the entrepreneurial challenge head-on. As entrepreneur and writer David Sax put the matter in a recent article for Fast Company, “We need to build a community of entrepreneurs who can lean on each other, learn from each other, and let one another know that while they may feel as though they are facing the world alone, their experience is shared, and in some way, the burden is too.”

Reach out to entrepreneur-based social media groups; get involved with your local small business organizations; forge real connections with the acquaintances you’ve meant to contact but never have. Take the time to build a supportive network, and you’ll see tenfold returns in both support and creativity.

Balance Your Perspective

The pandemic is happening. Yes, it may seem like stating the obvious — but it needs to be said. Society will be struggling through this challenge for a while, and the repercussions will persist for months, if not years.

As Forbes contributor Hod Fleishman recently wrote, “We need to accept that reality is changing, identify what works, and […] define new ways of working. COVID-19 is terrible, it’s a tragedy, but it also opens up new business opportunities.”

We need to find a way to move on and thrive despite the hardship and uncertainty we face right now. Strive for creativity and productivity — and when you feel overwhelmed, give yourself the time you need to process the stress.

I wholeheartedly believe that with persistence, optimism, and effort, entrepreneurs can get through the COVID-19 crisis and do their part to make our world a better, more creative place.

Originally published on ThriveGlobal

September 11th, 2020 | Business, Culture

For Investors, Property Tech Goes Far Beyond a Smart Home

At first listen, the term “property tech” seems to fit comfortably within the context of ultra-luxurious modernism. We picture something at home within sleek glass-and-metal walls and minimalist design. We imagine an IoT-powered abode where the temperature, lights, and connected outlets can be adjusted with a few smartphone taps or an offhand remark, and a security app allows you to video chat with doorstep visitors from halfway around the world.

These products align with the average consumer’s idea of residential technology. But for those in the commercial real estate sector, “property tech” has an entirely different definition — one far removed from the realm of modernist homeowners and IoT-enthusiasts. In fact, far from being an unnecessary luxury, property tech stands a good chance of revolutionizing commercial real estate at every point, from development to sales to property management.

Prop Tech: A Promising New Frontier for Commercial Real Estate

As defined by TechTarget, property technology (PropTech) refers to the “use of information technology (IT) to help individuals and companies research, buy, sell, and manage real estate.” Innovative PropTech solutions are usually designed to facilitate greater efficiency and connectivity in the real estate market, allowing consumers and vendors at all levels to achieve their goals quickly and at high quality. While PropTech capabilities vary widely across products, they tend to fall into three broad categories: smart home, real estate sharing, and fintech.

The first category encompasses the majority of the IoT-powered home devices mentioned at the top of this piece — the smart thermostats, remotely-controlled home systems, and digital security solutions. Real estate sharing refers to online platforms like Airbnb, Redfin, and Zillow, which facilitate the advertisement and sale of real-world properties. The last term is all but self-explanatory; “fintech” references any tool that assists in real estate financial management or transactions.

The potential that PropTech holds to reform the commercial real estate sector is off the charts — and investors know it. According to a recent Re:Tech report, global investment in real estate technology netted an incredible $12.6 billion across 347 deals in 2017 alone, $6.5 billion of which funneled directly to U.S.-based companies. Re:Tech researchers further noted that investment trends indicated a great deal of early interest in untested PropTech solutions, with early-stage companies receiving “the lion’s share” of funding dollars.

Early Successes Illustrate High Potential

This flurry of investor interest isn’t without basis. The PropTech sector has seen runaway growth and concrete success in recent years; aside from the evident popularity of digital-forward platforms like Airbnb and Zillow in the rental and buying markets, adoption of smart home technology has reached a fever pitch. Deloitte reports that sensor deployment in real estate is projected to grow at a steep annual rate and will likely top 1.3 billion sensors in 2020.

Some companies have even incorporated cutting-edge PropTech innovations into their business model to remarkable success. Take the Texas-based real estate investment firm Amherst Holdings as an example. Last year, Forbes profiled the firm’s use of AI and data modeling during the asset identification process, noting how Amherst used AI not only to discover investment properties, but also to make dozens of offers per day on potentially lucrative homes. The strategy has paid off; today, the investment firm is thriving, and its portfolio encompasses an incredible 16,000 homes across the American Sunbelt region.

New York: A New Sandbox for PropTech Creativity?

Now, however, companies may not need to foray into PropTech testing without support. Last November, New York announced a pilot program that would allow PropTech startups to trial their products via NYC’s portfolio of public properties. As stated in a press release, “The New York City Economic Development Corporation will launch a pilot program that allows companies to implement proof-of-concept property technology products in the city’s 326.1 million square feet of owned and managed real estate.”

“We want to make our buildings available to incentivize the kinds of innovations that you are all out there working on day in and day out,” Vicki Been, the deputy mayor for housing and development, commented. “We want our buildings and our tenants to be helpful to you, and provide a way to test some of the ideas that you are developing so that we can get those ideas out to the market and into buildings even faster.”

In this way, the city is offering itself up as an innovation sandbox, a place where real estate innovators can test and troubleshoot their digital tools to the betterment of all — and especially New Yorkers.

With this philosophy of openness and curiosity comes an opportunity for New York-based real estate players to not only test innovative approaches but put them together into a unified strategy. We’ve all seen companies find significant success by leveraging one variety of PropTech solution. Airbnb thrives in facilitating short-term real estate transactions, Google and Amazon have cornered the smart home market, and Amherst Holdings has established a winning, AI-powered strategy for finding and acquiring assets. Individually, all of these tactics show impressive results — but what could we achieve if we managed to link them together?

The Tools of Today Could Create the RE Strategy of Tomorrow

In theory, the disparate PropTech solutions we see now could be stitched into a seamless strategy. The strategy might progress as follows — real estate operators could use AI and data modeling to identify lucrative neighborhoods and home in on investment properties, then apply fintech-powered tools to purchase those buildings. Next, they might retrofit their assets with IoT sensors that can ensure optimal utility use and management. These IoT-equipped devices could also better automate the care of a building by notifying owners when a system requires maintenance and providing real-time insights into how tenants use the space.

When linked, these PropTech solutions can work in concert, allowing property firms an opportunity to gain better insights into how they can best use, maintain, and improve their asset properties.

The implications for commercial real estate improvement are huge — and, to be clear, this is all available technology. Real estate operators could incorporate PropTech into their strategic workflow today if they wanted. Will that change require some upfront investment and effort? Absolutely — but, as New York’s decision to offer itself as a testing sandbox demonstrates, there is no better time for real estate operators to get ahead of the curve and start crafting unified strategies than right now.

Originally published on 

July 20th, 2020 | Business, Current Events, Technology

Could COVID-19 Kickstart Surveillance Culture?

Several months ago, saying that the “cure” facial recognition offers is worse than the disease would have seemed hyperbolic. But now, the metaphor has become all too literal — and the medicine it promises isn’t quite so easy to reject when sickness is sweeping the globe.

Even as it depresses economies across the world, the coronavirus pandemic has sparked a new period of growth and development for facial recognition technology. Creators pitch their tools as a means to identify sick individuals without risking close-contact investigation.

In China, the biometrics company Telpo has launched non-contact body temperature measurement terminals that — they claim — can identify users even if they wear a face mask. Telpo is near-evangelical about how useful its technology could be during the coronavirus crisis, writing that “this technology can not only reduce the risk of cross infection but also improve traffic efficiency by more than 10 times […] It is suitable for government, customs, airports, railway stations, enterprises, schools, communities, and other crowded public places.”

COVID-19: A Push Towards Dystopia?

At a surface glance, Telpo’s offerings seem…good. Of course we want to limit the spread of infection across public spaces; of course we want to protect our health workers by using contactless diagnostic tools. Wouldn’t we be remiss if we didn’t at least consider the opportunity?

And this is the heart of the problem. The marketing pitch is tempting in these anxious, fearful times. But in practice, using facial recognition to track the coronavirus can be downright terrifying. Take Russia as an example — according to reports from BBC, city officials in Moscow have begun leveraging the city’s massive network of cameras to keep track of residents during the pandemic lockdown.

In desperate times like these, the knee-jerk suspicion that we typically hold towards invasive technology wavers. We think that maybe, just this once, it might be okay to accept facial recognition surveillance — provided, of course, that we can slam the door on it when the world returns to normal. But can we? Once we open Pandora’s box, can we force it shut again?

In March, the New York Times reported that the White House had opened talks with major tech companies, including Facebook and Google, to assess whether using aggregated location data sourced from our mobile phones would facilitate better containment of the virus. Several lawmakers immediately pushed back on the idea; however, the discussion does force us to wonder — would we turn to more desperate measures, like facial surveillance? How much privacy would we sacrifice in exchange for better perceived control over the pandemic?

Understanding America’s Surveillance Culture Risk

I’ve been thinking about this idea ever since January, when an exposé published by the New York Times revealed that a startup called Clearview AI had quietly developed a facial recognition app capable of matching unknown subjects to their online images and profiles — and promptly peddled it to over 600 law enforcement agencies without any public scrutiny or oversight. Clearview stands as a precursor: a budding example of what surveillance culture in America could look like if left unregulated. One quote in particular sticks in my head.

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” David Scalzo, the founder of a private equity firm currently investing in Clearview, told the Times. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

Scalzo’s offhand, almost dismissive tone strikes an odd, chilling contrast to the gravity of his statement. If facial recognition technology will lead to a surveillance-state dystopia, shouldn’t we at least try to slow its forward momentum? Shouldn’t we at least consider the dangers that a dystopia might pose — especially during times like these, when privacy-eroding technology feels like a viable weapon against global pandemic?

I’m not the only one to ask these questions. Since January’s exposé, Clearview AI has come under fire from no fewer than four lawsuits. The first castigated the company’s app for being an “insidious encroachment” on civil liberties; the second took aim at both Clearview’s tool and the IT products provider CDW for licensing the app for law enforcement use, alleging that “The [Chicago Police Department] […] gave approximately 30 [Crime Prevention and Information Center] officials full access to Clearview’s technology, effectively unleashing this vast, Orwellian surveillance tool on the citizens of Illinois.” The company was also recently sued in Virginia and Vermont.

All that said, it is worth noting that dozens of police departments across the country already use products with facial recognition capabilities. One report on the United States’ facial recognition market found that the industry is expected to grow from $3.2 billion in 2019 to $7.0 billion by 2024. The Washington Post further reports that the FBI alone has conducted over 390,000 facial-recognition searches across federal and local databases since 2011.

Unlike DNA evidence, facial recognition technology is relatively cheap and quick to run, which lends it easily to everyday use. It stands to reason that as better technology becomes available, adoption by public agencies will only grow more commonplace. We need to keep this slippery slope in mind: during a pandemic, we might welcome tools that allow us to track and slow the spread of disease and overlook the dangerous precedent they set in the long term.

Given all of this, it seems that we should, at the very least, avoid panic-prompted decisions to allow facial recognition — and instead, consider what we can do to avoid the slippery slope that facial recognition technology poses.

Are Bans Protection? Considering San Francisco

In the spring of 2019, San Francisco passed legislation that outright forbade government agencies from using tools capable of facial surveillance — although the ordinance was later amended to allow devices equipped with the technology when no viable alternative exists. The lawmakers behind the new ordinance stated their reasoning clearly, writing that “the propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits.”

They have a point. Facial recognition software is notorious for its inaccuracy. One new federal study also found that people of color, women, older subjects, and children faced higher misidentification rates than white men.

“One false match can lead to missed flights, lengthy interrogations, tense police encounters, false arrests, or worse,” Jay Stanley, a senior policy analyst at the American Civil Liberties Union (ACLU), told the Washington Post. “But the technology’s flaws are only one concern. Face recognition technology — accurate or not — can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale.”

While it’s still too early to have a clear gauge on the ban’s efficacy, it is worth noting that the new legislation sparked a few significant and immediate changes to the city’s police department. In December, Wired reported that “When the surveillance law and facial recognition ban were proposed in late January, San Francisco police officials told Ars Technica that the department stopped testing facial recognition in 2017. The department didn’t publicly mention that it had contracted with DataWorks that same year to maintain a mug shot database and facial recognition software as well as a facial recognition server through summer 2020.”

The department scrambled to dismantle the software after the ban, but its secretive approach remains problematic: the very fact that the San Francisco Police Department was able to acquire and apply facial recognition technology without public oversight is troubling. The city’s current restrictions offer a stumbling block by limiting acceptance of surveillance culture as a normal part of everyday life — and prevent us from automatically reaching for it as a solution during times of panic.

A stumbling block, however, is not an outright barricade. Currently, San Francisco is under a shelter-in-place mandate; as of April 6, it had a reported 583 confirmed cases and nine deaths. If the situation worsens, could organizers suggest that the city make an exception and use facial recognition tracking to flatten the curve, just this once? It’s a long-shot hypothetical, but it does lead us to wonder what could happen if we let circumstances ease us into surveillance culture, one small step at a time.

Bans can only do so much. While the San Francisco ruling proves that Scalzo’s claim that “Laws have to determine what’s legal, but you can’t ban technology” isn’t, strictly speaking, correct, the sentiment behind it remains. Circumstances can compel us to consider privacy-eroding tech even as those explorations lead us down a path to dystopia.

So, in a way, Scalzo is right; the proliferation of facial recognition technology is inevitable. But that doesn’t mean that we should give up on bans and protective measures. Instead, we should pursue them further and slow the momentum as much as we can — if only to give ourselves time to establish regulations, rules, and protections. We can’t give in to short-term thinking; we can’t start down the slippery slope towards surveillance culture without considering the potential consequences. Otherwise, we may well find that the “cure” that facial recognition promises is, in the long term, far worse than any short-term panic.

Originally published on Hackernoon.com

June 12th, 2020 | Business, Current Events, Technology

What Does It Matter If AI Writes Poetry?

Robots might take our jobs, but they (probably) won’t replace our wordsmiths.

These days, concerns about the slow proliferation of AI-powered workers underlie a near-constant, if quiet, discussion about which positions will be lost in the shuffle. According to a report published earlier this year by the Brookings Institution, roughly a quarter of jobs in the United States are at “high risk” of automation. The risk is especially pointed in fields such as food service, production operations, transportation, and administrative support — all sectors that require repetitive work. However, some in creatively driven disciplines feel that the thoughtful nature of their work protects them from automation.

Passages from A Giant Pile of Ash, the AI-generated Harry Potter chapter in which (among other absurdities) Ron devours Hermione’s family, are certainly memorable — both for their utter lack of cohesion and their familiarity. The tone and language almost mimic J.K. Rowling’s style — if J.K. Rowling lost all sense and decided to create cannibalistic characters, that is. Passages like these are both comedic and oddly comforting. They amuse us, reassure us of humans’ literary superiority, and prove to us that our written voices can’t be replaced — not yet.

However, not everything produced by AI is as ludicrous as A Giant Pile of Ash. Some pieces teeter on the edge of sophistication. Journalist John A. Tures experimented with the quality of AI-written text for the Observer. His findings? Computers can condense long articles into blurbs well enough, albeit with errors and the occasional missed point. As Tures described, “It’s like using Google Translate to convert this into a different language, another robot we probably didn’t think about as a robot.” It’s not perfect, he writes, but neither is it entirely off the mark.

Moreover, he notes, some news organizations are already using AI text bots to do low-level reporting. The Washington Post, for example, uses a bot called Heliograf to handle local stories that human reporters might not have the time to cover. Tures observes that these bots are generally effective at writing grammatically accurate copy quickly, but tend to lose points on understanding the broader context and meaningful nuance within a topic. “They are vulnerable to not understanding satires, spoofs or mistakes,” he writes.

And yet, even with their flaws, this technology is significantly more capable than those who look only at comedic misfires like A Giant Pile of Ash might believe. In an article for the Financial Times, writer Marcus du Sautoy reflects on his experience with AI writing, commenting, “I even employed code to get AI to write 350 words of my current book. No one has yet identified the algorithmically generated passage (which I’m not sure I’m so pleased about, given that I’m hoping, as an author, to be hard to replace).”

Du Sautoy does note that AI struggles to create overarching narratives and often loses track of broader ideas. The technology is far from being able to write a novel — but even though he relegates his unease at the AI’s ability to fit seamlessly into his work to a parenthetical aside, the point he makes is essential. AI is coming dangerously close to being able to mimic the appearance of literature, if not the substance.

Take Google’s POEMPORTRAITS as an example. In early spring, engineers working in partnership with Google’s Arts & Culture Lab rolled out an algorithm that could write poetry. The project leaders, Ross Goodwin and Es Devlin, trained an algorithm to write poems by supplying the program with over 25 million words written by 19th-century poets. As Devlin describes in a blog post, “It works a bit like predictive text: it doesn’t copy or rework existing phrases, but uses its training material to build a complex statistical model.”

When users donate a word and a self-portrait, the program overlays an AI-written poem on a colorized, Instagrammable version of their photograph. The poems themselves aren’t half bad on the first read; Devlin’s goes: “This convergence of the unknown hours, arts and splendor of the dark divided spring.”
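For readers curious what “a bit like predictive text” means in practice, here is a toy, word-level illustration of the general idea: a tiny statistical model that learns which words tend to follow which in a training text, then walks those tendencies to produce new lines. This is only a sketch of the technique, not the far larger model behind POEMPORTRAITS; the miniature corpus and the seed word below are invented for the example.

```python
# Toy sketch of "predictive text"-style generation: a word-level Markov model.
# Not the actual POEMPORTRAITS model -- the corpus and seed here are invented.
import random
from collections import defaultdict

corpus = (
    "the dark divided spring returns and the unknown hours converge "
    "in splendor the dark hours divide and spring returns unknown"
)

def train(text):
    """Record which words follow each word in the training text."""
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, seed, length=12):
    """Start from a seed word and repeatedly pick a plausible next word."""
    word, output = seed, [seed]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

model = train(corpus)
print(generate(model, seed="dark"))
# e.g. "dark divided spring returns and the unknown hours converge in splendor the"
```

A model like this doesn’t store whole sentences; it only learns local word-to-word patterns, which is why its output can sound plausible while meaning nothing in particular.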

As Devlin herself puts it, the poetry produced is “surprisingly poignant, and at other times nonsensical.” The AI-provided poem sounds pretty, but is at best vague, and at worst devoid of meaning altogether. It’s a notable lapse, because poetry, at its heart, is about creating meaning and crafting implication through artful word selection. The beauty of a turn of phrase matters — but it’s in no way the most important part of writing verse. In this context, AI-provided poetry seems hollow, shallow, and without the depth or meaning that drives literary tradition.

In other words, even beautiful phrases will miss the point if they don’t have a point to begin with.

In his article for the Observer, John A. Tures asked a journalism conference attendee his thoughts on what robots struggle with when it comes to writing. “He pointed out that robots don’t handle literature well,” Tures writes, “covering the facts, and maybe reactions, but not reason. It wouldn’t understand why something matters. Can you imagine a robot trying to figure out why To Kill A Mockingbird matters?”

Robots are going to venture into writing eventually — the forward march is already happening. Prose and poetry aren’t as well protected within the bastion of creative employment as we think they are; over time, we could see robots taking over roles that journalists used to hold. In our fake-news-dominated social media landscape, bad actors could even weaponize the technology to flood our media feeds and message boards. It’s undoubtedly dangerous — but that’s a risk that’s been talked about before.

Instead, I find myself wondering about the quieter, less immediately impactful risks. I’m worried that when AI writes what we read, our ability to think deeply about ourselves and our society will slowly erode.

Societies and individuals grow only when they are pushed to question themselves: to think, to delve into the why behind their literature. We’re taught why To Kill a Mockingbird matters because that process of deep reading and introspection makes us think about ourselves, our communities, and what it means to want to change. In a world where so much of our communication is distilled down into tweet-optimized headlines and blurbs, where we’re not taking the time to read below a headline or first paragraph, these shallow, AI-penned lines are problematic — not because they exist, but because they do not spur thought.

“This convergence of the unknown hours, arts and splendor of the dark divided spring.”

The line sounds beautiful; it even evokes a vague image. Yet it has no underlying message — although, to be fair, it was never meant to make a point beyond coherence. It’s shallow entertainment under a thin veil of sophistication. It can’t sustain an overarching narrative, doesn’t capture nuance, and fails to grasp the heartbeat of human history, empathy, and understanding.

If it doesn’t have that foundation to create a message, what does it have? When we get to a place where AI is writing for us — and make no mistake, that time will come — are we going to be thinking less? Will there be less depth to our stories, less thought inspired by their twists? Will our reading become an echo chamber rather than a path forward? At the very least, will these stories take over our Twitter and Facebook feeds, pulling us away from human-crafted stories that push us to think?

I worry that’s the case. But then again, maybe I’m wrong — maybe reading about how an AI thinks that Ron ate Hermione’s family provides enough dark, hackneyed comedy to reassure us that AI will never step beyond its assigned role as a ludicrous word-hacker.

For now, at least.

April 17th, 2020 | Culture, Technology