What comes to mind when you think of digital ethics? Without digital ethics guidelines in place, the ethical implications your startup could face can be severe.
Let me ask you this: as your company reaches scale, what's the New York Times headline you want to avoid if your products or technology were misused?
The answer can reveal a lot about your company culture, says Paula Goldman, Salesforce's first-ever Chief Ethical and Humane Use Officer.
Ethical Implications: Digital Ethics Rules To Always Consider
Paula posed this question during a fireside chat 500 Global hosted to explore how startups can think about building responsible technology—an increasingly important topic at a time when headlines are full of companies scrambling to adjust for missteps around their products and cultures (look no further than Meta’s recent move to end its facial recognition program amid growing criticism).
Conversely, in a move to embrace the opportunity of ethics in tech, Twitter appointed Rumman Chowdhury as Director of ML Ethics, Transparency and Accountability.
Technology Ethics Beyond Silicon Valley
Today, tech ethics isn’t just a concern for Silicon Valley giants. Early-stage companies should be focused on it too. As Salesforce likes to say: “Values create value.” Digital ethics, according to Paula, helps burnish a brand and can protect it from downside risk by enhancing customer relationships and the quality of the product itself. It can save time and money because companies don’t have to fix things later. It also differentiates products in a market with ever-increasing consumer concern over issues like data privacy. And with more investors prioritizing ESG, startups cannot afford to ignore tech ethics.
Our discussion with Paula dug into how startups can implement digital ethics at an organizational level and in product development. Paula has a wealth of knowledge on the subject, having worked on ethical tech investing with Omidyar Network long before it became a mainstream trend.
The Ethical Implications And Risks of Artificial Intelligence
At Salesforce, Paula has developed a strategic framework for the ethical and humane use of technology across the entire company. Advanced tech carries numerous known ethical implications and risks, especially artificial intelligence and facial recognition. These rapidly-morphing technologies present complex issues and have historically lacked the proper guardrails to prevent them from wronging users. The most noteworthy are racial and gender biases proliferated by these technologies, affecting people’s lives in fundamental ways (for example, determining who does or does not qualify for a credit card or a mortgage).
Another critical issue is data privacy. “Name your industry; data’s the currency,” says Paula. Customers today are aware of breaches of trust around data, making it essential for companies to carefully consider how they’re collecting and managing that data while still delivering value.
Yet tech ethics is still a developing field, and while narratives around protecting users and society seem intuitive now, hindsight is always 20/20. When Google created a chatbot capable of tricking someone into thinking it was human, the development was cheered. But such breakthroughs are now subject to plenty of scrutiny, showing how easy it is to lose sight of unintended consequences around complex tech. A good rule of thumb for founders developing ethical guardrails for product development and company culture: assume everything matters.
The world is changing quickly, and technology is influencing that evolution, for better and worse. Simply taking time to think about these unforeseen risks in planning and team conversations can make a world of difference, says Paula. It’s something all startups can do because this isn’t just a concern for traditional technology businesses anymore.
There are plenty of simple ways any company can address ethical tech immediately. Some are purely tactical moves, like adopting Fairlearn, an open-source toolkit helping improve fairness in AI systems. But ultimately, it’s more about widening the aperture to tackle this from the level of company culture, leadership and governance.
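To make the Fairlearn idea concrete, one of the core checks such toolkits automate is measuring whether a model approves members of different demographic groups at different rates (the "demographic parity difference"). The sketch below is a minimal, stdlib-only illustration of that metric with made-up loan-approval data; Fairlearn itself provides richer versions of this via its metrics module, and the function names and data here are illustrative, not Fairlearn's API.

```python
def selection_rate(predictions):
    """Fraction of positive (1) predictions in a list."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate across groups (0.0 = perfect parity)."""
    by_group = {}
    for pred, group in zip(y_pred, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [selection_rate(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

# Toy example: loan approvals (1 = approved) for two demographic groups.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" is approved 75% of the time, group "b" only 25% -> gap of 0.50.
gap = demographic_parity_difference(y_pred, groups)
print(f"demographic parity difference: {gap:.2f}")
```

A large gap doesn't by itself prove unfair treatment, but it flags exactly the kind of disparity, credit decisions skewing by group, that the article warns about, and it's cheap enough to run on every model release.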
When Paula zooms out to the big picture, the essence of her job is about culture. She’s giving people permission to ask questions and get rewarded for thinking about ethics and seeing them as a core part of the business that affects the success of products. That means baking ethical tech into company culture by designing and operationalizing a collective set of values (values that are openly recognized by leadership and promoted at all-hands meetings, annual gatherings and newsletters).
Digital Ethics And Facial Recognition
This approach has seen Salesforce make notable moves around digital ethics. A good example is facial recognition, which Salesforce has never offered. They made that decision after seeing problems with its accuracy—particularly around skin color and how that created potential adverse outcomes in a criminal justice context. “It wasn’t ready for prime time,” says Paula. Similarly, on AI, Salesforce’s acceptable use policy has a clause preventing customers from using their bots in a way that deceives end users into thinking they’re dealing with a human.
Salesforce has formalized its commitment to its values with an entire department dedicated to the ethical and humane use of tech, led by Paula. Created roughly three years ago, the department first focused on policy setting. Next, Paula focused on how Salesforce was building its technology and embedding company values into its product development lifecycle (that ties into another Salesforce mantra: “Ethics by Design”). They wanted to envision how customers would use products at scale and what vulnerabilities would arise, and then build safeguards directly into the tech before a product shipped.
As part of that process, they use a tool called Consequence Scanning. At the beginning of a development cycle, teams explore the potential consequences of a product—positive and negative, intended and unintended. That generates ideas that are documented in the backlog. “So it’s not just an isolated intellectual exercise,” says Paula. “It’s something that’s affecting the roadmap.”
Stakeholder involvement is crucial. Salesforce’s product research team deliberately recruits diverse participants to test a product, which helps with accessibility and inclusive design. They have an ethics advisory council for policy processes, including outside members, frontline employees and executives. The wider your network and the more consultation you get, the better, says Paula, even if feedback isn’t always uniform. “That’s like 90% of the game.”
They also have an online learning platform called Trailhead, with free courses on topics like ethical AI and inclusive design. These courses don’t take much time but send a significant signal, says Paula, creating light bulb moments that help teams think about products differently.
The Evolution And Future Of Tech Ethics
Ultimately, embracing ethical tech doesn’t mean that startups must completely overhaul their business. Often it’s about addressing small details and nuances. “If you get them right, it sets you up for a rocket ship-type situation,” says Paula. She points to eBay: as an e-commerce pioneer, it made early ethics decisions that set it apart, like deciding it wouldn’t allow Nazi memorabilia to be sold on its platform. “They didn’t have to; it wasn’t technically against the law,” says Paula. “They decided it didn’t fit their values as a company.” That wasn’t an expensive decision for eBay but an important one.
These are still the early days for ethical tech overall, but it’s gaining traction quickly. “Take note: you’ve got a bunch of massive companies that are taking this very seriously,” Paula says. That’s an important signal, but it will take a collective effort among investors, companies and society to ensure that ethical tech thrives. That includes startups—for founders, that represents an opportunity to make your mark by developing tech the right way.