Let’s do a quick rewind.

It’s May 2018: Prince Harry and Meghan Markle marry, Beyoncé and Jay-Z are about to drop their new album, and Mark Zuckerberg has just appeared before the US Congress to testify in response to the Cambridge Analytica scandal.

It also marked my one-and-a-half-year anniversary as an engineer at Palantir Technologies. Not unlike Facebook, Palantir was quickly becoming the subject of increasingly intense scrutiny for its alleged involvement in Trump’s heavy-handed immigration priorities. Media allegations had escalated to the point where a small group of employees began asking the same difficult questions.

And that’s how I found myself in mid-May 2018, sitting in a circle with about 30 other people, participating in one of many group discussions about U.S. Immigration and Customs Enforcement (ICE), facilitated by the Privacy and Civil Liberties team.

As a technical employee at any company, there are unfortunately few dedicated opportunities to step back and reflect on the broader ethical and societal implications of one’s day-to-day choices. When I was a computer science student at MIT, the only mention of ethics came during my junior year in the form of the infamous Therac-25 case study, the first paper assigned in my Computer Systems course.

The message I implicitly received from the tech industry was this: engineers are valuable resources, hired and paid large sums of money to write the code and build the thing, evaluated and rewarded for our ability to deliver on a deadline. Debating, critically analyzing, or getting caught up in the larger picture of the very thing being built was neither required nor especially productive. I quickly learned that this kind of thinking, and with it the responsibility for ethical reasoning, was outsourced to a dedicated team or to groups on the outside.

The longer I sat in that group discussion, the more I realized how sorely unequipped I was to express my own ethical views, how much I lacked the language to translate gut feelings into cohesive arguments, and how interested I had become in finding and learning from technologists and thought leaders who thrived at the intersection of technology and ethics.

I found the Digital Life Initiative (DLI) the way anybody finds anything these days: a Google search. I swapped a backlog of pull requests for a stack of academic papers, which I proceeded to devour. I traded questions like “Will this scale?” for “What does fairness mean?” and “Why does this even matter?”

So began my informal ethics training.

My new colleagues were not the software engineers I’d grown accustomed to but a diverse collection of rotating and yearlong scholars with backgrounds in law, philosophy, design, and human-computer interaction, all looking at technology through a lens of equity and privacy. Though intangible, my most valuable takeaway from my time at the DLI will be the thought-provoking interactions and discussions I was able to have with my fellow fellows. These discussions, coupled with the weekly seminar series, connected me to experts in everything from privacy and security in contexts of abuse, to kidney allocation as a case study in algorithmic governance, to the history of digital cash manifesting itself as present-day cryptocurrency.

My own research on algorithmic accountability and fairness was a hairy ball that seemed to constantly unravel into hairier questions, a stark contrast to the concrete features I used to build and deliver by concrete deadlines. The unanswerable challenged me to focus more on method and approach, the why and the how. For the second half of my fellowship, I wanted to challenge myself again: through community engagement, user interviews, and project building, I spent the remaining time creating small software projects in the civic space, with particular attention to the political nature of technology.

Now, at the end of my time at Cornell, I’m happily left with many more questions than answers. As the tech industry matures and technology itself becomes more pervasive in society, I believe that the average tech worker, not just tech leaders, will need to wrestle with, ask, and be an active participant in answering the ever-growing set of questions around technology’s role in furthering or eroding public values. Top universities, including my own alma mater, are recognizing this and leading the way, training a new generation of technologists who will lead by actively asking and engaging with these questions.

My time at the DLI has helped refine my definition of what a “socially minded technologist” could look like. To me, it involves first accepting that technology is political, being intentional about the values we choose to embed and the users we choose to empower, and then staying relentlessly rooted in serving and protecting those users, in both product design and business model. Because technology is always more than just technology, more partnerships and feedback loops between the tech industry, researchers, and community leaders must be built.

And now, it’s May 2019: Prince Harry and Meghan Markle just had their first baby, Beyoncé released her Homecoming album, and Mark Zuckerberg is still facing scrutiny, though this time from the Canadian Parliament.

More notably, my time at the DLI is coming to an end.

The next step on my journey will be a fellowship with Blue Ridge Labs, a tech incubator supported by the Robin Hood Foundation that specializes in creating and supporting early-stage social tech ventures. I plan to continue critically evaluating technology’s role in society and my own role in technology, while connecting with like-minded entrepreneurial people who hope to forge new, socially minded ways of building things for a more equitable society.