Sixth Lecture – “What do users understand about algorithms?” and “Methods to study algorithmic systems”

The last session dealt with two topics:

Discussion (protocol):
We talked about different approaches to understanding algorithms. One possibility is to recode an algorithm oneself to see if one understands the problem (see the sketch below). Another would be to look at different creative representations and explanations (e.g. dancing a sort algorithm). Methods to study algorithms and users’ understanding of algorithms are related, since one first needs to really know what an algorithmic system does (sometimes difficult due to complexity or the type of method, e.g. machine learning) in order to present it. Many algorithms are also not open source (sometimes even considered trade secrets). Therefore, effort from companies is necessary so that users and policy makers are able to understand what is happening. Currently most companies do not explain their algorithms, so persuasion and policy are needed. Open-sourcing code is not enough, since not everyone understands code; companies should also be required to explain very important algorithms (e.g. sorting on Facebook).
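To illustrate the first approach, here is a minimal sketch (in Python, chosen only for illustration; not part of the course material) of what “recoding an algorithm” could look like, using a simple sort algorithm as the example:

```python
# Hypothetical exercise: re-implementing a textbook algorithm
# (here: insertion sort) to test whether one really understands it.
def insertion_sort(items):
    """Return a new list with the elements of `items` in ascending order."""
    result = list(items)  # copy, so the input stays untouched
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        # Shift larger elements one slot to the right ...
        while j >= 0 and result[j] > current:
            result[j + 1] = result[j]
            j -= 1
        # ... and insert the current element into the gap.
        result[j + 1] = current
    return result

print(insertion_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

If the re-implementation works and one can explain each step, that is some evidence of having understood the algorithm; the same exercise is much harder for complex or machine-learned systems.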

Next we discussed differences between human and algorithmic work in the context of algorithmic aversion, starting with the benefits of human doctors compared to online alternatives (e.g. WebMD):

  • Accountability is easier to determine with humans, because algorithmic systems are usually complex assemblages (“black boxes”), which makes accountability hard to trace.
  • Good care for people requires “human connection”.
  • Algorithms are perceived as objective, in contrast to doctors, but algorithmic decisions are also based on data which could contain problematic bias.

We then asked ourselves: “Do doctors actually want to use (algorithmic) decision support systems?” Apart from the reasons mentioned above, some of the superiority/prestige associated with being a doctor would be lost if algorithms supported or replaced them. A study showed that doctors are much less willing than pilots to take advice from lower-ranking colleagues. Next we came up with examples of algorithmic aversion in other businesses.

According to a study on the course reading list, many people strongly believe in the benefits of automated plagiarism checks. They are seen as objective and replace work that many humans do not really want to do, and are therefore accepted. A counterexample is AI-composed music: its creators could not find good musicians who wanted to play it, because it “had no soul”. Music is more than sound. It is very context-dependent and intertwined with culture; it is connected to social class, stories, cultural identities, …

Then we talked about an algorithmic system in the Austrian context: ELGA, a system to centralize personal medical data in Austria. There has been a lot of resistance from medical personnel and from some data protection activists.

Pro arguments:

  • More transparency for patients and lawmakers regarding the activities of doctors
  • A permanently stored, online-accessible list of medications is available to patients. This may enable people to extract their own data for more evidence-based and personalized therapy.
  • Aggregated medical data is useful for basic research
  • It may help with identifying spending issues and thereby potentially save money

Con arguments:

  • Fear of disclosing health information to other doctors (especially if they are associated with corporations)
  • The collection of data usually leads to secondary uses. In this case insurance companies could be interested in the data.
  • Every computer system is also susceptible to hacks and other online attacks.
  • The online interface for controlling one’s data will only be accessible to some people (digital divide).

Another issue is that ELGA is opt-out and most people do not actually know about it. What would people need to know about ELGA for their consent to be informed? An overview of the process and its consequences would be useful, but not in too much depth, which would probably be too much information. Interested people should be able to dig deeper. A risk assessment should be communicated to people, e.g.: What is recorded? Who has access? How can I control it?
In conclusion, knowing the effects is more important than knowing the inner workings of algorithms, but it is very hard to determine the effects of things, especially of complex algorithms. An interdisciplinary approach would be necessary.

What policies would be necessary for companies to ensure informed consent of users?

  • Oversight should scale with impact: algorithms that are used by or have consequences for many people need strong oversight, while algorithms that affect few people need little.
  • Expert oversight of algorithms should be required (consumer protection), perhaps also in the form of an audit of a presentation of the inner workings. Audits could be done by specialized companies or governmental organisations. A system of “checks and balances” is needed for algorithms.
  • Inform users about the consequences and about what could be done with their personal data
  • End user licence agreements should be available in accessible language

We concluded with a short discussion about possible futures for algorithms in society. Popular discourse is full of promises about the replacement of human labour with machines and algorithms. An example is an ad by National Nurses United against robotic replacements: https://soundcloud.com/national-nurses-united
In the discussion we came to the conclusion that in the far future (everything will still take some time) there will probably still be jobs for humans, but they will be different jobs. The trend indicates a stronger divide (inequality) between the very well educated and the rest. Skills like repairing could be lost for good due to the cheap availability of new products, while jobs concerned with personal pleasure, such as in the tourism sector, could become more important. We need to ask questions such as: “What do we want to have automated?” and “What are human jobs?” In such futures, algorithms should support people; humans should be responsible and make the final decisions (“humans in the loop”).

Currently a lot of the technology being built has no oversight and centralises power and decision making. Many new technologies are understood only by tech experts, and not enough time is spent on the people who are not experts but have to live with these technologies due to social and economic pressure (e.g. smart homes, smart cities, automated care). More public discussion of these developments is needed to ensure they are democratic and inclusive.

Discussions about a possible Singularity seem far-fetched at the moment and are mostly conducted by very privileged people who do not have to deal much with systemic problems such as sexism, racism, and so on. Attention should be focused more on these basic social problems we have now as a society rather than on problems in a far-away future. We should look at the people who profit the least from a system to learn what is wrong with it and how to improve it.
