NOTE: Try not to spend more than 12 minutes or so on any question

  • 240 to 300 words per question; you need counter arguments for a good response :SOB:

Q1

What is the difference between explainability and interpretability? In what ways might XAI be helpful or unhelpful? google this

Response

https://www.ibm.com/think/topics/explainable-ai Notes:

  • both terms describe the idea of building trust and confidence when pushing AI to production
  • interpretability: understanding the inner workings and logic of the AI - transparency in the model itself - how it makes predictions
    • the success rate at which humans can predict the result of an AI output
    • Interpretable AI: like a decision tree or linear regression
  • explainability: providing reasons for the output - transparency in the decision-making process - why it makes predictions
    • characterize accuracy and fairness
    • further
    • looks at results after computed
    • Complex: neural network - can explain why, but can’t trace every individual step
  • give a good comparison analogy
    • For a dish you’ve been served at a restaurant
      • interpretability is the chef giving you a recipe with step-by-step instructions
        • you understand the logic of the result: even if you aren’t a chef, you understand how the final dish is achieved
      • explainability is the chef giving a broad explanation for why it tastes the way it does
        • e.g. using a blend of spices, slowly cooking the meat for tenderness
        • you get high-level reasoning for why it tastes the way it does, even if you don’t get a complete breakdown.
  • XAI (Explainable Artificial Intelligence)
    • helpful reasons
      • gives users a better understanding of why AI systems make the choices they do, building confidence in the results they produce and letting users decide for themselves when not to trust the AI.
    • unhelpful reasons
      • sometimes a better understanding of the AI doesn’t actually help.
      • pushing for explainability may cost accuracy or performance, e.g. a simpler model like a decision tree is easier to understand but less accurate.

Q2

We can remove discrimination by removing all group membership information from the dataset (for example, by removing gender data), and the model would become fair to different gender groups. Similarly, we can remove information about age or race. Do you agree or disagree with this approach of fairness through unawareness? Why?

Although fairness through unawareness may remove the biases that come directly with attributes like age, race or gender, it is important to remember that those patterns still exist in the data even when we avoid drawing conclusions from the demographics explicitly.

  • E.g. some demographics could be under-represented in the input data, so the model becomes biased to favour the gender groups it receives more data from, rather than that data being genuinely more informative

There have been instances where being unaware actually resulted in more discrimination against those groups:

  • e.g. women tend to word their resumes differently from men, and CV screeners trained on past hires ended up screening out women’s CVs more often than men’s
  • e.g. race and/or religion may be linked to certain parts of a city

Decisions should be made on merits:

  • they should still be known
  • it’s not fair: proxy attributes tend to correlate with protected attributes even when unawareness is the goal, and legacy data can make a model very biased, as it picks up on subtle signals in the data and falsely associates groups with outcomes

Example:

  • Amazon rolled out its same-day delivery service to Boston but excluded Roxbury
  • 80% of Roxbury’s population was Black or Latino, and it was also one of the poorest neighbourhoods in Boston
  • the factors used were likely correlated with economic data and race, so when the ML model drew up a service map, it likely discriminated against certain areas under the belief those residents couldn’t afford the service

hiring models have also discriminated against people with ethnic names, or against people assumed (from name or appearance) to have less fluent English
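
The proxy-attribute problem above can be demonstrated with a tiny simulation. This is a hypothetical sketch (made-up groups, postcodes and rates, loosely mirroring the Roxbury example): the "unaware" model never sees group membership, yet its offers still split sharply along group lines, because postcode acts as a proxy.

```python
import random

random.seed(0)

# Hypothetical population: postcode correlates strongly with group
# membership (an assumed 90/10 split, for illustration only).
def make_person():
    group = random.choice(["X", "Y"])
    in_majority_postcode = random.random() < 0.9
    postcode = (("A" if in_majority_postcode else "B") if group == "X"
                else ("B" if in_majority_postcode else "A"))
    return {"group": group, "postcode": postcode}

people = [make_person() for _ in range(10_000)]

# "Fairness through unawareness": the model only sees postcode, never
# group -- like a same-day-delivery service map.
def unaware_model(person):
    return "offer" if person["postcode"] == "A" else "no offer"

# Yet outcome rates still differ sharply by the removed attribute.
def offer_rate(group):
    members = [p for p in people if p["group"] == group]
    return sum(unaware_model(p) == "offer" for p in members) / len(members)

print(f"group X: {offer_rate('X'):.2f}, group Y: {offer_rate('Y'):.2f}")
# roughly 0.90 vs 0.10 -- the bias survives removing the group column
```

Dropping the `group` column changed nothing here, which is the core objection to fairness through unawareness: the correlated signal is still in the remaining features.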

Q3

What does everyday leadership development look like? Give an example (either actual or counterfactual) from your Group Project in terms of Positive Organisational Scholarship. In your answer, place yourself in the role of hypothetical Group Project Manager. need to look up what positive organisational scholarship is

Leadership is not about a person who holds the position of a manager or leader but about influencing people and processes to further a collective aim or group goal

  • the responsibility of an individual
  • people taking initiative and, through their own agency, agendas and actions, bringing about the development of themselves and others.

What does everyday leadership development look like:

  • Everyday leadership comes in many forms
    • organising and allocating different tasks
    • helping others out with different tasks that they might be struggling with
    • giving words of encouragement during times of low morale or just sadness, even to one person
  • All about furthering this idea of helping oneself to help others.
  • Fundamentally, the idea of developing leadership in everyday life is deeply interconnected with personal development and discovery: finding a way to do one’s personal best and, in turn, doing one’s best for others and helping others

Give an example in terms of positive organisational scholarship:
  • Positive Organisational scholarship
    • Commitment to revealing and nurturing the highest level of human potential
      • what makes employees feel like they’re thriving
      • bring organisation through difficult times
      • creating positive energy
  • Examples of this during group project
    • Simple: simply arranging workloads, breaking down the project so people are clear on objectives and deadlines
    • Complex: one can display leadership even without a leadership position. Asking questions, engaging with ideas, supporting and exploring ideas, boosting people’s morale and keeping people focussed on a collective goal are all things anyone can do, and they help a person develop into a good leader


Q4

What is ethical principlism? Is it useful, dangerous, or both? Why? need to google this

Discussed in the Principles of Biomedical Ethics book (Beauchamp and Childress). Four core principles that guide decision making.

  1. Autonomy – respecting a person’s right to make their own decisions.
  2. Beneficence – promoting the well-being of others.
  3. Nonmaleficence – avoiding harm to others.
  4. Justice – treating people fairly and equitably.

These principles are useful as they supply basic grounding for how individuals should approach problems. They are fairly broad, so they aren’t restrictive in what they ask for, they cover many scenarios, and they mostly discourage negative behaviour. Because they are broad, they also leave room for individual interpretation: they serve as guidelines that help professionals make informed decisions rather than rules that forbid particular choices, suggestions that steer decision making towards a moral outcome.

On the other hand, because these concepts are so broad while adhering to modern moral good, they struggle to help decide between two ‘morally good’ options or between two ‘morally bad’ options. In that sense ethical principlism can be unhelpful, but I’d argue that does not make it dangerous.

As such, I’d argue that ethical principlism is a helpful set of guidelines rather than a dangerous one, as it provides practical guidance without being too restrictive. need more here

Q5

Why does Munn claim that AI Ethics principles are meaningless, isolated, and toothless? Is he correct? Why?

ooh this is a very interesting article - worth bringing up the main points that he talks about

Munn argues that AI ethics is meaningless, isolated and toothless. Main points:

  • Meaningless
    • there are a lot of keywords used in AI ethics frameworks (safety, well-being, autonomy, privacy) that are kinda ambiguous
    • many contradictory meanings
    • Companies can claim they adhere to principles or ideals without meaningfully configuring their devices
    • “well-being” sounds good, but there is a lot to consider
      • an AI could promote the well-being of one group while undermining everyone else’s
      • this lack of clarity means the rules become useless
  • Isolated
    • the toxicity of tech culture and the propagation of sexism and misogyny
      • 60% of women in Silicon Valley report unwanted sexual advances
    • ethics is isolated from the tech industry and from the education of software engineers, which implies a lack of application
      • little consideration of ethical challenges, and that becomes reflected in practice
      • lack of integration of ethics into the curriculum and the industry
  • Toothless
    • lack of consequences
      • ethics is being used in place of regulation and asked to do something it was never designed to do in the real world
      • companies won’t have meaningful power to stop or veto projects
    • frameworks can set normative ideals but lack mechanisms to enforce compliance
    • not self-enforcing
      • companies try hard to outrun and avoid legislation, and if that fails, resist or try to overturn regulations
      • regulations take a while to put in place
    • lack of penalties for breaching principles
      • corporations can buffer their reputation by carrying out high-profile work on ethical frameworks

These are all important and extremely valid points, but accepting them does not mean abandoning the project. I agree that current attempts may be pointless or close to pointless; however, that is not to disregard the ways in which we can address things:

  • e.g. supplying proper meaning to ethical terms
  • ensuring that universities and businesses properly educate students and tackle the isolation of ethical concerns
  • ensuring that regulations actually get put into place

Q6

What are the Menlo Principles? Which type of normative ethics might be used to justify each of the principles? Why?

Need to flesh this out properly so i dont spend too much time on it.

Menlo Principles:

  • set of ethical guidelines developed to provide a framework for ethical practice in ICT research
  1. Respect for persons - participation as a research subject is voluntary and follows from informed consent. Research should treat people as autonomous agents and respect their right to determine their own best interests
    1. Kantian ethics - the categorical imperative - “treat persons never merely as a means, but always as an end”
  2. Beneficence
    1. do no harm; maximise probable benefits and minimise probable harms
    2. rule utilitarianism
  3. Justice
    1. each person deserves equal consideration in how they are treated, and the benefits of research should be fairly distributed according to individual need, effort, societal contribution and merit
    2. Kantian ethics - treat people equally and never merely as a means
  4. Respect for Law and Public Interest - engage in legal due diligence
    1. this is just rule utilitarianism

Q7

Assume Nihilistic Error Theory. How might the moral education of computer science students then proceed?

nihilistic error theory: moral statements are systematically false because they purport to describe objective moral facts and those facts don’t exist.

  • our moral language is based on a fundamental error. Basically, this course teaches people how to evaluate decisions and ethical issues

Moral education becomes focussed around:

  • less about right and wrong
  • more about outcomes, systems, language and strategy
  • a form of ethical engineering rather than moral preaching: not about creating virtuous people, but competent and socially aware people in a world full of moral beliefs, regardless of the truth of those beliefs.

how might the moral education of computer science then proceed?

If moral education proceeds assuming Nihilistic Error Theory, computer science students entering the industry can adhere to their own personal beliefs instead of having to adopt an ethical framework they don’t personally hold, or rejecting course content because it doesn’t align with their beliefs.

Instead, through a better understanding and analysis of moral frameworks and ethical outcomes, individuals can make more considered decisions that align with their own beliefs and with what they believe constitutes an ethical decision.

Q8

What are the risks and opportunities for our understanding and practice of moral responsibility given the rise of automated weapons systems in particular, and automated decision-making systems in general?

basically reworded:

  • find risks and opportunities for the understanding and practice of moral responsibility
  • consider the rise of automated weapon systems and automated decision-making systems

With the rise of automated weapon systems and automated decision-making systems, it has become particularly important to be aware of moral responsibilities, as they shape how these systems operate and how heavy the consequences are if and when these systems malfunction.

Automated decision-making systems and malfunction:

  • COMPAS
    • real people were assessed on how likely they were to re-offend, which influenced how harsh their sentences should be.
    • the data it was fed and the way the algorithm worked resulted in it being heavily biased against African-Americans.
    • people’s lives were impacted: white people with significantly worse crimes were labelled low risk and given lighter sentences than others with lesser crimes.
  • Automated weapon systems
    • weapons that can track targets and fire autonomously without human intervention
    • When misused, can result in the loss of human life or even massive environmental damage.
    • The consequences from a misfire are drastic
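
The COMPAS point above can be sketched with a toy fairness check (entirely made-up records, not real COMPAS data): even a model that looks reasonable overall can have very different false positive rates across groups, which is the disparity ProPublica's analysis measured.

```python
# Made-up (group, predicted_high_risk, actually_reoffended) records,
# purely illustrative -- not drawn from any real dataset.
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("B", True,  True),  ("B", False, False), ("B", False, False), ("B", False, True),
]

def false_positive_rate(group):
    # Among people in `group` who did NOT re-offend,
    # what fraction were still flagged high risk?
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

print(false_positive_rate("A"))  # group A: flagged despite not re-offending
print(false_positive_rate("B"))  # group B: far lower rate in this toy data
```

Auditing per-group error rates like this, rather than only overall accuracy, is one concrete way moral responsibility can be practised when deploying automated decision-making systems.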

As such, when working with these automated systems the risks are incredibly high, and it’s incredibly important for moral responsibility and understanding to grow alongside the potential consequences of those risks.

On the other hand, these automated systems also offer a lot of potential. Currently, many automated decision-making systems achieve their best results when used in conjunction with human oversight: the automation reduces the work a human needs to contribute, while the human is there to catch mistakes when the machine malfunctions or suffers from heavy bias

  • we want a combination of high human control and high automation to achieve the best results