This lesson covers the ethical, moral, and environmental issues surrounding computing for the OCR A-Level Computer Science (H446) specification, section 1.5. Technology has profound impacts on society, and you need to understand both the benefits and the challenges.
An ethical framework is a set of principles used to guide decision-making about what is right and wrong. Four frameworks are commonly applied to computing:

| Framework | Core Idea | Application to Computing |
|---|---|---|
| Utilitarianism | The right action is the one that produces the greatest good for the greatest number. | Is mass surveillance justified if it prevents terrorism? |
| Deontological (Kantian) | Some actions are inherently right or wrong, regardless of consequences. Rules must be followed. | Hacking is wrong even if done to expose a corrupt organisation. |
| Virtue ethics | Focus on the character and intentions of the person, not just the action or its consequences. | A programmer should develop virtues like honesty and responsibility. |
| Rights-based | Every person has fundamental rights (e.g., privacy, freedom of expression) that must be respected. | Individuals have a right to privacy, even online. |
Exam Tip: When discussing ethical issues, refer to specific frameworks by name. Showing you can apply different ethical lenses to the same scenario demonstrates higher-level thinking and will earn more marks.
Artificial intelligence (AI) raises unique ethical challenges, beginning with bias, which can enter a system at several stages:

| Issue | Description |
|---|---|
| Training data bias | AI models learn from historical data, which may contain biases (e.g., racial, gender). |
| Algorithmic bias | The algorithm itself may amplify or create biases in its decisions. |
| Outcome bias | AI decisions may disproportionately affect certain groups. |

Real-world cases show these biases in practice:

| Example | Issue |
|---|---|
| Recruitment AI | Amazon's AI hiring tool was found to be biased against women because it was trained on historically male-dominated applicant data. |
| Facial recognition | Studies show higher error rates for people with darker skin tones. |
| Criminal justice | Risk assessment algorithms used in US courts showed racial bias in predicting recidivism. |
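
The mechanism behind training data bias can be sketched in a few lines of Python. In this toy model (the data and the "hiring model" are invented purely for illustration), a system that simply learns the historical approval rate for each group will reproduce whatever skew its training data contains:

```python
from collections import defaultdict

# Hypothetical historical decisions: (group, hired). The history is
# skewed: group A was hired far more often than group B.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

def train(records):
    """Learn the hiring rate per group by counting past outcomes."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        total[group] += 1
        hired[group] += outcome
    return {g: hired[g] / total[g] for g in total}

def predict(model, group, threshold=0.5):
    """Recommend hiring if the learned rate exceeds the threshold."""
    return model[group] >= threshold

model = train(history)
print(model)                 # {'A': 0.8, 'B': 0.2}
print(predict(model, "A"))   # True  — bias in the data becomes bias in output
print(predict(model, "B"))   # False
```

No deliberately unfair rule was written anywhere: the bias in the output comes entirely from the data the model was trained on, which is exactly the problem the Amazon recruitment example illustrates.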

Bias is not the only concern; transparency and accountability also raise difficult questions:

| Concept | Description |
|---|---|
| Black box problem | Many AI systems (especially deep learning) make decisions that cannot be easily explained. |
| Explainable AI (XAI) | The movement to make AI systems transparent and their decisions understandable. |
| Accountability | If an AI makes a harmful decision, who is responsible? The developer? The user? The company? |
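
One way to see what "explainable" means in practice: a simple linear scorer (the features and weights below are hypothetical) can report exactly how each input contributed to its decision, whereas a deep network's internal weightings admit no such direct reading.

```python
# Hypothetical features and weights for a transparent scoring model.
WEIGHTS = {"experience_years": 3, "test_score": 2}

def score(applicant):
    """Return the total score and a per-feature explanation."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

total, explanation = score({"experience_years": 5, "test_score": 7})
print(total)        # 29
print(explanation)  # {'experience_years': 15, 'test_score': 14}
```

Because every decision decomposes into per-feature contributions, a rejected applicant can be told precisely why, which is the kind of transparency the XAI movement aims for.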

Some applications pose genuine ethical dilemmas:

| Scenario | Ethical Question |
|---|---|
| Self-driving cars | If a crash is unavoidable, how should the car decide who to protect? |
| Medical diagnosis | Should an AI be allowed to make life-or-death diagnoses without human oversight? |
| Content moderation | Should AI decide what content is acceptable on social media? |

Automation and AI are also transforming employment:

| Impact | Description |
|---|---|
| Job displacement | Automation replaces human workers in routine tasks (manufacturing, data entry, customer service). |
| Job creation | New jobs emerge in technology, AI development, data science, cybersecurity. |
| Job transformation | Existing roles change as workers use new tools and technologies. |
| Skills gap | Workers may need retraining to adapt to new roles. |

The impact of automation varies by sector:

| Sector | Automation Impact |
|---|---|
| Manufacturing | Robots replace assembly line workers. |
| Transport | Self-driving vehicles may replace drivers. |
| Retail | Self-checkout, online shopping reduce retail staff. |
| Finance | Algorithmic trading, automated accounting. |
| Healthcare | AI diagnosis, robotic surgery (assists rather than replaces). |