NLP technology is now embedded in virtually every sector of society, from healthcare and education to social media and law. This widespread adoption brings significant ethical responsibilities. This lesson surveys the major applications of NLP and examines the ethical challenges practitioners must address.
Machine translation (MT) automatically translates text between languages. The field has advanced through several generations of approaches:
| Generation | Approach | Example |
|---|---|---|
| 1st generation | Rule-based | Hand-crafted grammar rules |
| 2nd generation | Statistical (SMT) | Phrase-based translation |
| 3rd generation | Neural (NMT) | Seq2Seq with attention |
| Current | Transformer-based | Google Translate (Transformer), DeepL |
With the Hugging Face `transformers` library, a pre-trained translation model runs in a few lines:

```python
from transformers import pipeline

# English-to-German translation with a pre-trained MarianMT checkpoint
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
result = translator("Machine translation has improved dramatically.")
print(result[0]['translation_text'])  # prints the German translation
```
Question answering (QA) systems answer questions posed in natural language. Three broad types exist:

| Type | Description | Example |
|---|---|---|
| Extractive QA | Extracts an answer span from a given passage | SQuAD dataset |
| Abstractive QA | Generates a new answer (not just a span) | GPT-based QA |
| Open-domain QA | Finds answers from a large knowledge base | Google Search, Bing Chat |
An extractive QA model returns the answer span together with a confidence score:

```python
from transformers import pipeline

# Extractive QA: the model selects an answer span from the context passage
qa_pipeline = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = """
Natural Language Processing is a subfield of artificial intelligence
that focuses on the interaction between computers and humans through
natural language. The ultimate objective of NLP is to read, decipher,
understand, and make sense of human languages in a manner that is valuable.
"""

result = qa_pipeline(question="What is the objective of NLP?", context=context)
print(f"Answer: {result['answer']}")
print(f"Confidence: {result['score']:.4f}")
```
Text summarization condenses a document while preserving its key information. There are two families of approaches:

| Type | Description |
|---|---|
| Extractive | Selects the most important sentences from the original text |
| Abstractive | Generates new sentences that capture the key ideas |
BART fine-tuned on news articles is a common choice for abstractive summarization:

```python
from transformers import pipeline

# Abstractive summarization with BART fine-tuned on CNN/DailyMail articles
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = """
The Transformer architecture was introduced in 2017 in the paper
'Attention Is All You Need' by Vaswani et al. It replaced recurrent
neural networks with self-attention mechanisms, allowing for parallel
processing of input sequences. This innovation led to significant
improvements in machine translation and later became the foundation
for models like BERT, GPT, and T5, which have achieved state-of-the-art
results across virtually all NLP tasks.
"""

# max_length/min_length bound the summary length in generated tokens
summary = summarizer(article, max_length=60, min_length=20)
print(summary[0]['summary_text'])
```
Dialogue systems (chatbots and virtual assistants) chain together several components; a toy sketch follows the table.

| Component | Description |
|---|---|
| Intent recognition | Understanding what the user wants |
| Entity extraction | Identifying key information (names, dates, numbers) |
| Dialogue management | Tracking conversation state and deciding the next action |
| Response generation | Producing a natural language reply |
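To make the pipeline concrete, here is a minimal, hypothetical rule-based sketch. Every name in it (`INTENT_KEYWORDS`, `recognize_intent`, and so on) is invented for illustration; production systems replace each stage with a trained model.

```python
import re

# Hypothetical keyword table standing in for a trained intent classifier
INTENT_KEYWORDS = {
    "book_flight": ["flight", "fly", "ticket"],
    "check_weather": ["weather", "rain", "forecast"],
}

def recognize_intent(utterance: str) -> str:
    """Intent recognition: match keywords against the lowercased utterance."""
    lowered = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return intent
    return "unknown"

def extract_entities(utterance: str) -> dict:
    """Toy entity extraction: treat capitalised words as candidate place names."""
    return {"places": re.findall(r"\b[A-Z][a-z]+\b", utterance)}

def respond(state: dict) -> str:
    """Dialogue management + response generation: pick a reply from the state."""
    if state["intent"] == "book_flight" and not state["entities"]["places"]:
        return "Which city would you like to fly to?"
    if state["intent"] == "book_flight":
        return f"Booking a flight to {state['entities']['places'][0]}."
    return "Sorry, I didn't understand that."

utterance = "I want to book a flight to Paris"
state = {"intent": recognize_intent(utterance), "entities": extract_entities(utterance)}
print(respond(state))  # Booking a flight to Paris.
```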
Information extraction (IE) turns unstructured text into structured data, as sketched after the table.

| Task | Description |
|---|---|
| Named Entity Recognition | Identifying people, places, organisations |
| Relation Extraction | Finding relationships between entities |
| Event Extraction | Identifying events and their participants |
| Knowledge Graph Construction | Building structured knowledge from unstructured text |
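Named Entity Recognition, the usual first step, is available as a ready-made pipeline. The checkpoint below (`dslim/bert-base-NER`) is one publicly available choice, not the only option:

```python
from transformers import pipeline

# Token classification for NER; aggregation_strategy="simple" merges
# word pieces back into whole entity spans.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

text = "Tim Cook announced Apple's new campus in Austin, Texas."
for entity in ner(text):
    print(f"{entity['word']:12} {entity['entity_group']:5} {entity['score']:.2f}")
```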
NLP also powers automated content moderation on social media platforms, flagging hate speech, spam, and harassment at a scale human reviewers could not match.
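As a sketch, toxicity detection can be framed as ordinary text classification; `unitary/toxic-bert` is one publicly available checkpoint, used here as an illustrative assumption:

```python
from transformers import pipeline

# Toxicity detection framed as text classification; the pipeline
# returns the highest-scoring label with its probability.
moderator = pipeline("text-classification", model="unitary/toxic-bert")

for comment in ["Have a great day!", "You are an idiot."]:
    result = moderator(comment)[0]
    print(f"{comment!r} -> {result['label']} ({result['score']:.2f})")
```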
Language models learn from human-generated text, which encodes societal biases. These biases are reflected, and sometimes amplified, in model outputs.

| Bias Type | Example |
|---|---|
| Gender bias | Models associate professions with genders: "The doctor... he" vs. "The nurse... she" |
| Racial bias | Sentiment analysis scores African American English more negatively |
| Cultural bias | Models trained primarily on English/Western data may not represent other cultures fairly |
| Socioeconomic bias | Language associated with lower socioeconomic status may be rated as less professional |
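The classic demonstration is embedding arithmetic: "king - man + woman" lands near "queen" as intended, but Bolukbasi et al. (2016) found that "programmer - man + woman" landed near "homemaker". A sketch of the same probe, assuming gensim and its downloadable `glove-wiki-gigaword-100` vectors (any pre-trained word vectors would do):

```python
import gensim.downloader

# Pre-trained GloVe vectors (downloaded on first use, roughly 130 MB)
vectors = gensim.downloader.load("glove-wiki-gigaword-100")

# The intended analogy: king - man + woman ~ queen
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# The same arithmetic applied to a profession can surface gender bias
print(vectors.most_similar(positive=["programmer", "woman"], negative=["man"], topn=1))
```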
Masked language models absorb the same associations. Comparing the pronouns BERT predicts for otherwise identical sentences makes the bias visible:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Identical sentences apart from the profession; compare the pronoun predictions
print(fill_mask("The nurse said [MASK] would be right back."))
print(fill_mask("The engineer said [MASK] would be right back."))
```