A comprehensive collection of Python tutorials designed to take you from beginner to advanced. Each tutorial includes clear explanations, practical code examples, and real-world applications.
01 — FEATURED TUTORIALS
Master the basics: variables, data types, control flow, functions, and error handling. Perfect for absolute beginners.
OOP is the mental model behind almost every large Python codebase. You'll learn to design classes with attributes and methods, use inheritance to avoid repeating yourself, and apply encapsulation to protect internal state. The course covers dunder (magic) methods — __init__, __repr__, __len__ and more — along with polymorphism, abstract base classes, and practical design patterns like Factory and Observer. Build a full mini-project by the end.
Go beyond built-in types and learn how data is truly organized in memory. You'll master Python's core structures — lists, tuples, sets, and dicts — then build your own stacks, queues, linked lists, and binary trees from scratch. Each lesson explains when to reach for each structure, how time complexity affects real programs, and how professional engineers think about data organization. Includes 30+ annotated code examples and challenge exercises.
Algorithms are the recipes computers follow — and understanding them separates good programmers from great ones. This tutorial walks you through sorting (bubble, merge, quicksort), searching (binary search, BFS, DFS), and recursion with step-by-step visual traces. You'll learn Big-O notation, analyze trade-offs, and implement every algorithm in clean Python. By the end, coding interview problems will feel approachable, not intimidating.
This is where Python gets truly elegant. Learn how decorators let you wrap and extend functions without touching their source — perfect for logging, caching, and access control. Discover how generators produce values on demand, keeping memory usage minimal even with massive datasets. You'll also cover context managers with with statements, itertools magic, and a gentle intro to metaclasses — the tool that powers frameworks like Django and SQLAlchemy.
Every real program reads and writes data. This tutorial covers the full spectrum: reading and writing text and binary files, parsing and generating JSON and CSV, using pickle for Python object persistence, and working with pathlib for modern path handling. You'll also learn about buffering, encoding pitfalls (UTF-8 vs. Latin-1), error handling with context managers, and how to safely handle large files without loading everything into RAM.
02 — LEARNING PATH
Every expert was once a beginner who got the basics right. Step 1 takes you from your very first print("Hello, World!") through variables, data types, operators, conditionals, loops, and functions — explaining the why behind each concept, not just the syntax. You'll write real programs from lesson one, building the muscle memory and mental models that make everything else click. No prior experience required — just curiosity and a text editor.
Once you know the syntax, you need to know how to organize data. This step dives into Python's built-in collections — lists, tuples, dicts, sets — and explains the performance trade-offs behind each. You'll learn list comprehensions, dictionary unpacking, and how to implement stacks, queues, and linked lists manually so you understand what's happening under the hood. Knowing the right structure for the job will make your code faster and cleaner every single time.
This is the biggest mindset shift in your Python journey. You'll stop thinking in isolated functions and start designing systems of objects that model the real world. Step 3 covers every OOP pillar — classes, inheritance, encapsulation, and polymorphism — plus Python-specific magic like dunder methods and properties. By the end you'll know how to structure a 500-line project so it stays readable, testable, and easy to extend six months later.
This step separates Python programmers from Pythonic programmers. You'll learn to write decorators that add behavior to any function in two lines, create generators that stream a million records without using a gigabyte of RAM, and use context managers to guarantee clean resource handling. Itertools and functools unlock functional patterns that turn 20-line loops into single expressive expressions. These tools are what professional Python code looks like in production.
Knowledge only sticks when you build something real. Step 5 guides you through four complete projects: a command-line task manager that uses OOP and file persistence, a web scraper with requests and BeautifulSoup, a CSV data analyzer that generates charts with matplotlib, and a REST API client that fetches and displays live data. Each project is designed to showcase skills you can actually put in a portfolio or GitHub repo.
03 — FREE RESOURCES
Quick reference for syntax, built-ins, and common patterns
Write clean, readable, professional Python code
Manage dependencies with venv, pip, and requirements.txt
Write unit tests and practice test-driven development
TUTORIAL — OBJECT-ORIENTED PROGRAMMING
Object-Oriented Programming (OOP) is a way of structuring code around objects — bundles of data and behavior that model real-world things. Instead of writing one long script, you design reusable blueprints called classes, then create instances of them. Python is built around OOP, and once it clicks, every codebase you read will suddenly make sense.
A class is a blueprint. An instance is an object built from that blueprint. Think of a class as a cookie cutter and instances as the cookies. The special method __init__ runs automatically whenever you create a new instance — it's where you set up the object's initial state.
```python
class Dog:
    # __init__ is the constructor — runs when you do Dog(...)
    def __init__(self, name, breed, age):
        self.name = name    # instance attribute
        self.breed = breed
        self.age = age

    def bark(self):
        return f"Woof! My name is {self.name}!"

    def birthday(self):
        self.age += 1
        return f"{self.name} is now {self.age} years old."

# Creating instances
rex = Dog("Rex", "German Shepherd", 3)
buddy = Dog("Buddy", "Labrador", 5)

print(rex.bark())        # Woof! My name is Rex!
print(buddy.birthday())  # Buddy is now 6 years old.
print(rex.breed)         # German Shepherd
```
💡 What is self?
self refers to the specific instance calling the method. When you write rex.bark(), Python automatically passes rex as self. Every instance method must have self as its first parameter — it's how the method knows which object's data to work with.
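A two-line sketch makes the equivalence concrete: calling the method on the instance and calling it through the class with the instance passed explicitly do exactly the same thing (a minimal stand-in class, not the full example above):

```python
class Dog:
    def __init__(self, name):
        self.name = name

    def bark(self):
        return f"Woof! My name is {self.name}!"

rex = Dog("Rex")

# These two calls are equivalent — the instance becomes self
print(rex.bark())      # Woof! My name is Rex!
print(Dog.bark(rex))   # Woof! My name is Rex!
```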
Inheritance lets a new class (child) take on all the attributes and methods of an existing class (parent), then extend or override them. This is the "Don't Repeat Yourself" principle in action. In Python, you pass the parent class in parentheses when defining the child.
```python
class Animal:
    def __init__(self, name, sound):
        self.name = name
        self.sound = sound

    def speak(self):
        return f"{self.name} says {self.sound}!"

class Dog(Animal):  # Dog inherits from Animal
    def __init__(self, name, breed):
        super().__init__(name, "Woof")  # call parent __init__
        self.breed = breed

    def fetch(self):
        return f"{self.name} fetches the ball!"

class Cat(Animal):
    def __init__(self, name, indoor):
        super().__init__(name, "Meow")
        self.indoor = indoor

    def speak(self):  # override parent method
        return f"{self.name} purrs softly... meow."

dog = Dog("Rex", "Husky")
cat = Cat("Luna", indoor=True)
print(dog.speak())  # Rex says Woof! (from Animal)
print(cat.speak())  # Luna purrs softly... meow. (overridden)
print(dog.fetch())  # Rex fetches the ball!
```
⭐ Key Rule
Always call super().__init__(...) inside the child's __init__ to properly initialize the parent's attributes. Forgetting this is a very common bug when starting with inheritance.
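Here is a minimal sketch of that bug (the BrokenDog class is hypothetical, written only to show the failure mode):

```python
class Animal:
    def __init__(self, name):
        self.name = name

class BrokenDog(Animal):
    def __init__(self, breed):
        # Bug: no super().__init__(...) call — self.name is never set
        self.breed = breed

d = BrokenDog("Husky")
try:
    print(d.name)
except AttributeError as e:
    print("Bug:", e)  # 'BrokenDog' object has no attribute 'name'
```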
Encapsulation means hiding the internal details of an object and only exposing what's necessary. In Python, prefix an attribute with _ (convention: treat as private) or __ (name-mangled, harder to access from outside). Use @property to create controlled getters and setters.
```python
class BankAccount:
    def __init__(self, owner, balance=0):
        self.owner = owner
        self.__balance = balance  # "private": __ prefix triggers name mangling

    @property
    def balance(self):  # getter
        return self.__balance

    @balance.setter
    def balance(self, amount):  # setter with validation
        if amount < 0:
            raise ValueError("Balance cannot be negative")
        self.__balance = amount

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("Deposit must be positive")
        self.__balance += amount
        return self.__balance

    def withdraw(self, amount):
        if amount > self.__balance:
            raise ValueError("Insufficient funds")
        self.__balance -= amount
        return self.__balance

acc = BankAccount("Alice", 1000)
print(acc.balance)  # 1000 (via @property getter)
acc.deposit(500)
print(acc.balance)  # 1500
# acc.__balance     # AttributeError — name-mangled to _BankAccount__balance,
#                   # so it's hidden by convention, not truly inaccessible
```
Dunder methods (double-underscore methods like __str__, __len__, __add__) let your objects respond to Python's built-in operations. When you write len(myobj), Python calls myobj.__len__(). When you print an object, it calls __repr__ or __str__. Implementing these makes your custom classes behave like native Python types.
```python
class Vector:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):  # shown in console/debugger
        return f"Vector({self.x}, {self.y})"

    def __str__(self):   # shown by print()
        return f"({self.x}, {self.y})"

    def __add__(self, other):   # v1 + v2
        return Vector(self.x + other.x, self.y + other.y)

    def __mul__(self, scalar):  # v * 3
        return Vector(self.x * scalar, self.y * scalar)

    def __eq__(self, other):    # v1 == v2
        return self.x == other.x and self.y == other.y

    def __len__(self):          # len(v) — returns dimension count
        return 2

v1 = Vector(1, 2)
v2 = Vector(3, 4)
print(v1 + v2)             # (4, 6)
print(v1 * 3)              # (3, 6)
print(v1 == Vector(1, 2))  # True
print(len(v1))             # 2
print(repr(v1))            # Vector(1, 2)
```
__repr__ is for developers (unambiguous), __str__ is for end users (readable). print() uses __str__ first.
__add__, __sub__, __mul__, __truediv__ enable + - * / on your objects.
__eq__, __lt__, __gt__, __le__, __ge__ power ==, <, > comparisons.
__len__, __getitem__, __contains__ make objects work with len(), indexing, and in.
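To illustrate that last group, here is a sketch of a hypothetical Playlist class implementing __len__, __getitem__, and __contains__. Note that __getitem__ alone also makes the object iterable — when __iter__ is absent, Python falls back to calling __getitem__ with 0, 1, 2, … until IndexError:

```python
class Playlist:
    def __init__(self, songs):
        self._songs = list(songs)

    def __len__(self):               # len(p)
        return len(self._songs)

    def __getitem__(self, index):    # p[i], slicing, and for-loop fallback
        return self._songs[index]

    def __contains__(self, song):    # "x" in p
        return song in self._songs

p = Playlist(["Intro", "Chorus", "Outro"])
print(len(p))        # 3
print(p[1])          # Chorus
print("Intro" in p)  # True
for song in p:       # iteration via the __getitem__ fallback
    print(song)
```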
TUTORIAL — DATA STRUCTURES
A data structure is a way of organizing data so you can use it efficiently. Choosing the right structure is one of the most impactful decisions you make when writing a program. The same task can be 1000× faster or slower depending on this choice. Python gives you powerful built-ins — and lets you build your own from scratch.
A list is Python's most versatile data structure. It holds an ordered collection of items (of any type) and lets you add, remove, and update them freely. Internally, a list is a dynamic array — it stores references to objects in contiguous memory, which is why index access is O(1) but searching for a value is O(n).
```python
# Creating and basic operations
fruits = ["apple", "banana", "cherry"]

# Indexing (O(1))
print(fruits[0])    # apple
print(fruits[-1])   # cherry (negative = from end)

# Slicing
print(fruits[0:2])  # ['apple', 'banana']

# Mutating
fruits.append("date")          # add to end — O(1)
fruits.insert(1, "blueberry")  # insert at index — O(n)
fruits.remove("banana")        # remove by value — O(n)
popped = fruits.pop()          # remove last item — O(1)
popped = fruits.pop(0)         # remove at index — O(n)

# Useful methods
fruits.sort()             # sort in place
fruits.reverse()          # reverse in place
print(len(fruits))        # number of elements
print("apple" in fruits)  # membership check — O(n)

# List comprehension — build a new list from an expression
squares = [x**2 for x in range(10)]
evens = [x for x in range(20) if x % 2 == 0]
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
print(evens)    # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```
A dictionary maps unique keys to values. Under the hood, Python uses a hash table — it hashes the key to find the storage slot, making lookups, inserts, and deletes nearly O(1) regardless of size. Dicts are ordered by insertion order since Python 3.7.
```python
student = {
    "name": "Alice",
    "age": 22,
    "gpa": 3.8,
    "courses": ["Math", "Physics"]
}

# Access (O(1))
print(student["name"])              # Alice
print(student.get("email", "N/A"))  # N/A (safe, no KeyError)

# Add / Update
student["email"] = "[email protected]"
student["age"] = 23

# Iterate
for key, value in student.items():
    print(f"  {key}: {value}")

# Dict comprehension
word_lengths = {word: len(word) for word in ["apple", "banana", "fig"]}
print(word_lengths)  # {'apple': 5, 'banana': 6, 'fig': 3}

# Counting occurrences (classic pattern)
text = "the cat sat on the mat"
words = text.split()
count = {}
for w in words:
    count[w] = count.get(w, 0) + 1
print(count)  # {'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1}
```
A stack is Last In, First Out (LIFO) — like a stack of plates. A queue is First In, First Out (FIFO) — like a line at a store. Python doesn't have dedicated built-in classes for these, but you can implement both using a list or the collections.deque for better performance.
```python
# ── STACK (LIFO) using list ──
stack = []
stack.append("page1")  # push
stack.append("page2")
stack.append("page3")
print(stack.pop())  # page3 (last in, first out)
print(stack.pop())  # page2

# ── QUEUE (FIFO) using deque ──
from collections import deque

queue = deque()
queue.append("customer1")  # enqueue
queue.append("customer2")
queue.append("customer3")
print(queue.popleft())  # customer1 (first in, first out)
print(queue.popleft())  # customer2

# ── CUSTOM STACK CLASS ──
class Stack:
    def __init__(self):
        self._data = []

    def push(self, item):
        self._data.append(item)

    def pop(self):
        return self._data.pop()

    def peek(self):
        return self._data[-1]

    def is_empty(self):
        return len(self._data) == 0

    def __len__(self):
        return len(self._data)
```
✅ Use deque, not list, for queues
Using list.pop(0) to dequeue is O(n) because all remaining items shift left. deque.popleft() is O(1). For queues, always use collections.deque.
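A quick timing sketch illustrates the gap. The absolute numbers are machine-dependent, but the deque version should win by a wide margin because each popleft() is constant time:

```python
import timeit
from collections import deque

n = 10_000

def list_queue():
    q = list(range(n))
    while q:
        q.pop(0)      # O(n) per dequeue — shifts every remaining element

def deque_queue():
    q = deque(range(n))
    while q:
        q.popleft()   # O(1) per dequeue

t_list = timeit.timeit(list_queue, number=1)
t_deque = timeit.timeit(deque_queue, number=1)
print(f"list.pop(0):     {t_list:.4f}s")
print(f"deque.popleft(): {t_deque:.4f}s")
```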
Big-O notation describes how an operation scales as the data grows. O(1) means constant time regardless of size — ideal. O(n) means it slows linearly. O(n²) means it slows dramatically. Knowing this lets you write code that doesn't break on large inputs.
| Structure | Access | Search | Insert | Delete |
|---|---|---|---|---|
| List (end) | O(1) | O(n) | O(1) | O(1) |
| List (middle) | O(1) | O(n) | O(n) | O(n) |
| Dictionary | O(1) | O(1) | O(1) | O(1) |
| Set | — | O(1) | O(1) | O(1) |
| Stack (list) | O(1) | O(n) | O(1) | O(1) |
| Queue (deque) | O(1) | O(n) | O(1) | O(1) |
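The search column of the table above can be checked empirically: membership tests against a list scan linearly, while a set hashes straight to the right slot. Timings are machine-dependent, but the ordering is robust:

```python
import timeit

items = list(range(100_000))
as_set = set(items)
target = 99_999  # worst case for the list: the last element

t_list = timeit.timeit(lambda: target in items, number=100)
t_set = timeit.timeit(lambda: target in as_set, number=100)
print(f"list membership: {t_list:.4f}s")   # O(n) scan, 100 times
print(f"set membership:  {t_set:.4f}s")    # O(1) hash lookup, 100 times
```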
TUTORIAL — ALGORITHMS
Algorithms are step-by-step procedures for solving problems. Understanding classic algorithms trains your mind to break down any problem logically. You'll also understand why Python's built-in sorted() is so fast — and when you might need something different.
Bubble sort repeatedly steps through the list, compares adjacent elements, and swaps them if they're in the wrong order. The largest unsorted element "bubbles up" to its correct position each pass. It's easy to understand but very slow — only useful for teaching concepts, not production code.
```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        swapped = False
        for j in range(0, n - i - 1):  # inner loop shrinks each pass
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]  # swap
                swapped = True
        if not swapped:
            break  # already sorted — early exit
    return arr

data = [64, 34, 25, 12, 22, 11, 90]
print(bubble_sort(data))  # [11, 12, 22, 25, 34, 64, 90]
# Complexity: O(n²) worst/avg, O(n) best (already sorted)
```
Merge sort splits the list in half, recursively sorts each half, then merges the two sorted halves. It's a classic divide-and-conquer algorithm. It guarantees O(n log n) in all cases, making it one of the most reliable sorting algorithms. Python's built-in sort (Timsort) is partially based on merge sort.
```python
def merge_sort(arr):
    if len(arr) <= 1:
        return arr  # base case
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # recurse on left half
    right = merge_sort(arr[mid:])  # recurse on right half
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result += left[i:]   # remaining elements
    result += right[j:]
    return result

data = [38, 27, 43, 3, 9, 82, 10]
print(merge_sort(data))  # [3, 9, 10, 27, 38, 43, 82]
# Complexity: O(n log n) always · Space: O(n)
```
Binary search is dramatically faster than scanning every item. Given a sorted list, it checks the middle element, then eliminates half the remaining search space each step. With 1 million items, a linear search takes up to 1,000,000 comparisons — binary search takes at most 20.
```python
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid         # found
        elif arr[mid] < target:
            low = mid + 1      # target is in the right half
        else:
            high = mid - 1     # target is in the left half
    return -1                  # not found

nums = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
print(binary_search(nums, 23))  # 5 (index 5)
print(binary_search(nums, 99))  # -1 (not found)

# Python's bisect module also provides binary search:
import bisect
idx = bisect.bisect_left(nums, 23)
print(idx)  # 5
```
⭐ Requirement
Binary search only works on sorted lists. If your list is unsorted, sort it first — the one-time O(n log n) sort quickly pays for itself once you run many O(log n) searches on the same dataset, compared with repeating an O(n) linear scan each time.
TUTORIAL — ADVANCED PYTHON
Decorators and generators are two features that make Python code concise, expressive, and scalable. Once you understand them, you'll find them everywhere — in web frameworks, testing libraries, data pipelines, and async code.
A decorator is a function that takes another function, wraps it in extra logic, and returns the result. The @decorator syntax is just shorthand for func = decorator(func). Because functions are first-class objects in Python, you can pass them around and nest them like any other value.
```python
import time
from functools import wraps

# ── A timing decorator ──
def timer(func):
    @wraps(func)  # preserves the original function's metadata
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)  # call the original function
        end = time.perf_counter()
        print(f"{func.__name__} took {end - start:.4f}s")
        return result
    return wrapper

@timer
def slow_sum(n):
    return sum(range(n))

slow_sum(10_000_000)  # slow_sum took 0.2341s

# ── A caching decorator ──
def memoize(func):
    cache = {}
    @wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025 — instant with caching!
# Without memoize this would take billions of operations
```
A generator is a function that uses yield instead of return. Each time you call next() on it, it runs until the next yield, produces a value, then pauses — saving all its state. This means you can iterate over a sequence of a billion items without ever holding all of them in memory at once.
```python
# ── Simple generator ──
def countdown(n):
    print("Starting countdown!")
    while n > 0:
        yield n  # pause here, hand n to the caller
        n -= 1
    print("Done!")

gen = countdown(3)
print(next(gen))  # Starting countdown! then 3
print(next(gen))  # 2
print(next(gen))  # 1 — "Done!" prints only once the generator is exhausted
                  # (i.e. on the next next() call, which raises StopIteration)

# ── Reading a huge file line by line (memory-efficient) ──
def read_large_file(path):
    with open(path) as f:
        for line in f:
            yield line.strip()  # one line at a time — never the full file

# Only one line lives in memory at any moment
for line in read_large_file("huge_log.txt"):
    if "ERROR" in line:
        print(line)

# ── Generator expression (like a list comprehension, but lazy) ──
squares_gen = (x**2 for x in range(1_000_000))  # bytes, not megabytes
print(next(squares_gen))  # 0
print(next(squares_gen))  # 1
```
A list of 1M integers uses ~8 MB. An equivalent generator expression uses ~200 bytes. Generators are essential for large data processing.
Generators compute values on demand. They won't even start until you call next() — great for expensive operations you might not always need.
Chain generators together like Unix pipes: results = filter(pred, map(transform, source)) — all lazy, all memory-safe.
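Such a pipeline can also be built from named generator stages. A sketch with three hypothetical stages — each one consumes the previous generator, so only a single line is ever in flight:

```python
def read_lines(lines):
    for line in lines:           # stand-in for a real file object
        yield line.strip()

def only_errors(lines):
    for line in lines:
        if "ERROR" in line:
            yield line

def extract_codes(lines):
    for line in lines:
        yield line.split()[-1]   # last token, e.g. the error code

log = ["INFO ok\n", "ERROR 500\n", "INFO ok\n", "ERROR 404\n"]
pipeline = extract_codes(only_errors(read_lines(log)))
print(list(pipeline))  # ['500', '404'] — nothing ran until list() pulled values
```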
TUTORIAL — FILE I/O & SERIALIZATION
Every real-world program needs to read and write data. Whether it's a config file, a dataset, or a cache of results — file I/O is unavoidable. Python makes it clean with context managers, the pathlib module, and excellent built-in support for JSON and CSV.
Always open files using a with statement (a context manager). It guarantees the file is properly closed even if an exception occurs — no resource leaks, no corrupted files.
```python
# Writing a file
with open("notes.txt", "w", encoding="utf-8") as f:
    f.write("Hello, file!\n")
    f.write("Second line.\n")

# Reading the entire file at once
with open("notes.txt", "r", encoding="utf-8") as f:
    content = f.read()
print(content)

# Reading line by line (memory-efficient for large files)
with open("notes.txt", "r", encoding="utf-8") as f:
    for line in f:
        print(line.rstrip())

# Appending without overwriting
with open("notes.txt", "a", encoding="utf-8") as f:
    f.write("Appended line.\n")

# Modern path handling with pathlib
from pathlib import Path

p = Path("notes.txt")
print(p.exists())  # True
print(p.suffix)    # .txt
text = p.read_text(encoding="utf-8")  # one-liner read!
p.write_text("New content", encoding="utf-8")
```
JSON (JavaScript Object Notation) is the universal data exchange format. Python's json module converts Python dicts/lists to JSON strings (json.dumps) and back (json.loads). Use json.dump / json.load to work directly with files.
```python
import json

user = {
    "name": "Alice",
    "age": 28,
    "skills": ["Python", "SQL", "Docker"],
    "active": True
}

# Serialize to a JSON string
json_str = json.dumps(user, indent=2)
print(json_str)
# {
#   "name": "Alice",
#   "age": 28,
#   ...
# }

# Save to a file
with open("user.json", "w") as f:
    json.dump(user, f, indent=2)

# Load from a file
with open("user.json", "r") as f:
    loaded = json.load(f)
print(loaded["name"])  # Alice
print(type(loaded))    # <class 'dict'>
```
CSV (Comma-Separated Values) is the standard for tabular data — spreadsheets, exports, datasets. Python's csv module handles quoting, escaping, and different delimiters automatically so you don't have to manually split strings.
```python
import csv

# Writing CSV
students = [
    {"name": "Alice", "grade": "A", "score": 95},
    {"name": "Bob", "grade": "B", "score": 82},
    {"name": "Carol", "grade": "A", "score": 91},
]
with open("students.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "grade", "score"])
    writer.writeheader()
    writer.writerows(students)

# Reading CSV
with open("students.csv", "r") as f:
    reader = csv.DictReader(f)
    for row in reader:
        print(f"{row['name']}: {row['score']}")
# Alice: 95
# Bob: 82
# Carol: 91
```
💡 newline="" matters
Always pass newline="" when opening CSV files on Windows. Without it, Python adds an extra \r to each row, resulting in blank lines between every record. This is one of the most common CSV bugs.
TUTORIAL — MODERN PYTHON
Type hints (introduced in PEP 484) let you annotate variables, function parameters, and return values with their expected types. Python doesn't enforce them at runtime, but they power IDE autocompletion, catch bugs before you run your code, and make large codebases infinitely easier to navigate. Combined with tools like mypy or pyright, type hints bring compile-time safety to a dynamic language.
Annotate variables with : type and functions with parameter types and a return type after ->. These annotations are stored in __annotations__ and used by type checkers, but Python itself ignores them at runtime — so adding them never breaks existing code.
```python
# Variable annotations
name: str = "Alice"
age: int = 30
pi: float = 3.14159
active: bool = True

# Function annotations
def greet(name: str, times: int = 1) -> str:
    return (name + " ") * times

def add(x: int, y: int) -> int:
    return x + y

# Functions that return nothing use -> None
def log_error(message: str) -> None:
    print(f"[ERROR] {message}")

print(greet("Hello", 3))  # Hello Hello Hello
print(add(4, 5))          # 9
```
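To see that hints really are inert metadata, inspect __annotations__ and call a hinted function with the "wrong" types — Python runs it without complaint (a type checker like mypy, not the interpreter, is what flags this):

```python
def add(x: int, y: int) -> int:
    return x + y

# Annotations are stored as plain data, never enforced at runtime
print(add.__annotations__)
# {'x': <class 'int'>, 'y': <class 'int'>, 'return': <class 'int'>}

print(add("a", "b"))  # ab — runs fine despite the int hints!
```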
For containers and composite types, Python 3.9+ lets you use built-in generics directly: list[str], dict[str, int]. Use Optional[T] (or T | None in 3.10+) for values that might be None, and Union when multiple types are valid.
```python
from typing import Callable, Optional, Union

# Python 3.9+: built-in generics
scores: list[int] = [95, 82, 78]
config: dict[str, str] = {"host": "localhost"}
matrix: list[list[float]] = [[1.0, 2.0], [3.0, 4.0]]

# Optional (value or None) — Python 3.10+ can write str | None
def find_user(user_id: int) -> Optional[str]:
    users = {1: "Alice", 2: "Bob"}
    return users.get(user_id)  # returns str or None

# Union — multiple allowed types (3.10+: int | float | str)
def stringify(value: Union[int, float, str]) -> str:
    return str(value)

# Callable types
def apply(func: Callable[[int], int], n: int) -> int:
    return func(n)

print(find_user(1))              # Alice
print(find_user(99))             # None
print(apply(lambda x: x**2, 5))  # 25
```
⭐ Run mypy to catch bugs before runtime
Install mypy with pip install mypy and run mypy yourfile.py. It will flag type mismatches, missing returns, and None dereferences before you ever execute the code.
dataclasses (Python 3.7+) auto-generate __init__, __repr__, and __eq__ from annotated fields — eliminating boilerplate OOP code. TypedDict adds type safety to dictionaries when you can't switch to classes. Together they make data modeling clean, explicit, and IDE-friendly.
```python
from dataclasses import dataclass, field
from typing import TypedDict

# ── dataclass: auto-generates __init__, __repr__, __eq__ ──
@dataclass
class Point:
    x: float
    y: float
    label: str = "point"  # default value

@dataclass
class Student:
    name: str
    grades: list[int] = field(default_factory=list)

    def average(self) -> float:
        return sum(self.grades) / len(self.grades) if self.grades else 0.0

# ── TypedDict: type-safe dictionaries ──
class UserProfile(TypedDict):
    id: int
    name: str
    email: str
    admin: bool

p = Point(1.0, 2.5)
print(p)                     # Point(x=1.0, y=2.5, label='point')
print(p == Point(1.0, 2.5))  # True (auto __eq__)

s = Student("Alice")
s.grades.extend([90, 85, 92])
print(s.average())           # 89.0
```
frozen=True — immutable dataclasses: instances can't be modified after creation. Great for value objects and cache keys.
field(default_factory=list) — use this for mutable defaults. Never write grades: list = []; that single list would be shared across all instances.
order=True — auto-generates __lt__, __le__, __gt__, __ge__ based on field order, enabling sorting.
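The mutable-default pitfall is worth seeing once. This sketch contrasts a plain function default (shared across every call!) with default_factory (a fresh list per instance); the names are illustrative:

```python
from dataclasses import dataclass, field

# WRONG: a plain mutable default is created once and shared forever
def enroll_bad(name, roster=[]):
    roster.append(name)
    return roster

print(enroll_bad("Alice"))  # ['Alice']
print(enroll_bad("Bob"))    # ['Alice', 'Bob'] — surprise! same list

# RIGHT: default_factory builds a fresh list per instance
# (a dataclass even refuses grades: list[int] = [] with a ValueError)
@dataclass
class Student:
    name: str
    grades: list[int] = field(default_factory=list)

a = Student("Alice")
b = Student("Bob")
a.grades.append(90)
print(a.grades)  # [90]
print(b.grades)  # [] — independent lists
```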
TUTORIAL — CONCURRENCY
Python's asyncio library enables concurrent code without threads. Instead of blocking while waiting for I/O (network calls, file reads, database queries), async code yields control back to the event loop so other tasks can run. This makes async Python extraordinarily efficient for web servers, API clients, and any I/O-bound workload — handling thousands of simultaneous connections with a single thread.
An async function (coroutine) is declared with async def. Inside it, await suspends execution until the awaited operation completes — without blocking the entire thread. Coroutines must be run by an event loop, either via asyncio.run() or from within another coroutine.
```python
import asyncio
import time

# ── Synchronous version — runs sequentially ──
def fetch_sync(url: str) -> str:
    time.sleep(1)  # blocks the entire thread
    return f"data from {url}"

# ── Async version — runs concurrently ──
async def fetch_async(url: str) -> str:
    await asyncio.sleep(1)  # yields control — non-blocking
    return f"data from {url}"

async def main():
    urls = ["api.github.com", "api.twitter.com", "api.weather.com"]
    # asyncio.gather runs all coroutines CONCURRENTLY
    results = await asyncio.gather(*[fetch_async(u) for u in urls])
    for r in results:
        print(r)
    # 3 requests complete in ~1s (concurrent) vs 3s (sequential)

asyncio.run(main())
```
💡 Sync vs Async execution time
Fetching 3 URLs synchronously takes 3 seconds (sequential). With asyncio.gather(), all 3 start simultaneously and the total time is just ~1 second — the duration of the slowest call.
aiohttp is the standard async HTTP library for Python. Use it instead of requests when you need non-blocking HTTP calls. The key pattern is an async context manager: async with aiohttp.ClientSession() as session — this ensures connections are properly cleaned up even if errors occur.
```python
import asyncio
import aiohttp  # pip install aiohttp

async def fetch_json(session: aiohttp.ClientSession, url: str) -> dict:
    async with session.get(url) as response:
        response.raise_for_status()  # raises on 4xx/5xx
        return await response.json()

async def fetch_all(urls: list[str]) -> list[dict]:
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_json(session, url) for url in urls]
        return await asyncio.gather(*tasks, return_exceptions=True)

async def main():
    apis = [
        "https://jsonplaceholder.typicode.com/posts/1",
        "https://jsonplaceholder.typicode.com/posts/2",
        "https://jsonplaceholder.typicode.com/users/1",
    ]
    results = await fetch_all(apis)
    for data in results:
        if not isinstance(data, Exception):
            print(data["title"][:40] if "title" in data else data["name"])

asyncio.run(main())
```
✅ return_exceptions=True is production-safe
By default, asyncio.gather() cancels all tasks if any one fails. Pass return_exceptions=True to receive exceptions as values instead — so one failed request doesn't abort all the others.
Python offers three concurrency models. Choosing the wrong one is a common performance mistake.
| Model | Best For | GIL? | Overhead |
|---|---|---|---|
| asyncio | I/O-bound (network, disk) | Yes, but yields | Very low |
| threading | I/O-bound, blocking libs | Yes (limited) | Medium |
| multiprocessing | CPU-bound (math, ML) | No (own process) | High |
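For the threading row, concurrent.futures gives a compact illustration: four simulated blocking calls finish in roughly the time of one when run on a thread pool (time.sleep stands in for a blocking network call; the URLs are placeholders):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> str:
    time.sleep(0.2)  # simulate a blocking network call
    return f"data from {url}"

urls = ["api.one", "api.two", "api.three", "api.four"]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    # All four sleeps overlap — the GIL is released while blocked on I/O
    results = list(pool.map(fetch, urls))
elapsed = time.perf_counter() - start

print(results)
print(f"4 blocking calls in {elapsed:.2f}s")  # ~0.2s, not 0.8s
```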
FAQ
With consistent practice, you can learn Python fundamentals in 4-6 weeks. Mastering advanced concepts like OOP, data structures, and frameworks typically takes 3-6 months. Our structured learning path helps you progress efficiently at your own pace.
Python is beginner-friendly and requires no prior programming experience. Basic computer literacy, problem-solving skills, and logical thinking are sufficient. Our fundamentals tutorial starts from absolute basics with no assumed knowledge.
Start with our Python Fundamentals tutorial, which covers variables, data types, control flow, functions, and basic input/output. Then progress to Data Structures and Object-Oriented Programming. Each tutorial builds on previous concepts with practical examples.
Currently, we don't offer certificates, but all our tutorials are completely free. We focus on providing high-quality, practical knowledge that you can immediately apply to real projects. Each tutorial includes exercises and projects to build your portfolio.
We update our tutorials regularly to reflect the latest Python versions (3.11+), best practices, and community standards. All tutorials were last updated in 2026 and include modern features like type hints, dataclasses, and pattern matching.