Complete guide to Python's __aiter__ method covering asynchronous iteration, custom async iterators, and practical examples.
Last modified April 8, 2025
This comprehensive guide explores Python’s __aiter__ method, the special method that enables asynchronous iteration. We’ll cover basic usage, custom async iterators, practical examples, and common patterns.
The __aiter__ method defines an object’s behavior when used in an async for loop. It returns an asynchronous iterator object that must implement __anext__.
Key characteristics: __aiter__ must return an asynchronous iterator, that is, an object implementing __anext__. Since Python 3.8, __aiter__ must return that iterator directly rather than a coroutine, so it is written as a regular method (plain def), while __anext__ is an async function. It’s the async equivalent of __iter__.
Here’s a simple implementation showing how __aiter__ works with __anext__ to create an asynchronous iterator.
basic_aiter.py
import asyncio

class AsyncCounter:

    def __init__(self, stop):
        self.stop = stop
        self.current = 0

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.current >= self.stop:
            raise StopAsyncIteration
        self.current += 1
        return self.current - 1
async def main():
    async for i in AsyncCounter(3):
        print(i)

asyncio.run(main())
This example creates an async counter that yields numbers from 0 to 2. The __aiter__ method returns self, and __anext__ produces values until reaching the stop value.
Note the use of StopAsyncIteration instead of StopIteration, and that the async for loop may only appear inside a coroutine (an async context).
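To make the protocol concrete, here is a minimal sketch that drives the iterator by hand instead of through async for. It reuses the AsyncCounter class and the asyncio import from the example above:

async def drive():
    it = AsyncCounter(3).__aiter__()  # obtain the async iterator
    print(await it.__anext__())       # 0
    print(await it.__anext__())       # 1
    print(await it.__anext__())       # 2
    try:
        await it.__anext__()          # exhausted
    except StopAsyncIteration:
        print("iteration finished")

asyncio.run(drive())

This is exactly what async for does behind the scenes: call __aiter__ once, then await __anext__ repeatedly until StopAsyncIteration is raised.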
This example demonstrates a practical use case - fetching data from multiple URLs asynchronously while supporting async iteration.
async_fetcher.py
import asyncio

import aiohttp
class AsyncDataFetcher:

    def __init__(self, urls):
        self.urls = urls

    def __aiter__(self):
        self.session = aiohttp.ClientSession()
        self.index = 0
        return self
async def __anext__(self):
if self.index >= len(self.urls):
await self.session.close()
raise StopAsyncIteration
url = self.urls[self.index]
self.index += 1
async with self.session.get(url) as response:
data = await response.text()
return (url, len(data))
async def main():
    urls = [
        'https://python.org',
        'https://docs.python.org',
        'https://pypi.org'
    ]

    async for url, length in AsyncDataFetcher(urls):
        print(f"{url}: {length} bytes")
asyncio.run(main())
This async fetcher downloads web pages one at a time as the loop advances: __aiter__ sets up the HTTP session, and __anext__ fetches each URL in sequence.

The iterator manages resources by closing the session once iteration runs to completion. Each iteration returns a tuple of URL and content length. Note that if the consumer breaks out of the loop early, __anext__ never reaches the close call; the sketch below shows one way to guard against that.
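A minimal sketch of an early-exit-safe variant, assuming Python 3.10+ for contextlib.aclosing (the fetch_lengths helper is illustrative, not part of the example above). An async generator with try/finally guarantees the session is closed even if the consumer stops iterating:

import asyncio
import contextlib

import aiohttp

async def fetch_lengths(urls):
    session = aiohttp.ClientSession()
    try:
        for url in urls:
            async with session.get(url) as response:
                data = await response.text()
            yield (url, len(data))
    finally:
        await session.close()  # runs on completion, break, or error

async def main():
    # aclosing() closes the generator (running its finally block)
    # even though we leave the loop after the first item.
    async with contextlib.aclosing(fetch_lengths(['https://python.org'])) as gen:
        async for url, length in gen:
            print(f"{url}: {length} bytes")
            break

asyncio.run(main())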
Python 3.6+ allows using async generators, which automatically implement __aiter__ and __anext__.
async_gen.py
import asyncio

async def async_counter(stop):
    for i in range(stop):
        await asyncio.sleep(0.1)  # Simulate async work
        yield i
async def main():
    async for num in async_counter(5):
        print(num)
asyncio.run(main())
This async generator simplifies creating async iterators. Under the hood, it implements the same protocol as classes with __aiter__ and __anext__.
The generator pauses at each yield and await, making it perfect for producing values asynchronously. No explicit StopAsyncIteration is needed.
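A short sketch shows that the generator object really does carry the protocol methods. It assumes Python 3.10+ for the anext() builtin; on older versions, use gen.__anext__() instead:

async def demo():
    gen = async_counter(2)
    # An async generator is its own async iterator.
    assert gen.__aiter__() is gen
    print(await anext(gen))  # 0
    print(await anext(gen))  # 1
    # A third call would raise StopAsyncIteration automatically.

asyncio.run(demo())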
This example shows an async iterator that reads files in chunks, useful for processing large files without loading everything into memory.
async_reader.py
import asyncio

class AsyncFileReader:

    def __init__(self, filename, chunk_size=1024):
        self.filename = filename
        self.chunk_size = chunk_size

    def __aiter__(self):
        self.file = open(self.filename, 'rb')
        return self
async def __anext__(self):
data = await asyncio.to_thread(
self.file.read, self.chunk_size
)
if not data:
self.file.close()
raise StopAsyncIteration
return data
async def main():
    async for chunk in AsyncFileReader('large_file.dat'):
        print(f"Read {len(chunk)} bytes")
asyncio.run(main())
This reader processes files in chunks, using a worker thread for the blocking I/O operations. __aiter__ opens the file, and __anext__ reads each chunk.
The asyncio.to_thread runs the blocking file operation in a separate thread, keeping the async loop responsive. The iterator properly closes the file when done.
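To see the responsiveness claim in isolation, here is a small self-contained sketch; blocking_read and heartbeat are illustrative helpers, not part of the example above:

import asyncio
import time

def blocking_read():
    time.sleep(1)  # stands in for a slow disk read
    return b"data"

async def heartbeat():
    for _ in range(3):
        print("event loop is still responsive")
        await asyncio.sleep(0.3)

async def main():
    # to_thread moves the blocking call off the event loop thread,
    # so heartbeat() keeps printing while the "read" is in progress.
    data, _ = await asyncio.gather(
        asyncio.to_thread(blocking_read), heartbeat())
    print(data)

asyncio.run(main())

Note that asyncio.to_thread requires Python 3.9+; on earlier versions, loop.run_in_executor serves the same purpose.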
This example demonstrates an async iterator that produces items with rate limiting, useful for APIs with request limits.
rate_limited.py
import asyncio
import time

class RateLimitedProducer:

    def __init__(self, items, requests_per_second):
        self.items = items
        self.delay = 1.0 / requests_per_second

    def __aiter__(self):
        self.index = 0
        return self
async def __anext__(self):
if self.index >= len(self.items):
raise StopAsyncIteration
item = self.items[self.index]
self.index += 1
await asyncio.sleep(self.delay)
return item
async def main():
    items = [f"item_{i}" for i in range(10)]

    async for item in RateLimitedProducer(items, 2):  # 2 items/sec
        print(f"Processed {item} at {time.time()}")
asyncio.run(main())
This producer yields items at a controlled rate. The __anext__ method sleeps between items to maintain the desired pace.
The sleep duration is calculated from the desired rate (items/second). This pattern is useful when interacting with rate-limited APIs or services.
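The same pattern collapses to a few lines as an async generator (rate_limited here is a hypothetical equivalent of RateLimitedProducer, not code from the article above):

import asyncio

async def rate_limited(items, per_second):
    delay = 1.0 / per_second
    for item in items:
        await asyncio.sleep(delay)  # enforce the spacing between items
        yield item

async def main():
    async for item in rate_limited(range(5), 2):  # 2 items/sec
        print(item)

asyncio.run(main())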
Resource management: Clean up resources in __anext__ when iteration completes
Error handling: Handle and properly signal errors during iteration
Use async generators: Prefer them for simpler cases
Document behavior: Clearly document iteration patterns
Consider cancellation: Handle asyncio cancellation properly (see the sketch after this list)
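Cancellation usually surfaces as a CancelledError raised at an await inside the iterator, so a try/finally block is enough to guarantee cleanup. A minimal sketch (ticker and consume are hypothetical names):

import asyncio

async def ticker():
    n = 0
    try:
        while True:
            await asyncio.sleep(0.1)  # CancelledError is raised here
            yield n
            n += 1
    finally:
        print("ticker cleaned up")  # runs on cancellation too

async def consume():
    async for n in ticker():
        print(n)

async def main():
    task = asyncio.create_task(consume())
    await asyncio.sleep(0.35)
    task.cancel()  # propagates into the generator's await
    try:
        await task
    except asyncio.CancelledError:
        pass

asyncio.run(main())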
My name is Jan Bodnar, and I am a passionate programmer with extensive programming experience. I have been writing programming articles since 2007. To date, I have authored over 1,400 articles and 8 e-books. I possess more than ten years of experience in teaching programming.