State-of-the-art results in image inpainting are obtained with patch-based methods that fill in the missing region patch by patch, searching for similar patches in the known region and placing them at the corresponding locations. In this paper, we introduce a context-aware patch-based inpainting method, where the context is represented by texture and color features of a block surrounding the patch to be filled in. We use this context to identify other blocks in the image with similar features, and we then restrict the search for similar patches to those blocks. This approach guides the search process toward less ambiguous matching candidates while also speeding up the algorithm. Experimental results demonstrate the benefits of the proposed context-aware approach in terms of both inpainting quality and computation time.
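The core idea of restricting the patch search by context can be sketched as follows. This is a minimal illustration, not the authors' implementation: the block size, the choice of color feature (per-channel mean and standard deviation), and the number of retained candidate blocks are all illustrative assumptions.

```python
# Hedged sketch: restrict patch search to image blocks whose color
# features resemble the context block around the target patch.
# Feature choice and parameters below are assumptions for illustration.
import numpy as np

def block_features(block):
    # Simple color feature: per-channel mean and standard deviation.
    return np.concatenate([block.mean(axis=(0, 1)), block.std(axis=(0, 1))])

def candidate_blocks(image, context, block_size=16, k=3):
    """Return the top-k blocks most similar (in feature space) to `context`."""
    h, w, _ = image.shape
    target = block_features(context)
    scored = []
    # Score every non-overlapping block by feature distance to the context.
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            block = image[y:y + block_size, x:x + block_size]
            dist = np.linalg.norm(block_features(block) - target)
            scored.append((dist, y, x))
    scored.sort(key=lambda t: t[0])
    return [(y, x) for _, y, x in scored[:k]]

# Usage: the patch-level similarity search then scans only the returned
# blocks instead of the entire known region.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
ctx = img[16:32, 16:32]  # context block around the missing patch
print(candidate_blocks(img, ctx))
```

In a full inpainting pipeline, the expensive per-patch comparison would then run only inside these candidate blocks, which is what reduces both ambiguity and runtime.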