I’ve read about Docker containers that can directly access the GPU for OpenGL/CUDA/OpenCL/etc. They obviously need to run on a Linux host, since otherwise direct access to the GPU wouldn’t be possible.
But… is it supported (either by Docker or by some alternative) to write your own translation layer for an API that you design yourself? For example, I only use a very small subset of OpenCL, so it would be straightforward for me to encapsulate all my OpenCL calls behind a small intermediate API. If Docker (or any other alternative) lets me write my own implementation of such a custom API for each host (Windows/Linux/Mac), that would be great.
Obviously, I know that Docker is open source and that you can modify its source to your liking. But I’m not asking about modifying Docker itself (which would be beyond my scope); I’m asking about an already supported mechanism for registering a custom API together with its implementation for each host.
Source: Docker Questions