Description
This is more of a question than an issue. In my case I have a websocket server using the Python websockets package. Since it's async, I decided to use this library, and my implementation works, at least on the surface. I ran some stress tests and immediately found that concurrency is a problem: it raises `asyncpg.exceptions._base.InterfaceError: cannot perform operation: another operation is in progress`. This isn't the library's fault; it's obvious what's going on. I am creating a connection object at the global scope of the script and using that same object throughout the entire program, including for all the different websocket connections. These connections are long-lived, and ideally I would like to support hundreds to thousands of websocket connections (users). A naive approach is opening a new asyncpg connection for every websocket connection, but I doubt it's smart to open thousands of database connections when serving thousands of websocket connections.
In the documentation I found connection pooling, but one concern is that its example uses a pool for short-lived connections in the context of an HTTP request, not a long-lived socket. My idea is to have a pool object at the global scope and acquire a connection from the pool for every database operation that happens during the life of a websocket connection, which under peak load is roughly once per second per connection. My concern is performance: does acquiring from the pool take time, or is it effectively instant? What happens under high load with lots of concurrent operations, where tasks are trying to acquire from the pool but other operations still haven't exited their `async with` block? Can multiple acquisitions be held concurrently, and roughly how many?
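For reference, here is a minimal sketch of the global-pool idea described above, assuming asyncpg and websockets are installed. The DSN, the `events` table, the query, and the `handle_connection`/`init_pool` names are all placeholders, not anything from the actual project.

```python
# Sketch: one global pool, acquired per operation, not per websocket.
# The DSN, table, and handler names below are hypothetical.
import asyncio

try:
    import asyncpg  # guarded so the sketch loads even without asyncpg installed
except ImportError:
    asyncpg = None

pool = None  # created once at startup, shared by every handler


async def init_pool():
    global pool
    # min_size/max_size bound how many real PostgreSQL connections exist,
    # regardless of how many websocket clients are connected.
    pool = await asyncpg.create_pool(
        dsn="postgresql://user:pass@localhost/db",  # placeholder DSN
        min_size=5,
        max_size=20,
    )


async def handle_connection(websocket):
    # Long-lived websocket handler: holds NO connection between operations.
    async for message in websocket:
        # Borrow a connection only for the duration of one query; it goes
        # back to the pool when the "async with" block exits.
        async with pool.acquire() as conn:
            row = await conn.fetchrow(
                "SELECT payload FROM events WHERE id = $1", int(message)
            )
        await websocket.send(str(row["payload"]) if row else "not found")
```

Each websocket connection stays cheap between queries this way, since a real database connection is only occupied for the milliseconds a query is actually running.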
I'm going to attempt this implementation, test it, and respond with my findings. But if someone else can offer insight into whether this is a good idea and whether it will scale well, it would be greatly appreciated.
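To get a feel for the contention question, the pool's acquire semantics can be modeled with just the stdlib: up to `max_size` acquisitions can be held concurrently, and extra acquirers queue until a connection is released, much like a semaphore. This toy model (the sizes and timings are made up for illustration) shows the queueing behavior:

```python
# Toy model of pool contention: an asyncio.Semaphore stands in for a pool
# with 5 connections, and each task "holds" a connection for 0.1 s
# (standing in for one query). All numbers here are illustrative.
import asyncio
import time

POOL_SIZE = 5
HOLD = 0.1  # seconds each task keeps its "connection"


async def do_query(sem: asyncio.Semaphore) -> None:
    # Acquiring is effectively instant while a slot is free; once all
    # slots are busy, the task is queued until one is released.
    async with sem:
        await asyncio.sleep(HOLD)


async def main() -> float:
    sem = asyncio.Semaphore(POOL_SIZE)
    t0 = time.monotonic()
    # 10 tasks compete for 5 slots: the first wave of 5 runs at once,
    # the second wave waits one HOLD period, so the total is about
    # 2 * HOLD rather than 10 * HOLD.
    await asyncio.gather(*(do_query(sem) for _ in range(10)))
    return time.monotonic() - t0


elapsed = asyncio.run(main())
print(f"10 tasks through a pool of {POOL_SIZE}: {elapsed:.2f}s")
```

The takeaway under this model: if each websocket issues a short query about once per second, a modest pool should keep queueing time near zero, because each connection is free for most of every second.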