
mongo


1. FAQ: http://api.mongodb.com/python/current/faq.html

Reference: http://api.mongodb.com/python/current/faq.html#how-does-connection-pooling-work-in-pymongo

client = MongoClient(host, port, maxPoolSize=50, waitQueueMultiple=10, waitQueueTimeoutMS=100)

Create this client once for each process, and reuse it for all operations. It is a common mistake to create a new client for each request, which is very inefficient.

Do not create a MongoClient over and over; create it once per process and reuse it.

When 500 threads are waiting for a socket, the 501st that needs a socket raises ExceededMaxWaiters.

A thread that waits more than 100 ms (in this example) for a socket raises ConnectionFailure.

When close() is called by any thread, all idle sockets are closed, and all sockets that are in use will be closed as they are returned to the pool.
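A minimal sketch of the pattern described above: one client per process, shared by all operations, with the pool options from the FAQ example. It assumes PyMongo 3.x (waitQueueMultiple was removed in PyMongo 4.0); the host, port, database mydb, collection users, and the get_user helper are placeholders.

```python
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

# Create the client ONCE per process and reuse it for all operations.
client = MongoClient(
    "localhost",
    27017,
    maxPoolSize=50,           # at most 50 sockets per server in the pool
    waitQueueMultiple=10,     # at most 50 * 10 = 500 threads may wait for a socket
    waitQueueTimeoutMS=100,   # a waiting thread gives up after 100 ms
)

def get_user(user_id):
    try:
        return client.mydb.users.find_one({"_id": user_id})
    except ConnectionFailure:
        # Raised when this thread waited longer than waitQueueTimeoutMS for a socket.
        return None
```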

 

About _id:

PyMongo adds an _id field in this manner for a few reasons:

  • All MongoDB documents are required to have an _id field.
  • If PyMongo were to insert a document without an _id MongoDB would add one itself, but it would not report the value back to PyMongo.
  • Copying the document to insert before adding the _id field would be prohibitively expensive for most high write volume applications.

If you don’t want PyMongo to add an _id to your documents, insert only documents that already have an _id field, added by your application.
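A small sketch of that behavior: insert_one() adds an _id to the dict you pass in, unless the document already carries one. The database mydb and collection docs are placeholders.

```python
from pymongo import MongoClient

client = MongoClient()
coll = client.mydb.docs

doc = {"x": 1}
coll.insert_one(doc)
print(doc["_id"])         # PyMongo added an ObjectId to the original dict in place

doc2 = {"_id": "my-custom-id", "x": 2}
coll.insert_one(doc2)     # nothing is added; the _id supplied by the application is used
```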

 

What does CursorNotFound cursor id not valid at server mean?

Cursors in MongoDB can time out on the server if they have been open for a long time without any operations being performed on them. This can lead to a CursorNotFound exception being raised when attempting to iterate the cursor.

How do I change the timeout value for cursors?

MongoDB doesn’t support custom timeouts for cursors, but cursor timeouts can be turned off entirely. Pass no_cursor_timeout=True to find().
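A minimal sketch of that option. With no_cursor_timeout=True the server's idle-cursor timeout (10 minutes by default) is disabled, so the cursor should be closed explicitly when you are done; the connection settings and the mydb.docs collection are placeholders.

```python
from pymongo import MongoClient

client = MongoClient()            # placeholder connection settings
coll = client.mydb.docs           # hypothetical database/collection

cursor = coll.find({}, no_cursor_timeout=True)
try:
    for doc in cursor:
        print(doc)                # stands in for slow per-document processing
finally:
    cursor.close()                # release the cursor on the server
```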

 

From version 1.3 on, the PHP MongoDB driver connects through the MongoClient class, whose connections are persistent by default, and this cannot be changed.
So the number of connections actually depends on the number of PHP-FPM worker processes. If there are too many FPM workers, you inevitably end up with too many connections: with 1000 FPM workers across all machines you get 1000 persistent connections, and since the MongoDB server spends at least 1 MB of memory per connection, that is 1 GB of memory gone.
The straightforward fix is to call close() after each use, so the server does not have to keep a large number of connections open.
But close() has a pitfall: by default it only closes the write connection (e.g. the master, or the primary of a replica set). To close all connections you must pass true: $mongo->close(true)
Closing the connection after every use effectively reduces the number of concurrent connections on the server, unless your operations themselves are very slow. It has its own cost, though: the previous TCP connection cannot be reused and a new one has to be established each time, which makes connection setup relatively expensive, especially with replica sets, where several TCP connections must be created.
So in the end there are really only two options (see the sketch after this list):
1. Reduce the number of FPM workers.
2. Build your own connection pool, and use it to funnel the many client connections down to a fixed number of connections to MongoDB.
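The discussion above is about PHP-FPM, but the same "funnel everything through a fixed-size pool" idea looks like this in Python (the document's other examples use PyMongo). A minimal sketch: one module-level client per process, so every request handler reuses it and the process never opens more than maxPoolSize connections to MongoDB. Host, port, and the pool size are placeholder values.

```python
from pymongo import MongoClient

_client = None

def get_client():
    """Return the process-wide MongoClient, creating it on first use."""
    global _client
    if _client is None:
        # maxPoolSize caps how many connections this process can open to MongoDB.
        _client = MongoClient("localhost", 27017, maxPoolSize=20)
    return _client
```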


Original article: https://www.cnblogs.com/yuanzhenliu/p/9158071.html
