<!-- Kamailio Pull Request Template -->
<!--
IMPORTANT:
- for detailed contributing guidelines, read: https://github.com/kamailio/kamailio/blob/master/.github/CONTRIBUTING.md
- pull requests must be done to master branch, unless they are backports of fixes from master branch to a stable branch
- backports to stable branches must be done with 'git cherry-pick -x ...'
- code is contributed under BSD for core and main components (tm, sl, auth, tls)
- code is contributed under GPLv2 or a compatible license for the other components
- GPL code is contributed with the OpenSSL licensing exception
-->
#### Pre-Submission Checklist
<!-- Go over all points below, and after creating the PR, tick all the checkboxes that apply -->
<!-- All points should be verified; otherwise, read the CONTRIBUTING guidelines above -->
<!-- If you're unsure about any of these, don't hesitate to ask on the sr-dev mailing list -->
- [ ] Commit message has the format required by CONTRIBUTING guide
- [ ] Commits are split per component (core, individual modules, libs, utils, ...)
- [ ] Each component has a single commit (if not, squash them into one commit)
- [ ] No commits to README files for modules (changes must be done to docbook files in `doc/` subfolder, the README file is autogenerated)
#### Type Of Change
- [x] Small bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds new functionality)
- [ ] Breaking change (fix or feature that would change existing functionality)
#### Checklist:
<!-- Go over all points below, and after creating the PR, tick the checkboxes that apply -->
- [x] PR should be backported to stable branches
- [x] Tested changes locally
- [ ] Related to issue #XXXX (replace XXXX with an open issue number)
#### Description
I can't get big responses via jsonrpcs with the default TCP parameters. According to the log, the response size is ~3200k; Kamailio sends ~2500k of data and then can't add the remaining ~700k to the queue. This patch should fix that, if my understanding of the "queued" value is right.
You can view, comment on, or merge this pull request online at:
https://github.com/kamailio/kamailio/pull/1376
-- Commit Summary --
* tcp: correct queued length checking
-- File Changes --
M src/core/tcp_main.c (2)
Thanks! I will look a bit at the code before merging; the past weeks were too busy...
After a first review, the patch is not correct: the check has to include the size of the data about to be added to the connection queue, otherwise the queue can go over the limit of `tcpconn_wq_max` (corresponding to the config parameter `tcp_conn_wq_max`).
If you have a different opinion, can you provide more details about how you concluded this would be a fix?
It works, judging by: "I can dump the same big htable via jsonrpcs/TCP with this patch, and can't without it." In my mind we have to check only the already-`queued` length, not `queued + size`, but as mentioned before, my understanding of these parameters is superficial.
`queued` gives what was already written to the outbound queue, and the `queued + size` check ensures that what is about to be written does not push the queue over the configured limit.
Have you increased the limits with the global parameters (`tcp_conn_wq_max` and `tcp_wq_max`), or just used the defaults? If you deal with large output, these values have to be adjusted.
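For example, in kamailio.cfg the limits can be raised with the global parameters below. The values here are purely illustrative (sized for the ~3200k response from this thread), not recommended defaults:

```
# illustrative values only -- tune to your expected response sizes
tcp_conn_wq_max=4194304    # max bytes queued for write per connection
tcp_wq_max=16777216        # max total bytes across all write queues
```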
I use the default TCP params; changing `tcp_conn_wq_max` and `tcp_wq_max` can help, of course, according to the if statement.
`q->queued` is zero in my failing test, which seems strange to me. From the docs:
> **tcp_conn_wq_max** - Maximum bytes queued for write allowed per connection. Attempting to queue more bytes would result in an error and in the connection being closed (**too slow**). If tcp_write_buf is not enabled, it has no effect.
So I try to add the last 700k and it always fails. I understand this parameter as covering not-yet-sent data (connection too slow), so if `q->queued < tcp_conn_wq_max`, then the connection is fast enough and I can add more data to the queue.
Returning to this one: maybe you can describe here how this can be reproduced, possibly providing a minimal kamailio.cfg as well as some scripts to do it, and we can analyse whether it is expected behaviour or not.
Here is a minimal config for current master: [kamailio.txt](https://github.com/kamailio/kamailio/files/2170252/kamailio.txt)
Here is an SQL script to load the data: [big_htable_create.txt](https://github.com/kamailio/kamailio/files/2170253/big_htable_create.txt)
How to reproduce:

```
[snen@sw5 kamailio]# cat big_htable_create.txt | mysql kamailio_dev
[snen@sw5 kamailio]# devkamctl restart
[snen@sw5 kamailio]# curl -X GET -H "Content-Type: application/json" -d '{"jsonrpc": "2.0", "method": "htable.dump", "params":["big_htable"], "id": 1 }' http://192.168.10.190:5071/jsonrpc/ > /tmp/big_htable.data
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 59 4181k   59 2479k  100    79  9.7M     318  --:--:-- --:--:-- --:--:-- 9.8M
curl: (18) transfer closed with 1743335 bytes remaining to read
```
Coming back to this one after quite a long time ...
So I just tried to reproduce based on your minimal config and the current master branch. I had to increase the size of PKG memory (`-M 12`), but then all was fine -- the content of `/tmp/big_htable.data` had all 30k records (based on the number of lines printed there, and the closing of the JSON document was OK). No error was printed by Kamailio.
I use `-m64 -M32` for current master and the result is the same (can't read all the response data). But then I remembered the CentOS 7 buffers. The current (and CentOS 7 default) `net.ipv4.tcp_wmem` is "4096 16384 4194304"; after changing it via sysctl to "8194304 8194304 8194304", all works fine...
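For reference, the change described above can be applied roughly like this (values taken from this thread; persisting it under `/etc/sysctl.d/` and the file name are assumptions about the setup):

```
# enlarge the TCP send buffer: min, default, max (bytes)
sysctl -w net.ipv4.tcp_wmem='8194304 8194304 8194304'

# optionally persist across reboots (file name is an example)
echo 'net.ipv4.tcp_wmem = 8194304 8194304 8194304' > /etc/sysctl.d/99-tcp-wmem.conf
```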
Closed #1376.
OK, good that it was sorted out, and thanks for sharing the solution.
I added a section about network buffer sizes to the TCP tuning file:
* https://github.com/kamailio/kamailio/commit/38a696fff66f0a453e54c92c93e8c459...
Closing this one.