Jan 2, 2010

Response time optimisation with Varnish


This blog post shows you how to optimize the tool chain on your server to improve its performance by an order of magnitude without changing a single line in your django project. In order to do so I will again use django-cms [1] as a guinea pig, because there is a fair amount of processing involved in displaying a page, yet it is still easy to install. Note: the django-cms example has the cache middleware activated by default.
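For reference, "cache middleware activated" means the example settings.py contains something like the following. This is a hypothetical sketch in Django 1.1-era style; the backend URI and timeout are my own illustrative values, not taken from the django-cms example:

```python
# Hypothetical settings.py fragment -- illustrative values only.
CACHE_BACKEND = 'locmem://'        # in-process memory cache
CACHE_MIDDLEWARE_SECONDS = 60      # how long a cached page is served

MIDDLEWARE_CLASSES = (
    # UpdateCacheMiddleware must be first and FetchFromCacheMiddleware last,
    # so that a cached response short-circuits the rest of the stack.
    'django.middleware.cache.UpdateCacheMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.cache.FetchFromCacheMiddleware',
)
```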


I will then run ab against a particular page and compare the results. These tests were performed on my laptop, an HP dv6-1030. The important information is not the figures by themselves but rather the variation of the response time.



Before starting my tests I moved django-cms to be mounted under "/". In order to do this you will need to change the configuration in the file called example_uwsgi.py.


import os
import django.core.handlers.wsgi

# Set the django settings and define the wsgi app
os.environ['DJANGO_SETTINGS_MODULE'] = 'example.settings'
application = django.core.handlers.wsgi.WSGIHandler()

# Mount the application to the url
applications = {'/':application, }



Then you need to change the rule in cherokee-admin to reflect this change. Cherokee admin makes this task a breeze.









Before diving head first into the meat of this article, here is a diagram of the architecture that we are going to work with:





The goal of this article is to show you the incredible boost that Varnish can give to certain types of web applications.



Varnish [2] is a state-of-the-art, high-performance HTTP accelerator. It uses advanced features in Linux 2.6, FreeBSD 6/7 and Solaris 10 to achieve its high performance.
Some of the features include:
  • VCL - a very flexible configuration language
  • Load balancing with health checking of backends
  • Partial support for ESI
  • URL rewriting
  • Graceful handling of "dead" backends
  • ...

The bottom line is that just installing it on Ubuntu and using it with a vanilla configuration will increase the responsiveness of your site by an amount that is hard to believe: we are talking about an improvement factor ranging from 50 to 600 times.



The first thing you will want to do is install Varnish [2]. On Ubuntu, Varnish is very easy to install and configure since a package exists for it. Once this is done you will need to define the backend; in Varnish jargon this means telling Varnish where Cherokee is located:


backend default {
  .host = "127.0.0.1";
  .port = "8080";
}
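After restarting Varnish with this backend, a quick sanity check before running any benchmark is to fetch the same URL twice and look at the Age response header, which Varnish sets to a non-zero value on cache hits. A minimal sketch using Python 3's standard library; the URL matches the test setup below:

```python
import urllib.request

def served_from_cache(url):
    """Fetch `url` twice and report whether the second response came
    out of the cache (signalled by a non-zero Age header)."""
    urllib.request.urlopen(url).read()       # first request warms the cache
    second = urllib.request.urlopen(url)
    return int(second.headers.get('Age', '0')) > 0

# Usage against the setup below: served_from_cache('http://192.168.1.18:6081/')
```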



Here are some ab runs that I did to illustrate this article; 8080 and 6081 are the ports for Cherokee and Varnish respectively.

Cherokee

ab -n 100 -c 1 http://192.168.1.18:8080/

This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.1.18 (be patient).....done


Server Software: Cherokee/0.99.37
Server Hostname: 192.168.1.18
Server Port: 8080

Document Path: /
Document Length: 3440 bytes

Concurrency Level: 1
Time taken for tests: 15.285 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 364300 bytes
HTML transferred: 344000 bytes
Requests per second: 6.54 [#/sec] (mean)
Time per request: 152.851 [ms] (mean)
Time per request: 152.851 [ms] (mean, across all concurrent requests)
Transfer rate: 23.28 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 2.4 0 24
Processing: 133 153 17.3 149 235
Waiting: 133 153 17.3 149 235
Total: 134 153 17.2 149 235

Percentage of the requests served within a certain time (ms)
50% 149
66% 158
75% 164
80% 166
90% 172
95% 175
98% 230
99% 235
100% 235 (longest request)


ab -n 100 -c 50 http://192.168.1.18:8080/

This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.1.18 (be patient).....done


Server Software: Cherokee/0.99.37
Server Hostname: 192.168.1.18
Server Port: 8080

Document Path: /
Document Length: 3440 bytes

Concurrency Level: 50
Time taken for tests: 8.202 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 364300 bytes
HTML transferred: 344000 bytes
Requests per second: 12.19 [#/sec] (mean)
Time per request: 4101.021 [ms] (mean)
Time per request: 82.020 [ms] (mean, across all concurrent requests)
Transfer rate: 43.37 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 1.1 1 3
Processing: 740 3283 1158.0 3906 4438
Waiting: 740 3283 1158.0 3906 4438
Total: 743 3284 1157.1 3906 4438

Percentage of the requests served within a certain time (ms)
50% 3906
66% 4048
75% 4112
80% 4182
90% 4285
95% 4341
98% 4359
99% 4438
100% 4438 (longest request)

ab -n 100 -c 100 http://192.168.1.18:8080/

This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.1.18 (be patient).....done


Server Software: Cherokee/0.99.37
Server Hostname: 192.168.1.18
Server Port: 8080

Document Path: /
Document Length: 3440 bytes

Concurrency Level: 100
Time taken for tests: 8.236 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 364300 bytes
HTML transferred: 344000 bytes
Requests per second: 12.14 [#/sec] (mean)
Time per request: 8235.626 [ms] (mean)
Time per request: 82.356 [ms] (mean, across all concurrent requests)
Transfer rate: 43.20 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 3 4 0.8 4 5
Processing: 699 4533 2337.0 4650 8228
Waiting: 699 4533 2337.0 4650 8228
Total: 704 4537 2336.2 4654 8230

Percentage of the requests served within a certain time (ms)
50% 4654
66% 5884
75% 6717
80% 7017
90% 7749
95% 8115
98% 8221
99% 8230
100% 8230 (longest request)

Varnish


ab -n 100 -c 1 http://192.168.1.18:6081/

This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.1.18 (be patient).....done


Server Software: Cherokee/0.99.37
Server Hostname: 192.168.1.18
Server Port: 6081

Document Path: /
Document Length: 3440 bytes

Concurrency Level: 1
Time taken for tests: 0.030 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 374800 bytes
HTML transferred: 344000 bytes
Requests per second: 3320.49 [#/sec] (mean)
Time per request: 0.301 [ms] (mean)
Time per request: 0.301 [ms] (mean, across all concurrent requests)
Transfer rate: 12153.53 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 0 0 0.9 0 8
Waiting: 0 0 0.9 0 8
Total: 0 0 0.9 0 8

Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 0
95% 0
98% 4
99% 8
100% 8 (longest request)

ab -n 100 -c 50 http://192.168.1.18:6081/

This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.1.18 (be patient).....done


Server Software: Cherokee/0.99.37
Server Hostname: 192.168.1.18
Server Port: 6081

Document Path: /
Document Length: 3440 bytes

Concurrency Level: 50
Time taken for tests: 0.012 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 378548 bytes
HTML transferred: 347440 bytes
Requests per second: 8522.97 [#/sec] (mean)
Time per request: 5.866 [ms] (mean)
Time per request: 0.117 [ms] (mean, across all concurrent requests)
Transfer rate: 31507.35 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 1.2 0 3
Processing: 0 3 2.0 3 8
Waiting: 0 3 2.0 2 8
Total: 0 4 2.8 4 11

Percentage of the requests served within a certain time (ms)
50% 4
66% 5
75% 6
80% 7
90% 9
95% 10
98% 11
99% 11
100% 11 (longest request)

ab -n 100 -c 100 http://192.168.1.18:6081/

This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.1.18 (be patient).....done


Server Software: Cherokee/0.99.37
Server Hostname: 192.168.1.18
Server Port: 6081

Document Path: /
Document Length: 3440 bytes

Concurrency Level: 100
Time taken for tests: 0.013 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 374800 bytes
HTML transferred: 344000 bytes
Requests per second: 7662.25 [#/sec] (mean)
Time per request: 13.051 [ms] (mean)
Time per request: 0.131 [ms] (mean, across all concurrent requests)
Transfer rate: 28045.03 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 2 3 0.6 3 4
Processing: 7 7 0.7 7 9
Waiting: 5 6 0.6 6 7
Total: 9 10 1.2 10 13

Percentage of the requests served within a certain time (ms)
50% 10
66% 10
75% 10
80% 11
90% 12
95% 13
98% 13
99% 13
100% 13 (longest request)
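Dividing the requests-per-second figures from the runs above gives the actual improvement factor for this particular page; a quick back-of-the-envelope computation:

```python
# (concurrency, Cherokee req/s, Varnish req/s) copied from the ab output above
runs = [(1, 6.54, 3320.49), (50, 12.19, 8522.97), (100, 12.14, 7662.25)]

for concurrency, cherokee, varnish in runs:
    print("c=%3d: about %.0fx faster" % (concurrency, varnish / cherokee))
```

For this page the factors come out around 508x, 699x and 631x.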

Conclusion

Installing Varnish in front of your web server is probably the first step you should take in the endless journey of optimising your web application. It is interesting to note that in addition to dramatically improving the response time, Varnish will also reduce the load on your application server stack (uWSGI + django + db).



This blog post barely scratches the surface of how django can take advantage of caches. Django gives you the possibility to cache information at different stages during the request/response cycle: you can cache the output of specific views, cache only the pieces that are expensive to produce, cache a portion of a template, or cache your entire site. Django also works well with "upstream" caches, such as Varnish and browser-based caches. These are the types of caches that you don't directly control but to which you can provide hints (via HTTP headers) about which parts of your site should be cached, and how. If you want more information about this you can read django's cache documentation [3].
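To make the "upstream cache" idea concrete, here is a toy in-memory version of what Varnish does with a max-age hint from the backend. This is purely illustrative; a real cache also deals with Vary, cookies, purging, and much more:

```python
import time

class UpstreamCache:
    """Toy HTTP-ish cache: keyed by URL, honours a backend-supplied max-age."""

    def __init__(self, backend):
        self.backend = backend   # callable: url -> (body, max_age_seconds)
        self.store = {}          # url -> (body, expiry_timestamp)

    def get(self, url):
        entry = self.store.get(url)
        if entry is not None and entry[1] > time.time():
            return entry[0]                    # hit: the backend is never touched
        body, max_age = self.backend(url)      # miss: ask the application server
        self.store[url] = (body, time.time() + max_age)
        return body
```

On a hit the whole uWSGI + django + db stack is bypassed, which is exactly why the Varnish figures above barely move when concurrency grows.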

Varnish is also a beast by itself: you can fine-tune it to suit your particular situation, and you can use it to do much more in your infrastructure than just act as an upstream cache for your dynamic web site.





[1] http://yml-blog.blogspot.com/2009/12/flup-vs-uwsgi-with-cherokee.html
[2] http://varnish.projects.linpro.no/
[3] http://docs.djangoproject.com/en/1.1/topics/cache/#topics-cache

Dec 29, 2009

flup vs uWSGI with cherokee

Cherokee is one of the many web servers that support both deployment strategies. Since I have recently blogged about uWSGI [3], several people have asked me how the two compare performance wise. Up to now my answer was: I don't know, and I don't really care, because in my opinion this is not the most critical aspect.

However, since this question keeps coming up I have decided to give it a more accurate answer. This blog post walks you through setting up cherokee with both alternatives and comparing their performance. The guinea pig application I have chosen for this comparison is django-cms, because I think it represents fairly well the overhead introduced by the dynamic generation of a page.

Note: in this example django-cms will not be mounted under "/". This causes all sorts of issues elsewhere in the site, but none of them impact the particular page that we test against.

One of Cherokee's nice features is its documentation; here are the two pages related to this blog post:
flup [1]
uwsgi [2]

In this blog post I will assume that django-cms is installed, its example project is properly configured, and that you have added a page to test against. I will also assume that cherokee, flup and uwsgi are set up.
During this test I have used uwsgi changeset: 145:db356717823c.

Cherokee provides two wizards that enable you to get running very fast. First things first, you will need to start cherokee-admin:


sudo cherokee-admin -u

media admin
Add a rule to serve "/media/admin"; in my case these files are in "/opt/www/django-cms_tutorial/ve/lib/python2.6/site-packages/django/contrib/admin/media".



media cms
Add a rule to serve "/media/cms"; in my case these files are in "/opt/www/django-cms_tutorial/ve/src/django-cms/cms/media/cms".



Flup


In this case you will need to use the wizard called "Platforms > django". This wizard takes the web directory and the project directory. In our case:

web directory : /flup
project directory : /opt/www/django-cms_tutorial/ve/src/django-cms/example






Then we need to modify the interpreter command of the "django 1" information source, because we use a virtualenv instead of the global python.
So we need to change it from:


python /opt/www/django-cms_tutorial/ve/src/django-cms/example/manage.py runfcgi protocol=scgi host=127.0.0.1 port=37134
to:


/opt/www/django-cms_tutorial/ve/src/django-cms/example/start_fcgi.sh


Then you need to create the file called start_fcgi.sh:

example$ cat > start_fcgi.sh << EOF
> source /opt/www/django-cms_tutorial/ve/bin/activate;
> python /opt/www/django-cms_tutorial/ve/src/django-cms/example/manage.py runfcgi protocol=scgi host=127.0.0.1 port=37134
> EOF


chmod +x start_fcgi.sh

Restart cherokee and point your browser to "/flup/".

uWSGI


In order to set up the uwsgi server, we are going to use Cherokee's wizard and then modify the result to adapt it to our particular use case.

The uwsgi wizard takes only one argument, the path to the uwsgi configuration file:

/opt/www/django-cms_tutorial/ve/src/django-cms/example/example_uwsgi.py





We are going to edit the interpreter command to adapt it a bit to our use case, from:


/usr/local/bin/uwsgi -s 127.0.0.1:42597 -t 10 -M -p 1 -C -w example.example_uwsgi -H /opt/www/django-cms_tutorial/ve/src/django-cms

to:

/usr/local/bin/uwsgi -s 127.0.0.1:46075 -t 10 -M -p 10 -C -w example.example_uwsgi -H /opt/www/django-cms_tutorial/ve/

The file called example_uwsgi.py looks like this:


import os
import django.core.handlers.wsgi
# Set the django settings and define the wsgi app
os.environ['DJANGO_SETTINGS_MODULE'] = 'example.settings'
application = django.core.handlers.wsgi.WSGIHandler()
# Mount the application to the url
applications = {'/uwsgi':application, }

Comparison: uwsgi vs flup

flup


Resource usage for concurrency equal to 50:






(ve)yml@yml-laptop:django-cms_tutorial$ ab -n 1000 -c 50 http://192.168.1.18:8080/flup/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.1.18 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software: Cherokee/0.99.37
Server Hostname: 192.168.1.18
Server Port: 8080

Document Path: /flup/
Document Length: 3485 bytes

Concurrency Level: 50
Time taken for tests: 82.385 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 3688000 bytes
HTML transferred: 3485000 bytes
Requests per second: 12.14 [#/sec] (mean)
Time per request: 4119.256 [ms] (mean)
Time per request: 82.385 [ms] (mean, across all concurrent requests)
Transfer rate: 43.72 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.4 0 3
Processing: 750 4042 1424.0 3791 11665
Waiting: 750 4041 1424.0 3791 11665
Total: 753 4042 1424.2 3791 11666

Percentage of the requests served within a certain time (ms)
50% 3791
66% 4098
75% 4350
80% 4509
90% 4972
95% 5566
98% 10388
99% 10974
100% 11666 (longest request)

(ve)yml@yml-laptop:django-cms_tutorial$ ab -n 1000 -c 100 http://192.168.1.18:8080/flup/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.1.18 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software: Cherokee/0.99.37
Server Hostname: 192.168.1.18
Server Port: 8080

Document Path: /flup/
Document Length: 3485 bytes

Concurrency Level: 100
Time taken for tests: 82.614 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 3688000 bytes
HTML transferred: 3485000 bytes
Requests per second: 12.10 [#/sec] (mean)
Time per request: 8261.424 [ms] (mean)
Time per request: 82.614 [ms] (mean, across all concurrent requests)
Transfer rate: 43.59 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.8 0 3
Processing: 1209 7920 1519.3 7694 14118
Waiting: 1209 7920 1518.1 7694 14118
Total: 1213 7920 1519.5 7694 14120

Percentage of the requests served within a certain time (ms)
50% 7694
66% 8161
75% 8405
80% 8574
90% 9215
95% 10456
98% 13048
99% 13983
100% 14120 (longest request)


Server Software: Cherokee/0.99.37
Server Hostname: 192.168.1.18
Server Port: 8080

Document Path: /flup/
Document Length: 3485 bytes

Concurrency Level: 100
Time taken for tests: 81.737 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 3688000 bytes
HTML transferred: 3485000 bytes
Requests per second: 12.23 [#/sec] (mean)
Time per request: 8173.722 [ms] (mean)
Time per request: 81.737 [ms] (mean, across all concurrent requests)
Transfer rate: 44.06 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 1.2 0 5
Processing: 830 7843 1455.0 7574 14385
Waiting: 830 7843 1455.0 7574 14385
Total: 835 7843 1455.3 7574 14388

Percentage of the requests served within a certain time (ms)
50% 7574
66% 7912
75% 8137
80% 8335
90% 8893
95% 10624
98% 13435
99% 13670
100% 14388 (longest request)

Resource usage for concurrency equal to 100:




uwsgi


uWSGI resource usage for concurrency equal to 50:






(ve)yml@yml-laptop:django-cms_tutorial$ ab -n 1000 -c 50 http://192.168.1.18:8080/uwsgi/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.1.18 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software: Cherokee/0.99.37
Server Hostname: 192.168.1.18
Server Port: 8080

Document Path: /uwsgi/
Document Length: 3494 bytes

Concurrency Level: 50
Time taken for tests: 84.013 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 3697000 bytes
HTML transferred: 3494000 bytes
Requests per second: 11.90 [#/sec] (mean)
Time per request: 4200.629 [ms] (mean)
Time per request: 84.013 [ms] (mean, across all concurrent requests)
Transfer rate: 42.97 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.3 0 2
Processing: 682 4115 472.7 4189 4707
Waiting: 682 4115 472.7 4189 4707
Total: 684 4115 472.5 4189 4707

Percentage of the requests served within a certain time (ms)
50% 4189
66% 4246
75% 4286
80% 4316
90% 4378
95% 4440
98% 4515
99% 4544
100% 4707 (longest request)
(ve)yml@yml-laptop:django-cms_tutorial$ ab -n 1000 -c 100 http://192.168.1.18:8080/uwsgi/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.1.18 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
apr_socket_recv: Connection reset by peer (104)
Total of 839 requests completed

This reveals the conservative nature of uwsgi's default settings. I had a discussion about this with Roberto De Ioris, who gave a detailed answer explaining the situation. The bottom line is that we need to modify our configuration to increase the socket timeout ("-z") and the socket listen queue ("-l"):


/usr/local/bin/uwsgi -i -l 120 -z 60 -p 10 -M -s 127.0.0.1:46075 -w example.example_uwsgi -H /opt/www/django-cms_tutorial/ve/

(ve)yml@yml-laptop:django-cms_tutorial$ ab -n 1000 -c 100 http://192.168.1.18:8080/uwsgi/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.1.18 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software: Cherokee/0.99.37
Server Hostname: 192.168.1.18
Server Port: 8080

Document Path: /uwsgi/
Document Length: 3494 bytes

Concurrency Level: 100
Time taken for tests: 86.246 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 3697000 bytes
HTML transferred: 3494000 bytes
Requests per second: 11.59 [#/sec] (mean)
Time per request: 8624.594 [ms] (mean)
Time per request: 86.246 [ms] (mean, across all concurrent requests)
Transfer rate: 41.86 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 1.7 0 7
Processing: 762 8248 1460.7 8509 9706
Waiting: 762 8248 1460.7 8509 9706
Total: 766 8248 1459.5 8509 9706

Percentage of the requests served within a certain time (ms)
50% 8509
66% 8669
75% 8852
80% 8998
90% 9182
95% 9360
98% 9474
99% 9566
100% 9706 (longest request)

uWSGI resource usage for concurrency equal to 100:





Conclusion

flup is slightly faster than uWSGI at this point, but this has to be put into perspective: you also need to take into consideration the features that come with uWSGI (an exhaustive list can be found on the project site [3]). uWSGI is unable to complete the ab test with the arguments "-n 1000 -c 100" under its default settings; you need to adjust the socket timeout and the socket listen queue. However, it is interesting to note that the memory footprint of uwsgi is lower by an order of magnitude, and that my laptop remained responsive during the ab test using uWSGI, whereas it was almost taken down by the same test with flup.

[1] http://www.cherokee-project.com/doc/cookbook_django.html
[2] http://www.cherokee-project.com/doc/cookbook_uwsgi.html
[3] http://projects.unbit.it/uwsgi/

Dec 28, 2009

Add bpython to django's shell management command

Like many of us I spend a lot of time in the REPL developing django code, and this bug in readline costs me an extra backspace each time I hit "tab" in ipython.

A blog post caught my attention, so I decided to give bpython a try, and so far I have been impressed by what I have seen. Here is a management command that adds bpython support to django's shell management command.


import os
from optparse import make_option

from django.core.management.base import NoArgsCommand

def start_plain_shell():
    import code
    # Set up a dictionary to serve as the environment for the shell, so
    # that tab completion works on objects that are imported at runtime.
    # See ticket 5082.
    imported_objects = {}
    try: # Try activating rlcompleter, because it's handy.
        import readline
    except ImportError:
        pass
    else:
        # We don't have to wrap the following import in a 'try', because
        # we already know 'readline' was imported successfully.
        import rlcompleter
        readline.set_completer(rlcompleter.Completer(imported_objects).complete)
        readline.parse_and_bind("tab:complete")

    # We want to honor both $PYTHONSTARTUP and .pythonrc.py, so follow system
    # conventions and get $PYTHONSTARTUP first then import user.
    pythonrc = os.environ.get("PYTHONSTARTUP")
    if pythonrc and os.path.isfile(pythonrc):
        try:
            execfile(pythonrc)
        except NameError:
            pass
    # This will import .pythonrc.py as a side-effect
    import user
    code.interact(local=imported_objects)

def start_ipython_shell():
    import IPython
    # Explicitly pass an empty list as arguments, because otherwise IPython
    # would use sys.argv from this script.
    shell = IPython.Shell.IPShell(argv=[])
    shell.mainloop()
    
def start_bpython_shell():
    from bpython import cli
    cli.main(args=[])
    

class Command(NoArgsCommand):
    option_list = NoArgsCommand.option_list + (
        make_option('--plain', action='store_true', dest='plain',
            help='Tells Django to use plain Python, not IPython.'),
        make_option('--ipython', action='store_true', dest='ipython',
            help='Tells Django to use ipython.'),
        make_option('--bpython', action='store_true', dest='bpython',
            help='Tells Django to use bpython.'),
    )
    help = "Runs a Python interactive interpreter. Tries to use IPython, if it's available."

    requires_model_validation = False

    def handle_noargs(self, **options):
        # XXX: (Temporary) workaround for ticket #1796: force early loading of all
        # models from installed apps.
        from django.db.models.loading import get_models
        loaded_models = get_models()

        use_plain = options.get('plain', False)
        use_ipython = options.get('ipython', False)
        use_bpython = options.get('bpython', False)
        
        try:
            if use_plain:
                # Don't bother loading IPython, because the user wants plain Python.
                raise ImportError
            elif use_ipython:
                start_ipython_shell()
            elif use_bpython:
                start_bpython_shell()
            else:
                # backward compatible behavior.
                start_ipython_shell()

        except ImportError:
            # fallback to plain shell if we encounter an ImportError
            start_plain_shell()

Is there an existing debugger that uses bpython? I am looking for an equivalent of ipdb for bpython, something that could replace:


import ipdb; ipdb.set_trace()
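I have not found one; as a stopgap, I imagine a helper along these lines could work, reusing the cli.main entry point from the management command above. This is an untested sketch, and the locals_ keyword is an assumption about bpython's API:

```python
import sys

def bpython_trace():
    """Drop into a bpython shell with the caller's variables,
    roughly like ipdb.set_trace().  The locals_ argument is an
    assumption about bpython.cli.main's signature."""
    from bpython import cli
    frame = sys._getframe(1)          # the frame that called us
    scope = dict(frame.f_globals)
    scope.update(frame.f_locals)
    cli.main(args=[], locals_=scope)
```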