Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
131,327 | 2008-09-25T03:25:00.000 | 2 | 0 | 0 | 0 | python,django | 136,399 | 2 | true | 1 | 0 | If it's core functionality for saving the model, you'll want it as part of the save method. However, if you already have a functioning model and you want to extend it for other purposes, then signals are your best bet since they allow for properly decoupled modules.
A good example might be that you want to add event logging to your site, so you simply listen for the signals that signify an event rather than modifying the original site code.
post_save() is usually best because it means the model has been successfully saved; using pre_save() doesn't guarantee that the save will be successful, so it shouldn't be used for anything that would depend on the save being completed. | 2 | 1 | 0 | I plan to serialize a Django model to XML when it's saved or updated. (The XML's going to be imported into a Flash movie.) Is it better to listen for a post_save() or pre_save() signal and then perform the serialization, or to just handle it in the model's save() method? | Style - When to serialize a Django model Instance: signals vs model's save method | 1.2 | 0 | 0 | 1,205 |
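The decoupling argument in the accepted answer can be sketched without Django installed; the stand-in dispatcher below only mimics the shape of Django's post_save signal (the names mirror Django's API, but this is not Django code):

```python
# Minimal stand-in for a signal dispatcher, illustrating why listeners
# stay decoupled from the model's save() code.
class Signal:
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender, **kwargs):
        for receiver in self._receivers:
            receiver(sender, **kwargs)

post_save = Signal()  # fired only after a save has succeeded

class Model:
    def __init__(self, name):
        self.name = name

    def save(self):
        # ... persist to the database here ...
        post_save.send(self, created=True)

# Event logging is added without touching Model.save() at all.
events = []
post_save.connect(lambda sender, **kw: events.append(sender.name))

Model("article").save()
print(events)  # ['article']
```

The point of the sketch: the model never learns about its listeners, which is exactly the decoupling the answer recommends signals for.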
131,327 | 2008-09-25T03:25:00.000 | 0 | 0 | 0 | 0 | python,django | 131,383 | 2 | false | 1 | 0 | Post save. That way the new data (the reason for performing the serialization) is already in the database. It'll make for a much cleaner bit of code that simply takes from the database and doesn't have to worry about adding an extra value.
The other way that comes to mind is to maintain the XML file in parallel to the database. That is to say, in your save() add the data to the database, and to the XML file. This would have much less overhead if you're dealing with huge tables. | 2 | 1 | 0 | I plan to serialize a Django model to XML when it's saved or updated. (The XML's going to be imported into a Flash movie.) Is it better to listen for a post_save() or pre_save() signal and then perform the serialization, or to just handle it in the model's save() method? | Style - When to serialize a Django model Instance: signals vs model's save method | 0 | 0 | 0 | 1,205 |
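For the serialization step itself, a minimal sketch using only the standard library (the field names here are invented for illustration, and a flat dict stands in for the model instance):

```python
import xml.etree.ElementTree as ET

def model_to_xml(instance_dict, tag="model"):
    """Serialize a flat dict of model fields to an XML string."""
    root = ET.Element(tag)
    for field, value in instance_dict.items():
        child = ET.SubElement(root, field)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_doc = model_to_xml({"id": 1, "title": "Hello"})
print(xml_doc)  # <model><id>1</id><title>Hello</title></model>
```

In a real post_save handler this function would be called with the saved instance's fields and the result written to the parallel XML file.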
134,314 | 2008-09-25T16:18:00.000 | 0 | 0 | 1 | 0 | ironpython,ironruby | 134,329 | 4 | false | 0 | 0 | We don't actively track these kinds of numbers, but you could download them and run them against the respective test suites for the languages if you wanted to boil it down to a single numeric value. | 1 | 4 | 0 | Does anyone have some numbers on this? I am just looking for a percentage; a summary would be better.
Standards compliance: How does the implementation stack up to the standard language specification?
For those still unclear: I place emphasis on current. The IronPython link provided below has info that was last edited more than 2 years back. | Current standard compliance level of IronPython & IronRuby | 0 | 0 | 0 | 350 |
135,041 | 2008-09-25T18:24:00.000 | 2 | 0 | 1 | 0 | python,range,xrange | 135,531 | 12 | false | 0 | 0 | Okay, everyone here has a different opinion as to the tradeoffs and advantages of xrange versus range. They're mostly correct: xrange is an iterator, and range fleshes out and creates an actual list. For the majority of cases, you won't really notice a difference between the two. (You can slice the list from range but not an xrange object, and range uses up more memory.)
What I think you really want to hear, however, is that the preferred choice is xrange. Since range in Python 3 is an iterator, the code conversion tool 2to3 will correctly convert all uses of xrange to range, and will throw out an error or warning for uses of range. If you want to be sure to easily convert your code in the future, you'll use xrange only, and list(xrange) when you're sure that you want a list. I learned this during the CPython sprint at PyCon this year (2008) in Chicago.
135,041 | 2008-09-25T18:24:00.000 | 4 | 0 | 1 | 0 | python,range,xrange | 135,081 | 12 | false | 0 | 0 | Go with range for these reasons:
1) xrange will be going away in newer Python versions. This gives you easy future compatibility.
2) range will take on the efficiencies associated with xrange. | 4 | 462 | 0 | Why or why not? | Should you always favor xrange() over range()? | 0.066568 | 0 | 0 | 213,342 |
135,041 | 2008-09-25T18:24:00.000 | 13 | 0 | 1 | 0 | python,range,xrange | 135,070 | 12 | false | 0 | 0 | xrange() is more efficient because instead of generating a list of objects, it just generates one object at a time. Instead of 100 integers, and all of their overhead, and the list to put them in, you just have one integer at a time. Faster generation, better memory use, more efficient code.
Unless I specifically need a list for something, I always favor xrange() | 4 | 462 | 0 | Why or why not? | Should you always favor xrange() over range()? | 1 | 0 | 0 | 213,342 |
135,041 | 2008-09-25T18:24:00.000 | 42 | 0 | 1 | 0 | python,range,xrange | 135,074 | 12 | false | 0 | 0 | You should favour range() over xrange() only when you need an actual list. For instance, when you want to modify the list returned by range(), or when you wish to slice it. For iteration or even just normal indexing, xrange() will work fine (and usually much more efficiently). There is a point where range() is a bit faster than xrange() for very small lists, but depending on your hardware and various other details, the break-even can be at a result of length 1 or 2; not something to worry about. Prefer xrange(). | 4 | 462 | 0 | Why or why not? | Should you always favor xrange() over range()? | 1 | 0 | 0 | 213,342 |
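The memory tradeoff described in the answers above is easy to measure. In Python 3, range already behaves like Python 2's xrange, so the old distinction can be demonstrated by materializing the lazy object into a list:

```python
import sys

lazy = range(1_000_000)         # constant-size object, like Python 2's xrange
eager = list(range(1_000_000))  # materializes every integer up front

print(sys.getsizeof(lazy))   # a few dozen bytes, regardless of length
print(sys.getsizeof(eager))  # several megabytes for the list alone
```

Iteration over either produces the same values; only the eager version pays the full memory cost up front, which is the core of the xrange recommendation.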
135,169 | 2008-09-25T18:46:00.000 | 1 | 0 | 0 | 1 | python,django,google-app-engine,web-applications | 139,634 | 7 | false | 1 | 0 | If your app solely relies on Django, then App Engine is a good bet. However, if you ever need to add C-enhanced libraries, you're up a creek. App Engine doesn't support things like PIL or ReportLab, which use C to speed up processing times. I'm only mentioning this because you may want to use C to speed up some of your routines in the long run.
If you decide to use a co-loc, check out WebFaction.com. They have great Django/Python support and they have no issue with you using the aforementioned libraries. | 4 | 7 | 0 | I'm building a Lifestreaming app that will involve pulling down lots of feeds for lots of users, and performing data-mining, and machine learning algorithms on the results. GAE's load balanced and scalable hosting sounds like a good fit for a system that could eventually be moving around a LOT of data, but it's lack of cron jobs is a nuisance. Would I be better off using Django on a co-loc and dealing with my own DB scaling? | Is Google App Engine a worthy platform for a Lifestreaming app? | 0.028564 | 0 | 0 | 1,087 |
135,169 | 2008-09-25T18:46:00.000 | 0 | 0 | 0 | 1 | python,django,google-app-engine,web-applications | 135,223 | 7 | false | 1 | 0 | Pulling feeds or doing calculations won't be a problem. But you'll soon have to pay for your account. App engine includes Django, except you'll need to work with some adaptors for the model part. It will surely save you from maintenance headaches. | 4 | 7 | 0 | I'm building a Lifestreaming app that will involve pulling down lots of feeds for lots of users, and performing data-mining, and machine learning algorithms on the results. GAE's load balanced and scalable hosting sounds like a good fit for a system that could eventually be moving around a LOT of data, but it's lack of cron jobs is a nuisance. Would I be better off using Django on a co-loc and dealing with my own DB scaling? | Is Google App Engine a worthy platform for a Lifestreaming app? | 0 | 0 | 0 | 1,087 |
135,169 | 2008-09-25T18:46:00.000 | 0 | 0 | 0 | 1 | python,django,google-app-engine,web-applications | 135,201 | 7 | false | 1 | 0 | No. If you need to pull lots of things down, App Engine isn't going to work so well. You can use it as a front end by putting your data in their store after doing your offline preprocessing, but you can't do much in the ~1 second time you have per request without doing some really crazy things.
Your app would likely be better off on your own hosting. | 4 | 7 | 0 | I'm building a Lifestreaming app that will involve pulling down lots of feeds for lots of users, and performing data-mining, and machine learning algorithms on the results. GAE's load balanced and scalable hosting sounds like a good fit for a system that could eventually be moving around a LOT of data, but it's lack of cron jobs is a nuisance. Would I be better off using Django on a co-loc and dealing with my own DB scaling? | Is Google App Engine a worthy platform for a Lifestreaming app? | 0 | 0 | 0 | 1,087 |
135,169 | 2008-09-25T18:46:00.000 | 3 | 0 | 0 | 1 | python,django,google-app-engine,web-applications | 135,199 | 7 | false | 1 | 0 | It might change when they offer paid plans, but as it stands, App Engine is not good for CPU intensive apps. It is designed to scale to handle a large number of requests, not necessarily a large amount of calculation per request. I am running into this issue with fairly minor calculations, and I fear I may have to start looking elsewhere as my data set grows. | 4 | 7 | 0 | I'm building a Lifestreaming app that will involve pulling down lots of feeds for lots of users, and performing data-mining, and machine learning algorithms on the results. GAE's load balanced and scalable hosting sounds like a good fit for a system that could eventually be moving around a LOT of data, but it's lack of cron jobs is a nuisance. Would I be better off using Django on a co-loc and dealing with my own DB scaling? | Is Google App Engine a worthy platform for a Lifestreaming app? | 0.085505 | 0 | 0 | 1,087 |
135,834 | 2008-09-25T20:29:00.000 | 70 | 0 | 0 | 0 | python,c++,swig,ctypes,ffi | 135,966 | 10 | true | 0 | 0 | SWIG generates (rather ugly) C or C++ code. It is straightforward to use for simple functions (things that can be translated directly) and reasonably easy to use for more complex functions (such as functions with output parameters that need an extra translation step to represent in Python.) For more powerful interfacing you often need to write bits of C as part of the interface file. For anything but simple use you will need to know about CPython and how it represents objects -- not hard, but something to keep in mind.
ctypes allows you to directly access C functions, structures and other data, and load arbitrary shared libraries. You do not need to write any C for this, but you do need to understand how C works. It is, you could argue, the flip side of SWIG: it doesn't generate code and it doesn't require a compiler at runtime, but for anything but simple use it does require that you understand how things like C datatypes, casting, memory management and alignment work. You also need to manually or automatically translate C structs, unions and arrays into the equivalent ctypes datastructure, including the right memory layout.
It is likely that in pure execution, SWIG is faster than ctypes -- because the management around the actual work is done in C at compile time rather than in Python at runtime. However, unless you interface a lot of different C functions but each only a few times, it's unlikely the overhead will be really noticeable.
In development time, ctypes has a much lower startup cost: you don't have to learn about interface files, you don't have to generate .c files and compile them, you don't have to check out and silence warnings. You can just jump in and start using a single C function with minimal effort, then expand it to more. And you get to test and try things out directly in the Python interpreter. Wrapping lots of code is somewhat tedious, although there are attempts to make that simpler (like ctypes-configure.)
SWIG, on the other hand, can be used to generate wrappers for multiple languages (barring language-specific details that need filling in, like the custom C code I mentioned above.) When wrapping lots and lots of code that SWIG can handle with little help, the code generation can also be a lot simpler to set up than the ctypes equivalents. | 6 | 61 | 0 | In python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s). What are the performance metrics of the two? | Python: SWIG vs ctypes | 1.2 | 0 | 0 | 25,826 |
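The low startup cost claimed for ctypes above is easy to demonstrate: calling a C function from the system math library takes a few lines, with no interface file and no compile step. This sketch assumes a POSIX system where libm can be located:

```python
import ctypes
import ctypes.util

# Locate and load the C math library; no interface file, no compiler.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes marshals doubles correctly;
# without this, ctypes would assume int arguments and return values.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))  # 1.4142135623730951
```

This is the "jump in and try it in the interpreter" workflow the answer describes; the equivalent SWIG route would require an interface file and a compile/link step first.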
135,834 | 2008-09-25T20:29:00.000 | 3 | 0 | 0 | 0 | python,c++,swig,ctypes,ffi | 5,113,986 | 10 | false | 0 | 0 | Something to keep in mind is that SWIG targets only the CPython implementation. Since ctypes is also supported by the PyPy and IronPython implementations it may be worth writing your modules with ctypes for compatibility with the wider Python ecosystem. | 6 | 61 | 0 | In python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s). What are the performance metrics of the two? | Python: SWIG vs ctypes | 0.059928 | 0 | 0 | 25,826 |
135,834 | 2008-09-25T20:29:00.000 | 5 | 0 | 0 | 0 | python,c++,swig,ctypes,ffi | 6,936,927 | 10 | false | 0 | 0 | Just wanted to add a few more considerations that I didn't see mentioned yet.
[EDIT: Oops, didn't see Mike Steder's answer]
If you want to try using a non-CPython implementation (like PyPy, IronPython or Jython), then ctypes is about the only way to go. PyPy doesn't allow writing C extensions, so that rules out Pyrex/Cython and Boost.Python. For the same reason, ctypes is the only mechanism that will work for IronPython and (eventually, once they get it all working) Jython.
As someone else mentioned, no compilation is required. This means that if a new version of the .dll or .so comes out, you can just drop it in and load that new version. As long as none of the interfaces changed, it's a drop-in replacement. | 6 | 61 | 0 | In python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s). What are the performance metrics of the two? | Python: SWIG vs ctypes | 0.099668 | 0 | 0 | 25,826 |
135,834 | 2008-09-25T20:29:00.000 | 7 | 0 | 0 | 0 | python,c++,swig,ctypes,ffi | 136,232 | 10 | false | 0 | 0 | ctypes is great, but does not handle C++ classes. I've also found ctypes is about 10% slower than a direct C binding, but that will highly depend on what you are calling.
If you are going to go with ctypes, definitely check out the Pyglet and Pyopengl projects, that have massive examples of ctype bindings. | 6 | 61 | 0 | In python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s). What are the performance metrics of the two? | Python: SWIG vs ctypes | 1 | 0 | 0 | 25,826 |
135,834 | 2008-09-25T20:29:00.000 | 11 | 0 | 0 | 0 | python,c++,swig,ctypes,ffi | 463,848 | 10 | false | 0 | 0 | In my experience, ctypes does have a big disadvantage: when something goes wrong (and it invariably will for any complex interfaces), it's hell to debug.
The problem is that a big part of your stack is obscured by ctypes/ffi magic and there is no easy way to determine how you got to a particular point and why parameter values are what they are. | 6 | 61 | 0 | In python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s). What are the performance metrics of the two? | Python: SWIG vs ctypes | 1 | 0 | 0 | 25,826 |
135,834 | 2008-09-25T20:29:00.000 | 1 | 0 | 0 | 0 | python,c++,swig,ctypes,ffi | 16,795,658 | 10 | false | 0 | 0 | I have found SWIG to be a little bloated in its approach (in general, not just Python) and difficult to implement without having to cross the sore point of writing Python code with an explicit mindset to be SWIG-friendly, rather than writing clean, well-written Python code. It is, IMHO, a much more straightforward process to write C bindings to C++ (if using C++) and then use ctypes to interface to any C layer.
If the library you are interfacing to has a C interface as part of the library, another advantage of ctypes is that you don't have to compile a separate Python-binding library to access third-party libraries. This is particularly nice in formulating a pure-Python solution that avoids cross-platform compilation issues (for those third-party libs offered on disparate platforms). Having to embed compiled code into a package you wish to deploy on something like PyPI in a cross-platform-friendly way is a pain; one of my most irritating points about Python packages using SWIG or underlying explicit C code is their general unavailability cross-platform. So consider this if you are working with cross-platform available third-party libraries and developing a Python solution around them.
As a real-world example, consider PyGTK. This (I believe) uses SWIG to generate C code to interface to the GTK C calls. I used this for the briefest time only to find it a real pain to set up and use, with quirky odd errors if you didn't do things in the correct order on setup and just in general. It was such a frustrating experience, and when I looked at the interface definitions provided by GTK on the web I realized what a simple exercise it would be to write a translator from those interfaces to a Python ctypes interface. A project called PyGGI was born, and in ONE day I was able to rewrite PyGTK to be a much more functional and useful product that matches cleanly to the GTK C-object-oriented interfaces. And it required no compilation of C code, making it cross-platform friendly. (I was actually after interfacing to webkitgtk, which isn't so cross-platform.) I can also easily deploy PyGGI to any platform supporting GTK. | 6 | 61 | 0 | In python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s). What are the performance metrics of the two? | Python: SWIG vs ctypes | 0.019997 | 0 | 0 | 25,826 |
136,069 | 2008-09-25T20:59:00.000 | 2 | 1 | 0 | 0 | python,frameworks | 138,888 | 8 | false | 1 | 0 | Go for a framework. Basic stuff like session handling is a nightmare if you don't use one, because Python is not web-specialized like PHP.
If you think django is too much, you can try a lighter one like the very small but still handy web.py. | 4 | 19 | 0 | I am planning on porting a PHP application over to Python. The application is mostly about data collection and processing. The main application runs as a stand alone command line application. There is a web interface to the application which is basically a very light weight reporting interface.
I did not use a framework in the PHP version, but being new to Python, I am wondering if it would be advantageous to use something like Django or at the very least Genshi. The caveat is I do not want my application distribution to be overwhelmed by the framework parts I would need to distribute with the application.
Is using only the cgi import in Python the best way to go in this circumstance? I would tend to think a framework is too much overhead, but perhaps I'm not thinking in a very "python" way about them. What suggestions do you have in this scenario? | Python web development - with or without a framework | 0.049958 | 0 | 0 | 7,200 |
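As a rough yardstick for the "no framework" option discussed above: a complete WSGI application needs only the standard library, and everything beyond this (sessions, routing, templating) is what a framework would otherwise provide. A minimal sketch, exercised here with a synthetic request rather than a real server:

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    """A complete WSGI application: the no-framework baseline."""
    body = b"Hello from plain WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app without a real server, using a synthetic environ.
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status

response = b"".join(app(environ, start_response))
print(captured["status"], response)  # 200 OK b'Hello from plain WSGI'
```

For a light reporting interface this may be all that's needed; the moment sessions or templating enter the picture, the framework answers above start to look attractive.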
136,069 | 2008-09-25T20:59:00.000 | 0 | 1 | 0 | 0 | python,frameworks | 136,166 | 8 | false | 1 | 0 | It depends on the way you are going to distribute your application.
If it will only be used internally, go for django. It's a joy to work with it.
However, Django really falls short at the distribution task; Django applications are a pain to set up. | 4 | 19 | 0 | I am planning on porting a PHP application over to Python. The application is mostly about data collection and processing. The main application runs as a stand alone command line application. There is a web interface to the application which is basically a very light weight reporting interface.
I did not use a framework in the PHP version, but being new to Python, I am wondering if it would be advantageous to use something like Django or at the very least Genshi. The caveat is I do not want my application distribution to be overwhelmed by the framework parts I would need to distribute with the application.
Is using only the cgi import in Python the best way to go in this circumstance? I would tend to think a framework is too much overhead, but perhaps I'm not thinking in a very "python" way about them. What suggestions do you have in this scenario? | Python web development - with or without a framework | 0 | 0 | 0 | 7,200 |
136,069 | 2008-09-25T20:59:00.000 | 2 | 1 | 0 | 0 | python,frameworks | 136,683 | 8 | false | 1 | 0 | Django makes it possible to whip out a website rapidly, that's for sure. You don't need to be a Python master to use it, and since it's very pythonic in its design, and there is not really any "magic" going on, it will help you learn Python along the way.
Start with the examples, check out some django screencasts from TwiD and you'll be on your way.
Start slow, tweaking the admin, and playing with it via shell is the way to start. Once you have a handle on the ORM and get how things work, start building the real stuff!
The framework isn't going to cause any performance problems, like S. Lott said, it's code you don't have to maintain, and that's the best kind. | 4 | 19 | 0 | I am planning on porting a PHP application over to Python. The application is mostly about data collection and processing. The main application runs as a stand alone command line application. There is a web interface to the application which is basically a very light weight reporting interface.
I did not use a framework in the PHP version, but being new to Python, I am wondering if it would be advantageous to use something like Django or at the very least Genshi. The caveat is I do not want my application distribution to be overwhelmed by the framework parts I would need to distribute with the application.
Is using only the cgi import in Python the best way to go in this circumstance? I would tend to think a framework is too much overhead, but perhaps I'm not thinking in a very "python" way about them. What suggestions do you have in this scenario? | Python web development - with or without a framework | 0.049958 | 0 | 0 | 7,200 |
136,069 | 2008-09-25T20:59:00.000 | 4 | 1 | 0 | 0 | python,frameworks | 136,152 | 8 | false | 1 | 0 | Depends on the size of the project. If you had only a few previous PHP scripts which called your standalone application, then I'd probably go for a CGI app.
If you have use for databases, url rewriting, templating, user management and such, then using a framework is a good idea.
And of course, before you port it, consider if it's worth it just to switch the language or if there are specific Python features you need.
Good luck! | 4 | 19 | 0 | I am planning on porting a PHP application over to Python. The application is mostly about data collection and processing. The main application runs as a stand alone command line application. There is a web interface to the application which is basically a very light weight reporting interface.
I did not use a framework in the PHP version, but being new to Python, I am wondering if it would be advantageous to use something like Django or at the very least Genshi. The caveat is I do not want my application distribution to be overwhelmed by the framework parts I would need to distribute with the application.
Is using only the cgi import in Python the best way to go in this circumstance? I would tend to think a framework is too much overhead, but perhaps I'm not thinking in a very "python" way about them. What suggestions do you have in this scenario? | Python web development - with or without a framework | 0.099668 | 0 | 0 | 7,200 |
136,207 | 2008-09-25T21:18:00.000 | 0 | 1 | 1 | 0 | python,code-reuse | 48,569,865 | 3 | false | 0 | 0 | I store it all offline in a logical directory structure, with commonly used modules grouped as utilities. This means it's easier to control which versions I publish, and manage. I also automate the build process to interpret the logical directory structure. | 2 | 9 | 0 | I write tons of python scripts, and I find myself reusing lots code that I've written for other projects. My solution has been to make sure the code is separated into logical modules/packages (this one's a given). I then make them setuptools-aware and publish them on PyPI. This allows my other scripts to always have the most up-to-date code, I get a warm fuzzy feeling because I'm not repeating myself, and my development, in general, is made less complicated. I also feel good that there MAY be someone out there that finds my code handy for something they're working on, but it's mainly for selfish reasons :)
To all the pythonistas, how do you handle this? Do you use PyPI or setuptools (easy_install)? or something else? | How do you manage your custom modules? | 0 | 0 | 0 | 344 |
136,207 | 2008-09-25T21:18:00.000 | 1 | 1 | 1 | 0 | python,code-reuse | 137,291 | 3 | false | 0 | 0 | What kind of modules are we talking about here? If you're planning on distributing your projects to other python developers, setuptools is great. But it's usually not a very good way to distribute apps to end users. Your best bet in the latter case is to tailor your packaging to the platforms you're distributing it for. Sure, it's a pain, but it makes life for end users far easier.
For example, in my Debian system, I usually don't use easy_install because it is a little bit more difficult to get eggs to work well with the package manager. In OS X and windows, you'd probably want to package everything up using py2app and py2exe respectively. This makes life for the end user better. After all, they shouldn't know or care what language your scripts are written in. They just need them to install. | 2 | 9 | 0 | I write tons of python scripts, and I find myself reusing lots code that I've written for other projects. My solution has been to make sure the code is separated into logical modules/packages (this one's a given). I then make them setuptools-aware and publish them on PyPI. This allows my other scripts to always have the most up-to-date code, I get a warm fuzzy feeling because I'm not repeating myself, and my development, in general, is made less complicated. I also feel good that there MAY be someone out there that finds my code handy for something they're working on, but it's mainly for selfish reasons :)
To all the pythonistas, how do you handle this? Do you use PyPI or setuptools (easy_install)? or something else? | How do you manage your custom modules? | 0.066568 | 0 | 0 | 344 |
136,734 | 2008-09-25T22:58:00.000 | 0 | 1 | 0 | 0 | python,keypress | 66,835,510 | 11 | false | 0 | 0 | You can use the pyautogui module, which can automatically move the mouse and press keys. It can also be used for some (very basic) GUI automation.
You can do the following:
import pyautogui
pyautogui.press('A')  # presses the 'A' key
If you want to do it 1000 times, you can use a loop: for _ in range(1000): pyautogui.press('A')
Hope this is helpful :) | 1 | 36 | 0 | Is it possible to make it appear to a system that a key was pressed? For example, I need to make the A key be pressed thousands of times, and it is much too time-consuming to do it manually. I would like to write something to do it for me, and the only thing I know well enough is Python.
A better way to put it: I need to emulate a key press, i.e., not capture a key press.
More Info (as requested):
I am running windows XP and need to send the keys to another application. | Key Presses in Python | 0 | 0 | 0 | 186,710 |
138,521 | 2008-09-26T09:51:00.000 | 3 | 0 | 1 | 0 | python,c,linker,compilation | 138,554 | 10 | false | 0 | 0 | Jython has a compiler targeting JVM bytecode. The bytecode is fully dynamic, just like the Python language itself! Very cool. (Yes, as Greg Hewgill's answer alludes, the bytecode does use the Jython runtime, and so the Jython jar file must be distributed with your app.) | 2 | 147 | 0 | How feasible would it be to compile Python (possibly via an intermediate C representation) into machine code?
Presumably it would need to link to a Python runtime library, and any parts of the Python standard library which were Python themselves would need to be compiled (and linked in) too.
Also, you would need to bundle the Python interpreter if you wanted to do dynamic evaluation of expressions, but perhaps a subset of Python that didn't allow this would still be useful.
Would it provide any speed and/or memory usage advantages? Presumably the startup time of the Python interpreter would be eliminated (although shared libraries would still need loading at startup). | Is it feasible to compile Python to machine code? | 0.059928 | 0 | 0 | 102,301 |
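For context on the question above: CPython already compiles source to an intermediate form (bytecode) before interpreting it; compiling to machine code would be one step further. The built-in compile() exposes that first step:

```python
import dis

# Compile a Python expression to a code object (bytecode), then run it.
code = compile("x * x + 1", "<example>", "eval")
print(eval(code, {"x": 6}))  # 37

# Show the bytecode instructions the interpreter will execute.
dis.dis(code)
```

The question is essentially whether this bytecode stage could be replaced (or followed) by native code generation, and what runtime support would still need to be linked in.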
138,521 | 2008-09-26T09:51:00.000 | 2 | 0 | 1 | 0 | python,c,linker,compilation | 138,605 | 10 | false | 0 | 0 | The answer is "Yes, it is possible". You could take Python code and attempt to compile it into the equivalent C code using the CPython API. In fact, there used to be a Python2C project that did just that, but I haven't heard about it in many years (back in the Python 1.5 days is when I last saw it.)
You could attempt to translate the Python code into native C as much as possible, and fall back to the CPython API when you need actual Python features. I've been toying with that idea myself for the last month or two. It is, however, an awful lot of work, and an enormous number of Python features are very hard to translate into C: nested functions, generators, anything but simple classes with simple methods, anything involving modifying module globals from outside the module, etc.
Presumably it would need to link to a Python runtime library, and any parts of the Python standard library which were Python themselves would need to be compiled (and linked in) too.
Also, you would need to bundle the Python interpreter if you wanted to do dynamic evaluation of expressions, but perhaps a subset of Python that didn't allow this would still be useful.
Would it provide any speed and/or memory usage advantages? Presumably the startup time of the Python interpreter would be eliminated (although shared libraries would still need loading at startup). | Is it feasible to compile Python to machine code? | 0.039979 | 0 | 0 | 102,301 |
139,005 | 2008-09-26T12:05:00.000 | 1 | 0 | 0 | 0 | python,pyqt | 139,056 | 3 | true | 0 | 1 | It will come down to using QScrollArea; it is a widget that implements showing something that is larger than the available space. You will not need to use QScrollBar directly. I don't have a PyQt example, but there is a C++ example in the Qt distribution called the "Image Viewer". The object hierarchy will still be the same. | 1 | 0 | 0 | Dear Stack Overflow, can you show me an example of how to use a QScrollBar? Thanks. | PyQt - QScrollBar | 1.2 | 0 | 0 | 3,106 |
139,212 | 2008-09-26T12:45:00.000 | 0 | 1 | 0 | 0 | python,web-services,cookies,soappy,zsi | 148,379 | 2 | false | 0 | 0 | Additionally, the Binding class also allows any header to be added. So I figured out that I can just add a "Cookie" header for each cookie I need to add. This worked well for the code generated by wsdl2py, just adding the cookies right after the binding is formed in the SOAP client class. Adding a parameter to the generated class to take in the cookies as a dictionary is easy and then they can easily be iterated through and added. | 1 | 2 | 0 | I've added cookie support to SOAPpy by overriding HTTPTransport. I need functionality beyond that of SOAPpy, so I was planning on moving to ZSI, but I can't figure out how to put the Cookies on the ZSI posts made to the service. Without these cookies, the server will think it is an unauthorized request and it will fail.
How can I add cookies from a Python CookieJar to ZSI requests? | Adding Cookie to ZSI Posts | 0 | 0 | 1 | 526 |
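The header-building step described in the answer above can be sketched as follows; the helper name is hypothetical, and actually attaching the header to the ZSI binding is deliberately left out, since that depends on the generated client code:

```python
def cookie_header_value(cookies):
    """Join a dict of cookies into a single Cookie header value.

    Iterating a dict of name/value pairs like this matches the
    "iterate through and add" approach described in the answer.
    Sorting just makes the output deterministic.
    """
    return "; ".join("%s=%s" % pair for pair in sorted(cookies.items()))
```

For a CookieJar, the same idea applies: iterate the jar's cookies, build `name=value` pairs, and add one "Cookie" header per request.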
140,026 | 2008-09-26T14:56:00.000 | 1 | 0 | 0 | 0 | python,database,algorithm,dsl | 141,872 | 9 | false | 0 | 0 | "implement a Domain Specific Language"
"nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime"
I want a DSL but I don't want Python to be that DSL. Okay. How will you execute this DSL? What runtime is acceptable if not Python?
What if I have a C program that happens to embed the Python interpreter? Is that acceptable?
And -- if Python is not an acceptable runtime -- why does this have a Python tag? | 6 | 5 | 0 | I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server.
Simply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime.
What I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal.
The interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together.
I realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable.
Edit: Expanded description to clear up some misconceptions. | Writing a Domain Specific Language for selecting rows from a table | 0.022219 | 1 | 0 | 2,773 |
140,026 | 2008-09-26T14:56:00.000 | 0 | 0 | 0 | 0 | python,database,algorithm,dsl | 140,066 | 9 | false | 0 | 0 | Why not create a language that, when it "compiles", generates SQL or whatever query language your datastore requires?
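As a toy illustration of such a compile-to-SQL step (the mini-language here is invented for the example; a real version would also have to validate field names against the schema before emitting anything):

```python
def compile_filter(dsl):
    """Compile a tiny 'field op value [and ...]' language into a WHERE clause.

    Supports only =, <, > comparisons joined by 'and'; values are integers
    or single-quoted strings. Purely a sketch of the idea.
    """
    clauses = []
    for part in dsl.split(" and "):
        field, op, value = part.split(None, 2)
        if op not in ("=", "<", ">"):
            raise ValueError("unsupported operator: %r" % op)
        clauses.append("%s %s %s" % (field, op, value))
    return "WHERE " + " AND ".join(clauses)
```

The point is that the server only ever executes the generated query, never arbitrary code from the DSL author.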
You would basically be creating an abstraction over your persistence layer. | 6 | 5 | 0 | I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server.
Simply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime.
What I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal.
The interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together.
I realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable.
Edit: Expanded description to clear up some misconceptions. | Writing a Domain Specific Language for selecting rows from a table | 0 | 1 | 0 | 2,773 |
140,026 | 2008-09-26T14:56:00.000 | 0 | 0 | 0 | 0 | python,database,algorithm,dsl | 140,304 | 9 | false | 0 | 0 | It really sounds like SQL, but perhaps it's worth trying SQLite if you want to keep it simple? | 6 | 5 | 0 | I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server.
Simply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime.
What I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal.
The interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together.
I realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable.
Edit: Expanded description to clear up some misconceptions. | Writing a Domain Specific Language for selecting rows from a table | 0 | 1 | 0 | 2,773 |
140,026 | 2008-09-26T14:56:00.000 | 0 | 0 | 0 | 0 | python,database,algorithm,dsl | 140,091 | 9 | false | 0 | 0 | You mentioned Python. Why not use Python? If someone can "type in" an expression in your DSL, they can type in Python.
You'll need some rules on structure of the expression, but that's a lot easier than implementing something new. | 6 | 5 | 0 | I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server.
Simply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime.
What I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal.
The interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together.
I realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable.
Edit: Expanded description to clear up some misconceptions. | Writing a Domain Specific Language for selecting rows from a table | 0 | 1 | 0 | 2,773 |
140,026 | 2008-09-26T14:56:00.000 | 0 | 0 | 0 | 0 | python,database,algorithm,dsl | 140,228 | 9 | false | 0 | 0 | You said nobody is going to want to install a server that downloads and executes arbitrary code at runtime. However, that is exactly what your DSL will do (eventually) so there probably isn't that much of a difference. Unless you're doing something very specific with the data then I don't think a DSL will buy you that much and it will frustrate the users who are already versed in SQL. Don't underestimate the size of the task you'll be taking on.
To answer your question however, you will need to come up with a grammar for your language, something to parse the text and walk the tree, emitting code or calling an API that you've written (which is why my comment that you're still going to have to ship some code).
There are plenty of educational texts on grammars for mathematical expressions you can refer to on the net, that's fairly straight forward. You may have a parser generator tool like ANTLR or Yacc you can use to help you generate the parser (or use a language like Lisp/Scheme and marry the two up). Coming up with a reasonable SQL grammar won't be easy. But google 'BNF SQL' and see what you come up with.
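Before reaching for ANTLR or Yacc, a grammar this small can also be parsed by hand. Here is a toy sketch (the grammar and AST shape are invented for illustration, and operator precedence between 'and'/'or' is ignored):

```python
def parse(text):
    """Parse 'field op number' terms joined by and/or into a nested tuple AST.

    Grammar (invented): expr := term (('and' | 'or') term)*
                        term := IDENT OP INT
    """
    tokens = text.split()
    pos = 0

    def term():
        nonlocal pos
        field, op, value = tokens[pos:pos + 3]
        pos += 3
        return (op, field, int(value))

    node = term()
    while pos < len(tokens):
        connective = tokens[pos]  # 'and' or 'or'
        pos += 1
        node = (connective, node, term())  # left-associative, no precedence
    return node
```

Walking the resulting tuples to emit queries or calls into your own API is then straightforward, which is the "walk the tree, emitting code or calling an API" step described above.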
Best of luck. | 6 | 5 | 0 | I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server.
Simply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime.
What I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal.
The interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together.
I realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable.
Edit: Expanded description to clear up some misconceptions. | Writing a Domain Specific Language for selecting rows from a table | 0 | 1 | 0 | 2,773 |
140,026 | 2008-09-26T14:56:00.000 | 1 | 0 | 0 | 0 | python,database,algorithm,dsl | 140,275 | 9 | false | 0 | 0 | I think we're going to need a bit more information here. Let me know if any of the following is based on incorrect assumptions.
First of all, as you pointed out yourself, there already exists a DSL for selecting rows from arbitrary tables-- it is called "SQL". Since you don't want to reinvent SQL, I'm assuming that you only need to query from a single table with a fixed format.
If this is the case, you probably don't need to implement a DSL (although that's certainly one way to go); it may be easier, if you are used to Object Orientation, to create a Filter object.
More specifically, a "Filter" collection that would hold one or more SelectionCriterion objects. You can implement these to inherit from one or more base classes representing types of selections (Range, LessThan, ExactMatch, Like, etc.) Once these base classes are in place, you can create column-specific inherited versions which are appropriate to that column. Finally, depending on the complexity of the queries you want to support, you'll want to implement some kind of connective glue to handle AND and OR and NOT linkages between the various criteria.
If you feel like it, you can create a simple GUI to load up the collection; I'd look at the filtering in Excel as a model, if you don't have anything else in mind.
Finally, it should be trivial to convert the contents of this Collection to the corresponding SQL, and pass that to the database.
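A minimal sketch of that design, reusing the criterion names suggested above (everything here is illustrative, not a finished API; parameters are kept separate from the SQL text to avoid injection):

```python
class Criterion:
    """Base class for a single column condition."""
    def __init__(self, column, value):
        self.column, self.value = column, value

class LessThan(Criterion):
    def to_sql(self):
        return "%s < ?" % self.column, self.value

class ExactMatch(Criterion):
    def to_sql(self):
        return "%s = ?" % self.column, self.value

class Filter:
    """A collection of criteria combined with AND (the 'connective glue'
    for OR/NOT would slot in here)."""
    def __init__(self, *criteria):
        self.criteria = criteria

    def to_sql(self):
        parts = [c.to_sql() for c in self.criteria]
        where = " AND ".join(sql for sql, _ in parts)
        params = tuple(value for _, value in parts)
        return "WHERE " + where, params
```

Column-specific subclasses (Range, Like, etc.) follow the same pattern.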
However: if what you are after is simplicity, and your users understand SQL, you could simply ask them to type in the contents of a WHERE clause, and programmatically build up the rest of the query. From a security perspective, if your code has control over the columns selected and the FROM clause, and your database permissions are set properly, and you do some sanity checking on the string coming in from the users, this would be a relatively safe option. | 6 | 5 | 0 | I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server.
Simply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime.
What I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal.
The interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together.
I realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable.
Edit: Expanded description to clear up some misconceptions. | Writing a Domain Specific Language for selecting rows from a table | 0.022219 | 1 | 0 | 2,773 |
140,758 | 2008-09-26T17:20:00.000 | 2 | 0 | 1 | 0 | java,python,file-traversal | 140,822 | 9 | false | 1 | 0 | Use os.path.walk if you want subdirectories as well.
walk(top, func, arg)
Directory tree walk with callback function.
For each directory in the directory tree rooted at top (including top
itself, but excluding '.' and '..'), call func(arg, dirname, fnames).
dirname is the name of the directory, and fnames a list of the names of
the files and subdirectories in dirname (excluding '.' and '..'). func
may modify the fnames list in-place (e.g. via del or slice assignment),
and walk will only recurse into the subdirectories whose names remain in
fnames; this can be used to implement a filter, or to impose a specific
order of visiting. No semantics are defined for, or required of, arg,
beyond that arg is always passed to func. It can be used, e.g., to pass
a filename pattern, or a mutable object designed to accumulate
statistics. Passing None for arg is common. | 2 | 10 | 0 | In Java you can do File.listFiles() and receive all of the files in a directory. You can then easily recurse through directory trees.
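For reference, in modern Python the generator-based os.walk covers the same ground without a callback; a small sketch:

```python
import os

def list_all_files(root):
    """Recursively collect every file path under root, roughly what a
    recursive File.listFiles() gives you in Java.

    os.walk yields (dirpath, dirnames, filenames) per directory; pruning
    dirnames in place skips subtrees, like the fnames filtering above.
    """
    paths = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            paths.append(os.path.join(dirpath, name))
    return paths
```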
Is there an analogous way to do this in Python? | Looking for File Traversal Functions in Python that are Like Java's | 0.044415 | 0 | 0 | 9,026 |
140,758 | 2008-09-26T17:20:00.000 | 2 | 0 | 1 | 0 | java,python,file-traversal | 141,277 | 9 | false | 1 | 0 | I'd recommend against os.path.walk as it is being removed in Python 3.0. os.walk is simpler, anyway, or at least I find it simpler. | 2 | 10 | 0 | In Java you can do File.listFiles() and receive all of the files in a directory. You can then easily recurse through directory trees.
Is there an analogous way to do this in Python? | Looking for File Traversal Functions in Python that are Like Java's | 0.044415 | 0 | 0 | 9,026 |
141,647 | 2008-09-26T20:07:00.000 | 0 | 0 | 0 | 0 | python,sockets,mobile | 142,502 | 4 | false | 0 | 1 | If the code is working in the interactive interpreter when typed, but not when run directly then I would suggest seeing if your code has reached a deadlock on the socket, for example both ends are waiting for data from the other. When typing into the interactive interpreter there is a longer delay between the execution of each line on code. | 3 | 1 | 0 | I've written code for communication between my phone and comp thru TCP sockets. When I type out the code line by line in the interactive console it works fine. However, when i try running the script directly through filebrowser.py it just wont work. I'm using Nokia N95. Is there anyway I can run this script directly without using filebrowser.py? | Socket programming for mobile phones in Python | 0 | 0 | 0 | 1,588 |
141,647 | 2008-09-26T20:07:00.000 | 0 | 0 | 0 | 0 | python,sockets,mobile | 215,001 | 4 | false | 0 | 1 | Don't you have the "Run script" menu in your interactive Python shell? | 3 | 1 | 0 | I've written code for communication between my phone and comp thru TCP sockets. When I type out the code line by line in the interactive console it works fine. However, when i try running the script directly through filebrowser.py it just wont work. I'm using Nokia N95. Is there anyway I can run this script directly without using filebrowser.py? | Socket programming for mobile phones in Python | 0 | 0 | 0 | 1,588 |
141,647 | 2008-09-26T20:07:00.000 | 0 | 0 | 0 | 0 | python,sockets,mobile | 142,786 | 4 | false | 0 | 1 | Well, it doesn't appear to be a deadlock situation. It throws an error saying remote server refused connection. However, like I said before, if i type the very same code into the interactive interpreter it works just fine. I'm wondering if the error is because the script is run through filebrowser.py? | 3 | 1 | 0 | I've written code for communication between my phone and comp thru TCP sockets. When I type out the code line by line in the interactive console it works fine. However, when i try running the script directly through filebrowser.py it just wont work. I'm using Nokia N95. Is there anyway I can run this script directly without using filebrowser.py? | Socket programming for mobile phones in Python | 0 | 0 | 0 | 1,588 |
142,545 | 2008-09-26T23:59:00.000 | 28 | 0 | 1 | 0 | python,module,global | 15,035,172 | 12 | false | 0 | 0 | I believe that there are plenty of circumstances in which it does make sense and it simplifies programming to have some globals that are known across several (tightly coupled) modules. In this spirit, I would like to elaborate a bit on the idea of having a module of globals which is imported by those modules which need to reference them.
When there is only one such module, I name it "g". In it, I assign default values for every variable I intend to treat as global. In each module that uses any of them, I do not use "from g import var", as this only results in a local variable which is initialized from g only at the time of the import. I make most references in the form g.var, and the "g." serves as a constant reminder that I am dealing with a variable that is potentially accessible to other modules.
If the value of such a global variable is to be used frequently in some function in a module, then that function can make a local copy: var = g.var. However, it is important to realize that assignments to var are local, and global g.var cannot be updated without referencing g.var explicitly in an assignment.
Note that you can also have multiple such globals modules shared by different subsets of your modules to keep things a little more tightly controlled. The reason I use short names for my globals modules is to avoid cluttering up the code too much with occurrences of them. With only a little experience, they become mnemonic enough with only 1 or 2 characters.
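The pattern can be demonstrated without even writing a g.py file, by registering a module object by hand (the names here are illustrative; in practice g is simply a module file with the defaults assigned in it):

```python
import sys
import types

# Stand-in for a file g.py containing default values:
g = types.ModuleType("g")
g.verbose = False
sys.modules["g"] = g          # now any module can do: import g

def some_other_module_function():
    import g                  # same module object everywhere
    return g.verbose

g.verbose = True              # an update via g.var in one module...
print(some_other_module_function())  # ...is seen by every importer: True
```

This also shows why `from g import verbose` would be wrong: it would snapshot the value at import time instead of always reading `g.verbose`.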
It is still possible to make an assignment to, say, g.x when x was not already defined in g, and a different module can then access g.x. However, even though the interpreter permits it, this approach is not so transparent, and I do avoid it. There is still the possibility of accidentally creating a new variable in g as a result of a typo in the variable name for an assignment. Sometimes an examination of dir(g) is useful to discover any surprise names that may have arisen by such accident. | 2 | 127 | 0 | The __debug__ variable is handy in part because it affects every module. If I want to create another variable that works the same way, how would I do it?
The variable (let's be original and call it 'foo') doesn't have to be truly global, in the sense that if I change foo in one module, it is updated in others. I'd be fine if I could set foo before importing other modules and then they would see the same value for it. | How to make a cross-module variable? | 1 | 0 | 0 | 177,508 |
142,545 | 2008-09-26T23:59:00.000 | 5 | 0 | 1 | 0 | python,module,global | 3,911,089 | 12 | false | 0 | 0 | You can already do this with module-level variables. Modules are the same no matter what module they're being imported from. So you can make the variable a module-level variable in whatever module it makes sense to put it in, and access it or assign to it from other modules. It would be better to call a function to set the variable's value, or to make it a property of some singleton object. That way if you end up needing to run some code when the variable's changed, you can do so without breaking your module's external interface.
It's not usually a great way to do things — using globals seldom is — but I think this is the cleanest way to do it. | 2 | 127 | 0 | The __debug__ variable is handy in part because it affects every module. If I want to create another variable that works the same way, how would I do it?
The variable (let's be original and call it 'foo') doesn't have to be truly global, in the sense that if I change foo in one module, it is updated in others. I'd be fine if I could set foo before importing other modules and then they would see the same value for it. | How to make a cross-module variable? | 0.083141 | 0 | 0 | 177,508 |
142,812 | 2008-09-27T02:47:00.000 | 7 | 0 | 1 | 0 | python,bit-fields,bitarray | 143,643 | 12 | false | 0 | 0 | I use the binary bitwise operators ~, &, |, ^, >>, and <<. They work really well and are implemented in the underlying C, usually mapping directly onto hardware instructions. | 1 | 54 | 0 | I need a compact representation of an array of booleans; does Python have a builtin bitfield type or will I need to find an alternate solution? | Does Python have a bitfield type? | 1 | 0 | 0 | 62,846
142,844 | 2008-09-27T03:02:00.000 | 2 | 0 | 0 | 1 | python,windows,drag-and-drop,windows-explorer | 21,840,490 | 8 | false | 0 | 0 | Create a shortcut of the file. In case you don't have python open .py files by default, go into the properties of the shortcut and edit the target of the shortcut to include the python version you're using. For example:
Target: C:\Python26\python.exe <shortcut target path>
I'm posting this because I didn't want to edit the Registry and the .bat workaround didn't work for me. | 3 | 59 | 0 | I would like to drag and drop my data file onto a Python script and have it process the file and generate output. The Python script accepts the name of the data file as a command-line parameter, but Windows Explorer doesn't allow the script to be a drop target.
Is there some kind of configuration that needs to be done somewhere for this work? | Drag and drop onto Python script in Windows Explorer | 0.049958 | 0 | 0 | 59,279 |
142,844 | 2008-09-27T03:02:00.000 | 2 | 0 | 0 | 1 | python,windows,drag-and-drop,windows-explorer | 53,586,688 | 8 | false | 0 | 0 | 1). create shortcut of .py
2). right click -> properties
3). prefix "Target:" with "python" so it runs the .py as an argument into the python command
or
1). create a .bat
2). python some.py %*
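The receiving script (the hypothetical some.py above) then just reads the dropped file paths from sys.argv, which the %* forwards:

```python
import sys

def dropped_files(argv):
    """Return the dropped file paths; real code would process each one.
    argv[0] is the script name, so the dropped files start at argv[1]."""
    return argv[1:]

if __name__ == "__main__":
    for path in dropped_files(sys.argv):
        print("processing", path)
```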
These shortcut versions are the simplest for what I'm doing;
otherwise I'd convert it to a .exe, but would rather just use Java or C/C++. | 3 | 59 | 0 | I would like to drag and drop my data file onto a Python script and have it process the file and generate output. The Python script accepts the name of the data file as a command-line parameter, but Windows Explorer doesn't allow the script to be a drop target.
Is there some kind of configuration that needs to be done somewhere for this work? | Drag and drop onto Python script in Windows Explorer | 0.049958 | 0 | 0 | 59,279 |
142,844 | 2008-09-27T03:02:00.000 | 6 | 0 | 0 | 1 | python,windows,drag-and-drop,windows-explorer | 4,486,506 | 8 | false | 0 | 0 | Try using py2exe. Use py2exe to convert your python script into a windows executable. You should then be able to drag and drop input files to your script in Windows Explorer. You should also be able to create a shortcut on your desktop and drop input files onto it. And if your python script can take a file list you should be able to drag and drop multiple files on your script (or shortcut). | 3 | 59 | 0 | I would like to drag and drop my data file onto a Python script and have it process the file and generate output. The Python script accepts the name of the data file as a command-line parameter, but Windows Explorer doesn't allow the script to be a drop target.
Is there some kind of configuration that needs to be done somewhere for this work? | Drag and drop onto Python script in Windows Explorer | 1 | 0 | 0 | 59,279 |
143,515 | 2008-09-27T12:12:00.000 | 1 | 0 | 0 | 1 | java,python,c,ocsp | 143,996 | 3 | true | 1 | 0 | Have you checked pyOpenSSL? I am sure OpenSSL supports OCSP, and the Python binding may support it. | 1 | 3 | 0 | Going back to my previous question on OCSP, does anybody know of "reliable" OCSP libraries for Python, Java and C?
I need "client" OCSP functionality, as I'll be checking the status of Certs against an OCSP responder, so responder functionality is not that important.
Thanks | OCSP libraries for python / java / c? | 1.2 | 0 | 0 | 2,998 |
143,714 | 2008-09-27T14:39:00.000 | 1 | 0 | 1 | 0 | python,string,quotes,double-quotes | 143,726 | 9 | false | 0 | 0 | There are 3 ways you can quote strings in Python:
"string"
'string'
"""
string
string
"""
They all produce the same result. | 1 | 71 | 0 | In PHP, a string enclosed in "double quotes" will be parsed for variables to replace, whereas a string enclosed in 'single quotes' will not. In Python, does this also apply? | Is there any difference between "string" and 'string' in Python? | 0.022219 | 0 | 0 | 66,616
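The PHP contrast in the question can be checked directly; neither quote style interpolates variables in Python:

```python
name = "world"

# Unlike PHP, Python performs no variable substitution in either quote style:
assert "hello $name" == 'hello $name'

# Interpolation is always explicit, e.g. with an f-string (Python 3.6+):
assert f"hello {name}" == "hello world"
```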
144,448 | 2008-09-27T20:55:00.000 | 0 | 0 | 0 | 0 | python,postgresql,module | 1,579,851 | 6 | false | 0 | 0 | I use only psycopg2 and have had no problems with it. | 2 | 28 | 0 | I've seen a number of postgresql modules for python like pygresql, pypgsql, psyco. Most of them are Python DB API 2.0 compliant; some are not being actively developed anymore.
Which module do you recommend? Why? | Python PostgreSQL modules. Which is best? | 0 | 1 | 0 | 15,582 |
144,448 | 2008-09-27T20:55:00.000 | 0 | 0 | 0 | 0 | python,postgresql,module | 145,801 | 6 | false | 0 | 0 | Psycopg1 is known for better performance in heavily threaded environments (like web applications) than Psycopg2, although it is no longer maintained. Both are well written and rock solid; I'd choose one of the two depending on the use case. | 2 | 28 | 0 | I've seen a number of postgresql modules for python like pygresql, pypgsql, psyco. Most of them are Python DB API 2.0 compliant; some are not being actively developed anymore.
Which module do you recommend? Why? | Python PostgreSQL modules. Which is best? | 0 | 1 | 0 | 15,582 |
145,155 | 2008-09-28T03:39:00.000 | 2 | 0 | 0 | 0 | python,user-interface | 145,174 | 3 | false | 0 | 1 | In wxPython there's a plethora of ready-made list and tree controls (CustomTreeCtrl, TreeListCtrl, and others), a mixture of which you can use to create a simple explorer in minutes. The wxPython demo even has a few relevant examples (see the demo of MVCTree). | 1 | 1 | 0 | I am making a Python gui project that needs to duplicate the look of a Windows gui environment (ie Explorer). I have my own custom icons to draw but they should be selectable by the same methods as usual; click, ctrl-click, drag box etc. Are any of the gui toolkits going to help with this or will I have to implement it all myself. If there aren't any tools to help with this advice would be greatly appreciated.
edit I am not trying to recreate explorer, that would be madness. I simply want to be able to take icons and lay them out in a scrollable window. Any number of them may be selected at once. It would be great if there was something that could select/deselect them in the same (appearing at least) way that Windows does. Then all I would need is a list of all the selected icons. | Something like Explorer's icon grid view in a Python GUI | 0.132549 | 0 | 0 | 547 |
145,191 | 2008-09-28T04:06:00.000 | 1 | 1 | 1 | 0 | .net,performance,ironpython | 145,200 | 4 | false | 0 | 0 | Currently IronRuby is pretty slow in most regards. It's definitely slower than MRI (Matz' Ruby Implementation) overall, though in some places they're faster.
IronRuby does have the potential to be much faster, though I doubt they'll ever get near C# in terms of speed. In most cases it just doesn't matter. A database call will probably make up 90% of the overall duration of a web request, for example.
I suspect the team will go for language-completeness rather than performance first. This will allow you to run IronRuby & run most ruby programs when 1.0 ships, then they can improve perf as they go.
I suspect IronPython has a similar story. | 3 | 8 | 0 | I understand that IronPython is an implementation of Python on the .NET platform just like IronRuby is an implementation of Ruby and F# is more or less OCaml.
What I can't seem to grasp is whether these languages perform closer to their "ancestors" or closer to something like C# in terms of speed. For example, is IronPython somehow "compiled" down to the same bytecode used by C# and, therefore, will run just as fast? | Dynamic .NET language performance? | 0.049958 | 0 | 0 | 1,684 |
145,191 | 2008-09-28T04:06:00.000 | 9 | 1 | 1 | 0 | .net,performance,ironpython | 145,195 | 4 | true | 0 | 0 | IronPython and IronRuby are built on top of the DLR -- dynamic language runtime -- and are compiled to CIL (the bytecode used by .NET) on the fly. They're slower than C# but faaaaaaar faster than their non-.NET counterparts. There aren't any decent benchmarks out there, to my knowledge, but you'll see the difference. | 3 | 8 | 0 | I understand that IronPython is an implementation of Python on the .NET platform just like IronRuby is an implementation of Ruby and F# is more or less OCaml.
What I can't seem to grasp is whether these languages perform closer to their "ancestors" or closer to something like C# in terms of speed. For example, is IronPython somehow "compiled" down to the same bytecode used by C# and, therefore, will run just as fast? | Dynamic .NET language performance? | 1.2 | 0 | 0 | 1,684 |
145,191 | 2008-09-28T04:06:00.000 | 7 | 1 | 1 | 0 | .net,performance,ironpython | 145,300 | 4 | false | 0 | 0 | IronPython is actually the fastest Python implementation out there. For some definition of "fastest", at least: the startup overhead of the CLR, for example, is huge compared to CPython. Also, the optimizing compiler IronPython has, really only makes sense, when code is executed multiple times.
IronRuby has the potential to be as fast as IronPython, since many of the interesting features that make IronPython fast have been extracted into the Dynamic Language Runtime, on which both IronPython and IronRuby (and Managed JavaScript, Dynamic VB, IronScheme, VistaSmalltalk and others) are built.
In general, the speed of a language implementation is pretty much independent of the actual language features, and more dependent on the number of engineering man-years that go into it. IOW: dynamic vs. static doesn't matter, money does.
E.g., Common Lisp is a language that is even more dynamic than Ruby or Python, and yet there are Common Lisp compilers out there that can even give C a run for its money. Good Smalltalk implementations run as fast as Java (which is no surprise, since both major JVMs, Sun HotSpot and IBM J9, are actually just slightly modified Smalltalk VMs) or C++. In just the past 6 months, the major JavaScript implementations (Mozilla TraceMonkey, Apple SquirrelFish Extreme and the new kid on the block, Google V8) have made ginormous performance improvements, 10x and more, to bring JavaScript head-to-head with un-optimized C. | 3 | 8 | 0 | I understand that IronPython is an implementation of Python on the .NET platform just like IronRuby is an implementation of Ruby and F# is more or less OCaml.
What I can't seem to grasp is whether these languages perform closer to their "ancestors" or closer to something like C# in terms of speed. For example, is IronPython somehow "compiled" down to the same bytecode used by C# and, therefore, will run just as fast? | Dynamic .NET language performance? | 1 | 0 | 0 | 1,684 |
145,607 | 2008-09-28T10:12:00.000 | 0 | 0 | 1 | 0 | c#,python,diff | 1,937,776 | 11 | false | 0 | 0 | One method I've employed for a different functionality, to calculate how much data was new in a modified file, could perhaps work for you as well.
I have a diff/patch implementation C# that allows me to take two files, presumably old and new version of the same file, and calculate the "difference", but not in the usual sense of the word. Basically I calculate a set of operations that I can perform on the old version to update it to have the same contents as the new version.
To use this for the functionality initially described, to see how much data was new, I simply ran through the operations: every operation that copied from the old file verbatim had a 0-factor, and every operation that inserted new text (distributed as part of the patch, since it didn't occur in the old file) had a 1-factor. Every character was given this factor, which gave me essentially a long list of 0's and 1's.
All I then had to do was to tally up the 0's and 1's. In your case, with my implementation, a low number of 1's compared to 0's would mean the files are very similar.
This implementation would also handle cases where the modified file had inserted copies from the old file out of order, or even duplicates (ie. you copy a part from the start of the file and paste it near the bottom), since they would both be copies of the same original part from the old file.
I experimented with weighing copies, so that the first copy counted as 0, and subsequent copies of the same characters had progressively higher factors, in order to give a copy/paste operation some "new-factor", but I never finished it as the project was scrapped.
If you're interested, my diff/patch code is available from my Subversion repository. | 1 | 43 | 0 | I need an algorithm that can compare two text files and highlight their difference and ( even better!) can compute their difference in a meaningful way (like two similar files should have a similarity score higher than two dissimilar files, with the word "similar" defined in the normal terms). It sounds easy to implement, but it's not.
The implementation can be in c# or python.
Thanks. | Text difference algorithm | 0 | 0 | 0 | 19,086 |
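As a footnote to the answer above: a quick way to get exactly this kind of similarity score with nothing but the standard library is difflib's SequenceMatcher. This is a generic sketch, not the answerer's own diff/patch implementation:

```python
import difflib

def similarity(text_a, text_b):
    # Ratio of matching characters: 0.0 (nothing shared) to 1.0 (identical).
    return difflib.SequenceMatcher(None, text_a, text_b).ratio()

def show_diff(text_a, text_b):
    # Unified diff highlighting the line-by-line differences.
    return "\n".join(difflib.unified_diff(text_a.splitlines(),
                                          text_b.splitlines(),
                                          lineterm=""))
```

Similar files score close to 1.0 and dissimilar ones close to 0.0, which matches the "meaningful difference" the question asks for.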
145,894 | 2008-09-28T14:02:00.000 | 1 | 0 | 0 | 1 | macos,wxpython,wxwidgets | 25,845,284 | 4 | false | 0 | 1 | As of wxPython 2.9.2.0, wx.TaskBarIcon will now create a menubar icon on OS X instead, so long as you call SetIcon. | 1 | 6 | 0 | I could not find any pointers on how to create a menubar icon on OSX using wx. I originally thought that the wxTaskBarIcon class would do, but it actually creates an icon on the Dock. On Windows, wxTaskBarIcon creates a Systray icon and associated menu, and I would think that on mac osx it would create a menubar icon, I guess not.
147,650 | 2008-09-29T05:41:00.000 | 10 | 1 | 0 | 1 | python,eclipse,pylons,pydev,pyramid | 147,768 | 7 | true | 1 | 0 | Create a new launch configuration (Python Run)
Main tab
Use paster-script.py as main module (you can find it in the Scripts sub-directory in your python installation directory)
Don't forget to add the root folder of your application in the PYTHONPATH zone
Arguments
Set the base directory to the root folder also.
As Program Arguments use "serve development.ini" (or whatever you use to debug your app)
Common Tab
Check allocate console and launch in background | 4 | 11 | 0 | I have Eclipse setup with PyDev and love being able to debug my scripts/apps. I've just started playing around with Pylons and was wondering if there is a way to start up the paster server through Eclipse so I can debug my webapp? | Debug Pylons application through Eclipse | 1.2 | 0 | 0 | 5,139 |
147,650 | 2008-09-29T05:41:00.000 | 2 | 1 | 0 | 1 | python,eclipse,pylons,pydev,pyramid | 3,817,880 | 7 | false | 1 | 0 | I was able to get --reload working by changing the 'Working directory' in the arguments tab to not use the default (i.e. select 'Other'->File System->'Root of your Pylons' app, where development.ini is stored). | 4 | 11 | 0 | I have Eclipse setup with PyDev and love being able to debug my scripts/apps. I've just started playing around with Pylons and was wondering if there is a way to start up the paster server through Eclipse so I can debug my webapp?
147,650 | 2008-09-29T05:41:00.000 | 1 | 1 | 0 | 1 | python,eclipse,pylons,pydev,pyramid | 2,958,194 | 7 | false | 1 | 0 | On linux that will probably be /usr/bin/paster or /usr/local/bin/paster for paste script, and for arguments i have: serve ${workspace_loc}${project_path}/development.ini | 4 | 11 | 0 | I have Eclipse setup with PyDev and love being able to debug my scripts/apps. I've just started playing around with Pylons and was wondering if there is a way to start up the paster server through Eclipse so I can debug my webapp? | Debug Pylons application through Eclipse | 0.028564 | 0 | 0 | 5,139 |
147,650 | 2008-09-29T05:41:00.000 | 1 | 1 | 0 | 1 | python,eclipse,pylons,pydev,pyramid | 2,958,194 | 7 | false | 1 | 0 | On Linux that will probably be /usr/bin/paster or /usr/local/bin/paster for Paste Script, and for arguments I have: serve ${workspace_loc}${project_path}/development.ini | 4 | 11 | 0 | I have Eclipse setup with PyDev and love being able to debug my scripts/apps. I've just started playing around with Pylons and was wondering if there is a way to start up the paster server through Eclipse so I can debug my webapp?
150,284 | 2008-09-29T19:31:00.000 | 9 | 0 | 1 | 0 | python,pickle | 150,318 | 2 | false | 0 | 0 | __reduce_ex__ is what __reduce__ should have been but never became. __reduce_ex__ works like __reduce__ but the pickle protocol is passed. | 1 | 19 | 1 | I understand that these methods are for pickling/unpickling and have no relation to the reduce built-in function, but what's the difference between the 2 and why do we need both? | What is the difference between __reduce__ and __reduce_ex__? | 1 | 0 | 0 | 9,908 |
152,580 | 2008-09-30T11:00:00.000 | 70 | 1 | 1 | 0 | python,types | 152,592 | 14 | false | 0 | 0 | isinstance(o, str) will return True if o is an str or is of a type that inherits from str.
type(o) is str will return True if and only if o is a str. It will return False if o is of a type that inherits from str. | 2 | 1,624 | 0 | How do I check if an object is of a given type, or if it inherits from a given type?
How do I check if the object o is of type str? | What's the canonical way to check for type in Python? | 1 | 0 | 0 | 1,190,669 |
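To make the distinction in the answer above concrete, here is a small sketch with a made-up str subclass:

```python
class MyStr(str):
    # A type that inherits from str.
    pass

s = MyStr("hello")

# isinstance() accepts subclasses of str...
print(isinstance(s, str))   # True

# ...while an exact type test does not.
print(type(s) is str)       # False
print(type(s) is MyStr)     # True
```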
152,580 | 2008-09-30T11:00:00.000 | 7 | 1 | 1 | 0 | python,types | 153,032 | 14 | false | 0 | 0 | I think the cool thing about using a dynamic language like Python is you really shouldn't have to check something like that.
I would just call the required methods on your object and catch an AttributeError. Later on this will allow you to call your methods with other (seemingly unrelated) objects to accomplish different tasks, such as mocking an object for testing.
I've used this a lot when getting data off the web with urllib2.urlopen(), which returns a file-like object. This in turn can be passed to almost any method that reads from a file, because it implements the same read() method as a real file.
But I'm sure there is a time and place for using isinstance(), otherwise it probably wouldn't be there :) | 2 | 1,624 | 0 | How do I check if an object is of a given type, or if it inherits from a given type?
How do I check if the object o is of type str? | What's the canonical way to check for type in Python? | 1 | 0 | 0 | 1,190,669 |
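The "call the method and catch AttributeError" style described in the answer above looks roughly like this; the function and argument names are made up for illustration:

```python
import io

def read_anything(source):
    # Works with any object exposing a file-like read() method;
    # returns None when the object has no such method.
    try:
        return source.read()
    except AttributeError:
        return None

print(read_anything(io.StringIO("some data")))  # prints: some data
print(read_anything(42))                        # prints: None (no read())
```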
153,491 | 2008-09-30T15:13:00.000 | 15 | 0 | 0 | 0 | java,python,jython,code-translation | 155,385 | 4 | false | 1 | 0 | Actually, this may or may not be much help but you could write a script which created a Java class for each Python class, including method stubs, placing the Python implementation of the method inside the Javadoc
In fact, this is probably pretty easy to knock up in Python.
I worked for a company which undertook a port to Java of a huge Smalltalk (similar-ish to Python) system and this is exactly what they did. Filling in the methods was manual but invaluable, because it got you to really think about what was going on. I doubt that a brute-force method would result in nice code.
Here's another possibility: can you convert your Python to Jython more easily? Jython is just Python for the JVM. It may be possible to use a Java decompiler (e.g. JAD) to then convert the bytecode back into Java code (or you may just wish to run on a JVM). I'm not sure about this however, perhaps someone else would have a better idea. | 4 | 33 | 0 | Is there a tool out there that can automatically convert Python to Java?
Can Jython do this? | Automated Python to Java translation | 1 | 0 | 0 | 120,780 |
153,491 | 2008-09-30T15:13:00.000 | 8 | 0 | 0 | 0 | java,python,jython,code-translation | 153,535 | 4 | true | 1 | 0 | It may not be an easy problem.
Determining how to map classes defined in Python onto types in Java will be a big challenge because of the difference in type binding time (duck typing vs. compile-time binding).
Can Jython do this? | Automated Python to Java translation | 1.2 | 0 | 0 | 120,780 |
153,491 | 2008-09-30T15:13:00.000 | 1 | 0 | 0 | 0 | java,python,jython,code-translation | 153,530 | 4 | false | 1 | 0 | to clarify your question:
From Python Source code to Java source code? (I don't think so)
.. or from Python source code to Java Bytecode? (Jython does this under the hood) | 4 | 33 | 0 | Is there a tool out there that can automatically convert Python to Java?
Can Jython do this? | Automated Python to Java translation | 0.049958 | 0 | 0 | 120,780 |
153,491 | 2008-09-30T15:13:00.000 | 4 | 0 | 0 | 0 | java,python,jython,code-translation | 989,822 | 4 | false | 1 | 0 | Yes Jython does this, but it may or may not be what you want | 4 | 33 | 0 | Is there a tool out there that can automatically convert Python to Java?
Can Jython do this? | Automated Python to Java translation | 0.197375 | 0 | 0 | 120,780 |
153,773 | 2008-09-30T16:11:00.000 | 2 | 0 | 0 | 0 | python,post,request,header,pylons | 153,822 | 1 | true | 0 | 0 | Receiving data from a POST depends on the web browser sending data along. When the web browser receives a redirect, it does not resend that data along. One solution would be to URL encode the data you want to keep and use that with a GET. In the worst case, you could always add the data you want to keep to the session and pass it that way. | 1 | 4 | 0 | I'm trying to redirect/forward a Pylons request. The problem with using redirect_to is that form data gets dropped. I need to keep the POST form data intact as well as all request headers.
Is there a simple way to do this? | What is the preferred way to redirect a request in Pylons without losing form data? | 1.2 | 0 | 1 | 1,415 |
153,956 | 2008-09-30T16:50:00.000 | 0 | 0 | 0 | 0 | python,user-interface,wxpython,distribution,freeze | 153,999 | 7 | false | 0 | 1 | I've used py2Exe myself - it's really easy (at least for small apps). | 1 | 11 | 0 | I need to develop a small-medium sized desktop GUI application, preferably with Python as a language of choice because of time constraints.
What GUI library choices do I have which allow me to redistribute my application standalone, assuming that the users don't have a working Python installation and obviously don't have the GUI libraries I'm using either?
Also, how would I go about packaging everything up in binaries of reasonable size for each target OS? (my main targets are Windows and Mac OS X)
Addition:
I've been looking at WxPython, but I've found plenty of horror stories of packaging it with cx_freeze and getting 30mb+ binaries, and no real advice on how to actually do the packaging and how trustworthy it is. | Python GUI Application redistribution | 0 | 0 | 0 | 5,407 |
154,443 | 2008-09-30T18:55:00.000 | 11 | 0 | 1 | 0 | python | 154,566 | 10 | false | 0 | 0 | In 2.5, theres no way to suppress it, other than measures like not giving users write access to the directory.
In python 2.6 and 3.0 however, there may be a setting in the sys module called "dont_write_bytecode" that can be set to suppress this. This can also be set by passing the "-B" option, or setting the environment variable "PYTHONDONTWRITEBYTECODE" | 2 | 286 | 0 | Can I run the python interpreter without generating the compiled .pyc files? | How to avoid .pyc files? | 1 | 0 | 0 | 134,803 |
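The sys setting mentioned in the answer can also be flipped at runtime (Python 2.6+/3.x); a minimal sketch:

```python
import sys

# Suppress .pyc generation for everything imported after this point.
# Equivalent to running `python -B script.py` or setting the
# PYTHONDONTWRITEBYTECODE environment variable before launch.
sys.dont_write_bytecode = True

print(sys.dont_write_bytecode)  # True
```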
154,443 | 2008-09-30T18:55:00.000 | 0 | 0 | 1 | 0 | python | 154,467 | 10 | false | 0 | 0 | As far as I know python will compile all modules you "import". However python will NOT compile a python script run using: "python script.py" (it will however compile any modules that the script imports).
The real question is: why don't you want Python to compile the modules? You could probably automate a way of cleaning these up if they are getting in the way. | 2 | 286 | 0 | Can I run the python interpreter without generating the compiled .pyc files? | How to avoid .pyc files? | 0 | 0 | 0 | 134,803 |
155,029 | 2008-09-30T20:47:00.000 | 2 | 0 | 0 | 0 | python,sqlalchemy,kinterbasdb | 175,634 | 1 | false | 0 | 1 | I thought I posted my solution already...
Modifying both apps to run under WSGIApplicationGroup ${GLOBAL} in their httpd conf file
and patching sqlalchemy.databases.firebird.py to check if self.dbapi.initialized is True
before calling self.dbapi.init(... was the only way I could manage to get this scenario up and running.
The SQLAlchemy 0.4.7 patch:
diff -Naur SQLAlchemy-0.4.7/lib/sqlalchemy/databases/firebird.py SQLAlchemy-0.4.7.new/lib/sqlalchemy/databases/firebird.py
--- SQLAlchemy-0.4.7/lib/sqlalchemy/databases/firebird.py 2008-07-26 12:43:52.000000000 -0400
+++ SQLAlchemy-0.4.7.new/lib/sqlalchemy/databases/firebird.py 2008-10-01 10:51:22.000000000 -0400
@@ -291,7 +291,8 @@
global _initialized_kb
if not _initialized_kb and self.dbapi is not None:
_initialized_kb = True
- self.dbapi.init(type_conv=type_conv, concurrency_level=concurrency_level)
+ if not self.dbapi.initialized:
+ self.dbapi.init(type_conv=type_conv, concurrency_level=concurrency_level)
return ([], opts)
def create_execution_context(self, *args, **kwargs): | 1 | 1 | 0 | I'm trying to develop an app using turbogears and sqlalchemy.
There is already an existing app using kinterbasdb directly under mod_wsgi on the same server.
When both apps are used, neither seems to recognize that kinterbasdb is already initialized
Is there something non-obvious I am missing about using sqlalchemy and kinterbasdb in separate apps? In order to make sure only one instance of kinterbasdb gets initialized and both apps use that instance, does anyone have suggestions? | SQLAlchemy and kinterbasdb in separate apps under mod_wsgi | 0.379949 | 1 | 0 | 270 |
157,313 | 2008-10-01T12:20:00.000 | 2 | 0 | 0 | 0 | python,jython,template-engine | 160,496 | 3 | false | 1 | 0 | Jinja is pretty cool and seems to work on Jython. | 1 | 1 | 0 | I'm searching for a template lib or template engine for generating HTML (XML) that runs under Jython (Jython 2.5 Alpha is OK). | Template Lib (Engine) in Python running with Jython | 0.132549 | 0 | 0 | 879 |
157,938 | 2008-10-01T14:37:00.000 | 3 | 0 | 0 | 0 | python,security | 158,180 | 21 | false | 0 | 0 | Your operating system probably provides facilities for encrypting data securely. For instance, on Windows there is DPAPI (data protection API). Why not ask the user for their credentials the first time you run then squirrel them away encrypted for subsequent runs? | 10 | 162 | 0 | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Is there an easy way to obscure this password in the file (just that nobody can read the password when I'm editing the file) ? | Hiding a password in a python script (insecure obfuscation only) | 0.028564 | 0 | 0 | 275,164 |
157,938 | 2008-10-01T14:37:00.000 | 68 | 0 | 0 | 0 | python,security | 22,821,470 | 21 | false | 0 | 0 | Here is a simple method:
Create a python module - let's call it peekaboo.py.
In peekaboo.py, include both the password and any code needing that password
Create a compiled version - peekaboo.pyc - by importing this module (via python commandline, etc...).
Now, delete peekaboo.py.
You can now happily import peekaboo relying only on peekaboo.pyc. Since peekaboo.pyc is byte compiled it is not readable to the casual user.
This should be a bit more secure than base64 decoding - although it is vulnerable to a .pyc decompiler. | 10 | 162 | 0 | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Is there an easy way to obscure this password in the file (just that nobody can read the password when I'm editing the file) ? | Hiding a password in a python script (insecure obfuscation only) | 1 | 0 | 0 | 275,164 |
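The compile-then-delete trick from the peekaboo answer can be scripted with the standard py_compile module; the module name and password below are just the example values from the answer:

```python
import os
import py_compile
import tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "peekaboo.py")

# Write the module holding the secret...
with open(src, "w") as f:
    f.write("PASSWORD = 'hunter2'\n")

# ...byte-compile it to a standalone .pyc...
pyc = py_compile.compile(src, cfile=os.path.join(workdir, "peekaboo.pyc"))

# ...and delete the readable source, leaving only the bytecode.
os.remove(src)
print(os.path.exists(pyc))  # True
```

As the answer itself notes, this only deters casual readers; a decompiler can recover the source easily.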
157,938 | 2008-10-01T14:37:00.000 | 0 | 0 | 0 | 0 | python,security | 157,974 | 21 | false | 0 | 0 | There are several ROT13 utilities written in Python on the 'Net -- just google for them. ROT13 encode the string offline, copy it into the source, and decode at the point of transmission. But this is really weak protection... | 10 | 162 | 0 | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Is there an easy way to obscure this password in the file (just that nobody can read the password when I'm editing the file) ? | Hiding a password in a python script (insecure obfuscation only) | 0 | 0 | 0 | 275,164 |
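There is actually no need to hunt for a ROT13 utility: the standard codecs module ships a rot_13 codec (text-to-text in Python 3):

```python
import codecs

obscured = codecs.encode("s3cretPass", "rot_13")
print(obscured)                           # f3pergCnff

# Applying ROT13 twice restores the original.
print(codecs.decode(obscured, "rot_13"))  # s3cretPass
```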
157,938 | 2008-10-01T14:37:00.000 | 56 | 0 | 0 | 0 | python,security | 158,248 | 21 | false | 0 | 0 | Douglas F Shearer's is the generally approved solution in Unix when you need to specify a password for a remote login.
You add a --password-from-file option to specify the path and read plaintext from a file.
The file can then be in the user's own area protected by the operating system.
It also allows different users to automatically pick up their own file.
For passwords that the user of the script isn't allowed to know, you can run the script with elevated permissions and have the password file owned by that root/admin user.
Is there an easy way to obscure this password in the file (just that nobody can read the password when I'm editing the file) ? | Hiding a password in a python script (insecure obfuscation only) | 1 | 0 | 0 | 275,164 |
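A minimal sketch of the password-file approach described in the answer above; the file name and contents are illustrative, and the chmod keeps the file readable by its owner only on Unix:

```python
import os
import stat
import tempfile

def read_password(path):
    # The password lives in plaintext, protected only by OS permissions.
    with open(path) as f:
        return f.read().strip()

# Demo: create a password file readable/writable by the owner alone (0600).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("s3cret\n")
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

print(read_password(path))  # s3cret
```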
157,938 | 2008-10-01T14:37:00.000 | 19 | 0 | 0 | 0 | python,security | 158,387 | 21 | false | 0 | 0 | The best solution, assuming the username and password can't be given at runtime by the user, is probably a separate source file containing only variable initialization for the username and password that is imported into your main code. This file would only need editing when the credentials change. Otherwise, if you're only worried about shoulder surfers with average memories, base 64 encoding is probably the easiest solution. ROT13 is just too easy to decode manually, isn't case sensitive and retains too much meaning in its encrypted state. Encode your password and user id outside the python script. Have the script decode at runtime for use. | 10 | 162 | 0 | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Giving scripts credentials for automated tasks is always a risky proposal. Your script should have its own credentials and the account it uses should have no access other than exactly what is necessary. At least the password should be long and rather random. | 10 | 162 | 0 | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Is there an easy way to obscure this password in the file (just that nobody can read the password when I'm editing the file) ? | Hiding a password in a python script (insecure obfuscation only) | 1 | 0 | 0 | 275,164 |
157,938 | 2008-10-01T14:37:00.000 | 0 | 0 | 0 | 0 | python,security | 53,049,667 | 21 | false | 0 | 0 | This doesn't precisely answer your question, but it's related. I was going to add as a comment but wasn't allowed.
I've been dealing with this same issue, and we have decided to expose the script to the users using Jenkins. This allows us to store the db credentials in a separate file that is encrypted and secured on a server and not accessible to non-admins.
It also allows us a bit of a shortcut to creating a UI, and throttling execution. | 10 | 162 | 0 | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Is there an easy way to obscure this password in the file (just that nobody can read the password when I'm editing the file) ? | Hiding a password in a python script (insecure obfuscation only) | 0 | 0 | 0 | 275,164 |
157,938 | 2008-10-01T14:37:00.000 | 1 | 0 | 0 | 0 | python,security | 58,501,148 | 21 | false | 0 | 0 | Why not have a simple xor?
Advantages:
looks like binary data
no one can read it without knowing the key (even if it's a single char)
I get to the point where I recognize simple b64 strings for common words and rot13 as well. Xor would make it much harder. | 10 | 162 | 0 | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Is there an easy way to obscure this password in the file (just that nobody can read the password when I'm editing the file) ? | Hiding a password in a python script (insecure obfuscation only) | 0.009524 | 0 | 0 | 275,164 |
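A repeating-key XOR of the kind suggested in the answer above takes only a few lines; note this is obfuscation only, since anyone with the script also has the key:

```python
from itertools import cycle

def xor_obscure(data: bytes, key: bytes) -> bytes:
    # XOR with a repeating key; applying it twice restores the input.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

secret = xor_obscure(b"myPassword", b"k3y")
print(secret)                       # looks like binary data
print(xor_obscure(secret, b"k3y"))  # b'myPassword'
```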
157,938 | 2008-10-01T14:37:00.000 | 2 | 0 | 0 | 0 | python,security | 160,053 | 21 | false | 0 | 0 | Place the configuration information in an encrypted config file. Query this info in your code using a key. Place this key in a separate file per environment, and don't store it with your code.
Is there an easy way to obscure this password in the file (just that nobody can read the password when I'm editing the file) ? | Hiding a password in a python script (insecure obfuscation only) | 0.019045 | 0 | 0 | 275,164 |
157,938 | 2008-10-01T14:37:00.000 | 19 | 0 | 0 | 0 | python,security | 158,221 | 21 | false | 0 | 0 | How about importing the username and password from a file external to the script? That way even if someone got hold of the script, they wouldn't automatically get the password. | 10 | 162 | 0 | I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection.
Is there an easy way to obscure this password in the file (just that nobody can read the password when I'm editing the file) ? | Hiding a password in a python script (insecure obfuscation only) | 1 | 0 | 0 | 275,164 |
157,938 | 2008-10-01T14:37:00.000 | 4 | 0 | 0 | 0 | python,security | 158,450 | 21 | false | 0 | 0 | This is a pretty common problem. Typically the best you can do is to either
A) create some kind of ceasar cipher function to encode/decode (just not rot13)
or
B) the preferred method is to use an encryption key, within reach of your program, encode/decode the password. In which you can use file protection to protect access the key.
Along those lines, if your app runs as a service/daemon (like a webserver) you can put your key into a password-protected keystore with the password input as part of the service startup. It'll take an admin to restart your app, but you will have really good protection for your configuration passwords.
Is there an easy way to obscure this password in the file (just that nobody can read the password when I'm editing the file) ? | Hiding a password in a python script (insecure obfuscation only) | 0.038077 | 0 | 0 | 275,164 |
158,546 | 2008-10-01T16:37:00.000 | 1 | 0 | 1 | 0 | python,boggle | 158,622 | 4 | false | 0 | 0 | Even though it is essentially a singleton at this point, the usual arguments against globals apply. For a pythonic singleton-substitute, look up the "borg" object.
That's really the only difference. Once the dictionary object is created, you are only binding new references as you pass it along, unless you explicitly perform a deep copy. It makes sense that it is centrally constructed once and only once so long as each solver instance does not require a private copy for modification. | 2 | 5 | 0 | I'm creating a networked server for a boggle-clone I wrote in python, which accepts users, solves the boards, and scores the player input. The dictionary file I'm using is 1.8MB (the ENABLE2K dictionary), and I need it to be available to several game solver classes. Right now, I have it so that each class iterates through the file line-by-line and generates a hash table (associative array), but the more solver classes I instantiate, the more memory it takes up.
What I would like to do is import the dictionary file once and pass it to each solver instance as they need it. But what is the best way to do this? Should I import the dictionary in the global space, then access it in the solver class as globals()['dictionary']? Or should I import the dictionary then pass it as an argument to the class constructor? Is one of these better than the other? Is there a third option? | Best way to store and use a large text-file in python | 0.049958 | 0 | 0 | 2,322 |
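The "borg" object mentioned in the answer above shares one __dict__ across all instances, so the word list is loaded once no matter how many solvers are created. A sketch with made-up names:

```python
class SolverDictionary:
    # Borg pattern: every instance shares the same attribute dict.
    _shared_state = {}

    def __init__(self):
        self.__dict__ = self._shared_state
        if not hasattr(self, "words"):
            # The real app would parse the 1.8MB ENABLE2K file here, once.
            self.words = {"cat", "dog", "boggle"}

a = SolverDictionary()
b = SolverDictionary()
print(a.words is b.words)  # True: a single shared word set
```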
158,546 | 2008-10-01T16:37:00.000 | 0 | 0 | 1 | 0 | python,boggle | 159,341 | 4 | false | 0 | 0 | Depending on what your dict contains, you may be interested in the 'shelve' or 'anydbm' modules. They give you dict-like interfaces (just strings as keys and items for 'anydbm', and strings as keys and any python object as item for 'shelve') but the data is actually in a DBM file (gdbm, ndbm, dbhash, bsddb, depending on what's available on the platform.) You probably still want to share the actual database between classes as you are asking for, but it would avoid the parsing-the-textfile step as well as the keeping-it-all-in-memory bit. | 2 | 5 | 0 | I'm creating a networked server for a boggle-clone I wrote in python, which accepts users, solves the boards, and scores the player input. The dictionary file I'm using is 1.8MB (the ENABLE2K dictionary), and I need it to be available to several game solver classes. Right now, I have it so that each class iterates through the file line-by-line and generates a hash table(associative array), but the more solver classes I instantiate, the more memory it takes up.
What I would like to do is import the dictionary file once and pass it to each solver instance as they need it. But what is the best way to do this? Should I import the dictionary in the global space, then access it in the solver class as globals()['dictionary']? Or should I import the dictionary then pass it as an argument to the class constructor? Is one of these better than the other? Is there a third option? | Best way to store and use a large text-file in python | 0 | 0 | 0 | 2,322 |
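A rough sketch of the shelve approach mentioned in the answer above: build the on-disk table once, then later runs (or other processes) reopen it instead of re-parsing the 1.8 MB text file into memory. The path here is a throwaway temp location standing in for a real one.

```python
import os
import shelve
import tempfile

# Throwaway path standing in for a real on-disk location.
path = os.path.join(tempfile.mkdtemp(), "words")

# Build the shelf once; keys must be strings, values any picklable object.
with shelve.open(path) as db:
    for word in ("cat", "dog", "boggle"):
        db[word] = True

# Reopen read-only later instead of re-parsing the text file.
with shelve.open(path, flag="r") as db:
    print("boggle" in db)  # True
    print("zzz" in db)     # False
```

The trade-off is lookup speed (disk-backed DBM vs. an in-memory dict) against memory use and startup time.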
160,834 | 2008-10-02T03:55:00.000 | 3 | 0 | 0 | 1 | python,deployment,build-process | 4,060,962 | 6 | false | 1 | 0 | I always create a develop.py file at the top level of the project, and have also a packages directory with all of the .tar.gz files from PyPI that I want to install, and also included an unpacked copy of virtualenv that is ready to run right from that file. All of this goes into version control. Every developer can simply check out the trunk, run develop.py, and a few moments later will have a virtual environment ready to use that includes all of our dependencies at exactly the versions the other developers are using. And it works even if PyPI is down, which is very helpful at this point in that service's history. | 4 | 7 | 0 | I am a member of a team that is about to launch a beta of a python (Django specifically) based web site and accompanying suite of backend tools. The team itself has doubled in size from 2 to 4 over the past few weeks and we expect continued growth for the next couple of months at least. One issue that has started to plague us is getting everyone up to speed in terms of getting their development environment configured and having all the right eggs installed, etc.
I'm looking for ways to simplify this process and make it less error prone. Both zc.buildout and virtualenv look like they would be good tools for addressing this problem but both seem to concentrate primarily on the python-specific issues. We have a couple of small subprojects in other languages (Java and Ruby specifically) as well as numerous python extensions that have to be compiled natively (lxml, MySQL drivers, etc). In fact, one of the biggest thorns in our side has been getting some of these extensions compiled against appropriate versions of the shared libraries so as to avoid segfaults, malloc errors and all sorts of similar issues. It doesn't help that out of 4 people we have 4 different development environments -- 1 leopard on ppc, 1 leopard on intel, 1 ubuntu and 1 windows.
Ultimately what would be ideal would be something that works roughly like this, from the dos/unix prompt:
$ git clone [repository url]
...
$ python setup-env.py
...
that then does what zc.buildout/virtualenv does (copy/symlink the python interpreter, provide a clean space to install eggs), then installs all required eggs (including any native shared library dependencies), installs the ruby project, the java project, etc.
Obviously this would be useful for both getting development environments up as well as deploying on staging/production servers.
Ideally I would like for the tool that accomplishes this to be written in/extensible via python, since that is (and always will be) the lingua franca of our team, but I am open to solutions in other languages.
So, my question then is: does anyone have any suggestions for better alternatives or any experiences they can share using one of these solutions to handle larger/broader install bases? | Are there any other good alternatives to zc.buildout and/or virtualenv for installing non-python dependencies? | 0.099668 | 0 | 0 | 2,576 |
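A rough sketch of what the top of such a setup-env.py could look like, using the standard-library venv module (a later addition to the standard library that covers the clean-interpreter-space part; the non-Python steps are left as placeholders, and all names here are illustrative):

```python
import pathlib
import tempfile
import venv

# Clean, isolated space for the Python interpreter and packages
# (a temp dir stands in for the project checkout here).
env_dir = pathlib.Path(tempfile.mkdtemp()) / "env"
venv.EnvBuilder(with_pip=False).create(env_dir)

print((env_dir / "pyvenv.cfg").exists())  # True once the env is laid down

# The remaining steps would be driven from here, shelling out as needed.
for step in ("install python eggs", "build native extensions",
             "install ruby project", "install java project"):
    print("TODO:", step)
```

Driving the Ruby/Java builds from one Python entry point keeps the whole bootstrap in the team's lingua franca, even if each step just shells out to the native toolchain.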
160,834 | 2008-10-02T03:55:00.000 | 4 | 0 | 0 | 1 | python,deployment,build-process | 185,505 | 6 | false | 1 | 0 | Setuptools may be capable of more of what you're looking for than you realize -- if you need a custom version of lxml to work correctly on MacOS X, for instance, you can put a URL to an appropriate egg inside your setup.py and have setuptools download and install that inside your developers' environments as necessary; it also can be told to download and install a specific version of a dependency from revision control.
That said, I'd lean towards using a scriptably generated virtual environment. It's pretty straightforward to build a kickstart file which installs whichever packages you depend on and then boot virtual machines (or production hardware!) against it, with puppet or similar software doing other administration (adding users, setting up services [where's your database come from?], etc). This comes in particularly handy when your production environment includes multiple machines -- just script the generation of multiple VMs within their handy little sandboxed subnet (I use libvirt+kvm for this; while kvm isn't available on all the platforms you have developers working on, qemu certainly is, or you can do as I do and have a small number of beefy VM hosts shared by multiple developers).
This gets you out of the headaches of supporting N platforms -- you only have a single virtual platform to support -- and means that your deployment process, as defined by the kickstart file and puppet code used for setup, is source-controlled and run through your QA and review processes just like everything else. | 4 | 7 | 0 | I am a member of a team that is about to launch a beta of a python (Django specifically) based web site and accompanying suite of backend tools. The team itself has doubled in size from 2 to 4 over the past few weeks and we expect continued growth for the next couple of months at least. One issue that has started to plague us is getting everyone up to speed in terms of getting their development environment configured and having all the right eggs installed, etc.
I'm looking for ways to simplify this process and make it less error prone. Both zc.buildout and virtualenv look like they would be good tools for addressing this problem but both seem to concentrate primarily on the python-specific issues. We have a couple of small subprojects in other languages (Java and Ruby specifically) as well as numerous python extensions that have to be compiled natively (lxml, MySQL drivers, etc). In fact, one of the biggest thorns in our side has been getting some of these extensions compiled against appropriate versions of the shared libraries so as to avoid segfaults, malloc errors and all sorts of similar issues. It doesn't help that out of 4 people we have 4 different development environments -- 1 leopard on ppc, 1 leopard on intel, 1 ubuntu and 1 windows.
Ultimately what would be ideal would be something that works roughly like this, from the dos/unix prompt:
$ git clone [repository url]
...
$ python setup-env.py
...
that then does what zc.buildout/virtualenv does (copy/symlink the python interpreter, provide a clean space to install eggs), then installs all required eggs (including any native shared library dependencies), installs the ruby project, the java project, etc.
Obviously this would be useful for both getting development environments up as well as deploying on staging/production servers.
Ideally I would like for the tool that accomplishes this to be written in/extensible via python, since that is (and always will be) the lingua franca of our team, but I am open to solutions in other languages.
So, my question then is: does anyone have any suggestions for better alternatives or any experiences they can share using one of these solutions to handle larger/broader install bases? | Are there any other good alternatives to zc.buildout and/or virtualenv for installing non-python dependencies? | 0.132549 | 0 | 0 | 2,576 |
160,834 | 2008-10-02T03:55:00.000 | 0 | 0 | 0 | 1 | python,deployment,build-process | 177,109 | 6 | false | 1 | 0 | You might consider creating virtual machine appliances with whatever production OS you are running, and all of the software dependencies pre-built. Code can be edited either remotely, or with a shared folder. It worked pretty well for me in a past life that had a fairly complicated development environment. | 4 | 7 | 0 | I am a member of a team that is about to launch a beta of a python (Django specifically) based web site and accompanying suite of backend tools. The team itself has doubled in size from 2 to 4 over the past few weeks and we expect continued growth for the next couple of months at least. One issue that has started to plague us is getting everyone up to speed in terms of getting their development environment configured and having all the right eggs installed, etc.
I'm looking for ways to simplify this process and make it less error prone. Both zc.buildout and virtualenv look like they would be good tools for addressing this problem but both seem to concentrate primarily on the python-specific issues. We have a couple of small subprojects in other languages (Java and Ruby specifically) as well as numerous python extensions that have to be compiled natively (lxml, MySQL drivers, etc). In fact, one of the biggest thorns in our side has been getting some of these extensions compiled against appropriate versions of the shared libraries so as to avoid segfaults, malloc errors and all sorts of similar issues. It doesn't help that out of 4 people we have 4 different development environments -- 1 leopard on ppc, 1 leopard on intel, 1 ubuntu and 1 windows.
Ultimately what would be ideal would be something that works roughly like this, from the dos/unix prompt:
$ git clone [repository url]
...
$ python setup-env.py
...
that then does what zc.buildout/virtualenv does (copy/symlink the python interpreter, provide a clean space to install eggs), then installs all required eggs (including any native shared library dependencies), installs the ruby project, the java project, etc.
Obviously this would be useful for both getting development environments up as well as deploying on staging/production servers.
Ideally I would like for the tool that accomplishes this to be written in/extensible via python, since that is (and always will be) the lingua franca of our team, but I am open to solutions in other languages.
So, my question then is: does anyone have any suggestions for better alternatives or any experiences they can share using one of these solutions to handle larger/broader install bases? | Are there any other good alternatives to zc.buildout and/or virtualenv for installing non-python dependencies? | 0 | 0 | 0 | 2,576 |
160,834 | 2008-10-02T03:55:00.000 | 0 | 0 | 0 | 1 | python,deployment,build-process | 160,872 | 6 | false | 1 | 0 | Basically, you're looking for a cross-platform software/package installer (along the lines of apt-get/yum/etc.). I'm not sure anything like that exists.
An alternative might be specifying the list of packages that need to be installed via the OS-specific package management system such as Fink or DarwinPorts for Mac OS X and having a script that sets up the build environment for the in-house code? | 4 | 7 | 0 | I am a member of a team that is about to launch a beta of a python (Django specifically) based web site and accompanying suite of backend tools. The team itself has doubled in size from 2 to 4 over the past few weeks and we expect continued growth for the next couple of months at least. One issue that has started to plague us is getting everyone up to speed in terms of getting their development environment configured and having all the right eggs installed, etc.
I'm looking for ways to simplify this process and make it less error prone. Both zc.buildout and virtualenv look like they would be good tools for addressing this problem but both seem to concentrate primarily on the python-specific issues. We have a couple of small subprojects in other languages (Java and Ruby specifically) as well as numerous python extensions that have to be compiled natively (lxml, MySQL drivers, etc). In fact, one of the biggest thorns in our side has been getting some of these extensions compiled against appropriate versions of the shared libraries so as to avoid segfaults, malloc errors and all sorts of similar issues. It doesn't help that out of 4 people we have 4 different development environments -- 1 leopard on ppc, 1 leopard on intel, 1 ubuntu and 1 windows.
Ultimately what would be ideal would be something that works roughly like this, from the dos/unix prompt:
$ git clone [repository url]
...
$ python setup-env.py
...
that then does what zc.buildout/virtualenv does (copy/symlink the python interpreter, provide a clean space to install eggs), then installs all required eggs (including any native shared library dependencies), installs the ruby project, the java project, etc.
Obviously this would be useful for both getting development environments up as well as deploying on staging/production servers.
Ideally I would like for the tool that accomplishes this to be written in/extensible via python, since that is (and always will be) the lingua franca of our team, but I am open to solutions in other languages.
So, my question then is: does anyone have any suggestions for better alternatives or any experiences they can share using one of these solutions to handle larger/broader install bases? | Are there any other good alternatives to zc.buildout and/or virtualenv for installing non-python dependencies? | 0 | 0 | 0 | 2,576 |
161,367 | 2008-10-02T08:35:00.000 | 2 | 0 | 1 | 0 | python | 161,546 | 1 | true | 0 | 0 | Converting a traceback to the exception object wouldn't be too hard, given common exception classes (parse the last line for the exception class and the arguments given to it at instantiation). The traceback object (the third argument returned by sys.exc_info()) is an entirely different matter, though. The traceback object actually contains the chain of frame objects that constituted the stack at the time of the exception, including local variables, global variables, et cetera. It is impossible to recreate that just from the displayed traceback.
The best you could do would be to parse each 'File "X", line N, in Y:' line and create fake frame objects that are almost entirely empty. There would be very little value in it, as basically the only thing you would be able to do with it would be to print it. What are you trying to accomplish? | 1 | 0 | 0 | Just a curiosity: is there an already-coded way to convert a printed traceback back to the exception that generated it? :) Or to a sys.exc_info-like structure? | Library for converting a traceback to its exception? | 1.2 | 0 | 0 | 181 |
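The last-line parsing the answer describes can be sketched as follows; note it only recovers an exception *instance*, never the frames, and this naive version only covers builtin exception types:

```python
import builtins
import traceback

# Produce a printed traceback the way it would appear on stderr.
try:
    raise ValueError("bad value")
except ValueError:
    printed = traceback.format_exc()

# Parse the final line ('ValueError: bad value') back into an instance.
last = printed.strip().splitlines()[-1]
name, _, message = last.partition(": ")
exc_type = getattr(builtins, name, Exception)  # only covers builtin types
exc = exc_type(message)
print(type(exc).__name__, "-", exc)  # ValueError - bad value
```

Everything that made the original exception useful for debugging (the frame chain, locals, globals) is already gone from the printed form, which is the answer's point.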
164,901 | 2008-10-02T22:27:00.000 | 62 | 0 | 0 | 0 | python,django,piracy-prevention | 164,987 | 7 | true | 1 | 0 | Don't try and obfuscate or encrypt the code - it will never work.
I would suggest selling the Django application "as a service" - either host it for them, or sell them the code and support. Write up a contract that forbids them from redistributing it.
That said, if you were determined to obfuscate the code in some way - you can distribute python applications entirely as .pyc (Python compiled byte-code).. It's how Py2App works.
It will still be re-distributable, but it will be very difficult to edit the files - so you could add some basic licensing stuff, and not have it foiled by a few #s.
As I said, I don't think you'll succeed in anti-piracy via encryption or obfuscation etc. Depending on your clients, a simple contract and maybe some really basic checks will go much further than some complicated decryption system (and make the experience of using your application better, rather than worse) | 6 | 42 | 0 | Currently I am hosting a Django app I developed myself for my clients, but I am now starting to look at selling it to people for them to host themselves.
My question is this: How can I package up and sell a Django app, while protecting its code from piracy or theft? Distributing a bunch of .py files doesn't sound like a good idea, as the people I sell it to could just make copies of them and pass them on.
I think for the purpose of this problem it would be safe to assume that everyone who buys this would be running the same (LAMP) setup. | How would I package and sell a Django app? | 1.2 | 0 | 0 | 17,445 |
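The byte-code-only distribution mentioned in the answer can be produced with the standard-library compileall module; a small sketch (the app file and temp directory here are illustrative):

```python
import compileall
import pathlib
import tempfile

# Stand-in for the app's source tree.
src = pathlib.Path(tempfile.mkdtemp())
(src / "app.py").write_text("VERSION = '1.0'\n")

# legacy=True writes app.pyc beside app.py (rather than under __pycache__/),
# the layout byte-code-only distributions traditionally shipped.
compileall.compile_dir(str(src), legacy=True, quiet=1)

print((src / "app.pyc").exists())  # True: this file could ship without app.py
```

Keep in mind byte code is tied to the interpreter version, and decompilers exist - so this raises the editing bar rather than preventing copying.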
164,901 | 2008-10-02T22:27:00.000 | 7 | 0 | 0 | 0 | python,django,piracy-prevention | 167,240 | 7 | false | 1 | 0 | You'll never be able to keep the source code from people who really want it. It's best to come to grips with this fact now, and save yourself the headache later. | 6 | 42 | 0 | Currently I am hosting a Django app I developed myself for my clients, but I am now starting to look at selling it to people for them to host themselves.
My question is this: How can I package up and sell a Django app, while protecting its code from piracy or theft? Distributing a bunch of .py files doesn't sound like a good idea, as the people I sell it to could just make copies of them and pass them on.
I think for the purpose of this problem it would be safe to assume that everyone who buys this would be running the same (LAMP) setup. | How would I package and sell a Django app? | 1 | 0 | 0 | 17,445 |
164,901 | 2008-10-02T22:27:00.000 | 13 | 0 | 0 | 0 | python,django,piracy-prevention | 164,920 | 7 | false | 1 | 0 | The way I'd go about it is this:
Encrypt all of the code
Write an installer that contacts the server with the machine's hostname and license file and gets the decryption key, then decrypts the code and compiles it to python bytecode
Add (in the installer) a module that checks the machine's hostname and license file on import and dies if it doesn't match
This way the user only has to contact the server when the hostname changes and on first install, but you get a small layer of security. You could change the hostname to something more complex, but there's really no need -- anyone that wants to pirate this will do so, but a simple mechanism like that will keep honest people honest. | 6 | 42 | 0 | Currently I am hosting a Django app I developed myself for my clients, but I am now starting to look at selling it to people for them to host themselves.
My question is this: How can I package up and sell a Django app, while protecting its code from piracy or theft? Distributing a bunch of .py files doesn't sound like a good idea, as the people I sell it to could just make copies of them and pass them on.
I think for the purpose of this problem it would be safe to assume that everyone who buys this would be running the same (LAMP) setup. | How would I package and sell a Django app? | 1 | 0 | 0 | 17,445 |
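The "module that checks the machine's hostname and license file on import" from the answer above might look something like this; the license file name and its one-line format are assumptions for the sketch, and the demo writes a matching license itself (in the described scheme the installer would create it after contacting the server):

```python
import os
import socket
import sys
import tempfile

# Demo setup: write a license file naming this machine.
lic_path = os.path.join(tempfile.mkdtemp(), "license.txt")
with open(lic_path, "w") as f:
    f.write(socket.gethostname() + "\n")

def check_license(path):
    """Die at startup unless the license file names this host."""
    try:
        with open(path) as f:
            licensed_host = f.read().strip()
    except OSError:
        sys.exit("license file missing")
    if socket.gethostname() != licensed_host:
        sys.exit("this machine is not licensed")

check_license(lic_path)  # would run from a module imported at startup
print("license ok")
```

As the answer says, this only keeps honest people honest - anyone who can read the source can delete the check.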
164,901 | 2008-10-02T22:27:00.000 | 4 | 0 | 0 | 0 | python,django,piracy-prevention | 5,915,669 | 7 | false | 1 | 0 | May I speak frankly, as a friend? Unless your app is Really Amazing, you may not get many buyers. Why waste the time on lawyers, obfuscation, licensing and whatnot? You stand to gain a better reputation by open-sourcing your code...and maintaining it.
Django comes from the open-source end of the spectrum in terms of licensing (and obfuscation). Granted, the MIT license is more common than the GPL; still, they are both very far removed from anything like Microsoft's EULA. A lot of Djangophiles will balk at closed-source code, simply because that's what Microsoft does.
Also, people will trust your code more, since they will be able to read it and verify that it contains no malicious code. Remember, "obfuscating" means "hiding;" and who will really know exactly what you've hidden?
Granted, there's no easy way to monetize open-sourced code. But you could offer your services or even post a campaign on Pledgie.com, for those who are thankful for all your great work. | 6 | 42 | 0 | Currently I am hosting a Django app I developed myself for my clients, but I am now starting to look at selling it to people for them to host themselves.
My question is this: How can I package up and sell a Django app, while protecting its code from piracy or theft? Distributing a bunch of .py files doesn't sound like a good idea, as the people I sell it to could just make copies of them and pass them on.
I think for the purpose of this problem it would be safe to assume that everyone who buys this would be running the same (LAMP) setup. | How would I package and sell a Django app? | 0.113791 | 0 | 0 | 17,445 |
164,901 | 2008-10-02T22:27:00.000 | 3 | 0 | 0 | 0 | python,django,piracy-prevention | 395,813 | 7 | false | 1 | 0 | One thing you might want to consider is what FogBugz does. Simply include a small binary (perhaps a C program) that is compiled for the target platforms and contains the code to validate the license.
This way you can keep the honest people honest with minimal headache on your part. | 6 | 42 | 0 | Currently I am hosting a Django app I developed myself for my clients, but I am now starting to look at selling it to people for them to host themselves.
My question is this: How can I package up and sell a Django app, while protecting its code from piracy or theft? Distributing a bunch of .py files doesn't sound like a good idea, as the people I sell it to could just make copies of them and pass them on.
I think for the purpose of this problem it would be safe to assume that everyone who buys this would be running the same (LAMP) setup. | How would I package and sell a Django app? | 0.085505 | 0 | 0 | 17,445 |
164,901 | 2008-10-02T22:27:00.000 | 10 | 0 | 0 | 0 | python,django,piracy-prevention | 445,887 | 7 | false | 1 | 0 | "Encrypting" Python source code (or bytecode, or really bytecode for any language that uses it -- not just Python) is like those little JavaScript things some people put on web pages to try to disable the right-hand mouse button, declaring "now you can't steal my images!"
The workarounds are trivial, and will not stop a determined person.
If you're really serious about selling a piece of Python software, you need to act serious. Pay an attorney to draw up license/contract terms, have people agree to them at the time of purchase, and then just let them have the actual software. This means you'll have to haul people into court if they violate the license/contract terms, but you'd have to do that no matter what (e.g., if somebody breaks your "encryption" and starts distributing your software), and having the actual proper form of legal words already set down on paper, with their signature, will be far better for your business in the long term.
If you're really that paranoid about people "stealing" your software, though, just stick with a hosted model and don't give them access to the server. Plenty of successful businesses are based around that model. | 6 | 42 | 0 | Currently I am hosting a Django app I developed myself for my clients, but I am now starting to look at selling it to people for them to host themselves.
My question is this: How can I package up and sell a Django app, while protecting its code from piracy or theft? Distributing a bunch of .py files doesn't sound like a good idea, as the people I sell it to could just make copies of them and pass them on.
I think for the purpose of this problem it would be safe to assume that everyone who buys this would be running the same (LAMP) setup. | How would I package and sell a Django app? | 1 | 0 | 0 | 17,445 |
165,883 | 2008-10-03T06:18:00.000 | 0 | 0 | 1 | 0 | python,oop,object,attributes | 165,925 | 7 | false | 0 | 0 | There is no real point in writing getters/setters in Python: you can't protect attributes anyway, and if you need to execute extra code when getting or setting a property, look at the property() builtin (python -c 'help(property)') | 1 | 23 | 0 | Suppose I have a class with some attributes. How is it best (in the Pythonic-OOP sense) to access these attributes? Just like obj.attr? Or perhaps write get accessors?
What are the accepted naming styles for such things?
Edit:
Can you elaborate on the best practices for naming attributes with a single or double leading underscore? I see that in most modules a single underscore is used.
If this question has already been asked (and I have a hunch it has, though searching didn't bring results), please point to it - and I will close this one. | Python object attributes - methodology for access | 0 | 0 | 0 | 40,229 |
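The property() approach the answer points at gives you plain attribute access on the outside with extra code on get/set inside; a small example (the class is illustrative):

```python
class Temperature:
    def __init__(self, celsius=0.0):
        self._celsius = celsius  # single leading underscore: "internal"

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        # Validation runs transparently on every assignment.
        if value < -273.15:
            raise ValueError("below absolute zero")
        self._celsius = value

t = Temperature()
t.celsius = 21.5         # looks like plain attribute access
print(t.celsius)         # 21.5
```

This is why Python style starts with bare obj.attr: you can retrofit a property later without changing any calling code.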
166,334 | 2008-10-03T10:57:00.000 | 0 | 0 | 1 | 0 | python,deployment | 171,470 | 5 | false | 0 | 0 | I use Mercurial as my SCM system, and for deployment too. It's just a matter of cloning the repository from another one; then a pull/update or a fetch will get it up to date.
I use several instances of the repository - one on the development server, one (or more, depending upon circumstance) on my local machine, one on the production server, and one 'Master' repository that is available to the greater internet (although only by SSH).
The only thing it doesn't do is automatically update the database if it is changed, but with incoming hooks I could probably do this too. | 1 | 18 | 0 | I have a Python web application consisting of several Python packages. What is the best way of building and deploying this to the servers?
Currently I'm deploying the packages with Capistrano, installing the packages into a virtualenv with bash, and configuring the servers with puppet, but I would like to go for a more Python based solution.
I've been looking a bit into zc.buildout, but it's not clear for me what I can/should use it for. | How to build and deploy Python web applications | 0 | 0 | 0 | 4,466 |
166,364 | 2008-10-03T11:10:00.000 | 0 | 0 | 0 | 0 | python,django,apache,session,mod-python | 699,045 | 5 | false | 1 | 0 | If you are using some global variables to hold data of your custom authentication session, you need to change this to use either file, database or memcached. As stated above mod_python launches few processes and there's no shared memory between them.
I recommend using memcached for this; also use cookies to store the session ID (or pass it as a GET parameter) so that you can later easily extract the session data from the cache.
My problem is that after I log in, I will still randomly get the login screen now and again. It seems to me that each apache process has its own python process, which in turn has its own internals. So as long as I get served by the same process I logged in to, everything is fine and dandy. But if my request gets served by a different apache process, I am no longer authenticated.
I have checked the HTTP headers I send with FireBug, and they are the same each time, ie. same cookie.
Is this a known issue and are there workarounds/fixes?
Edit: I have a page that displays a lot of generated images. Some of these will not display. This is because they too are behind the authenticating middleware, so they will randomly put up a login image. However, if I refresh the page enough times, it will eventually work, meaning all processes recognize my session. | Django, mod_python, apache and wacky sessions | 0 | 0 | 0 | 1,356 |
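The cookie-plus-cache idea in the answer can be sketched like this; the helper names are illustrative, and a plain dict stands in for memcached (any store shared across the worker processes would do):

```python
import secrets

store = {}  # stand-in for memcached; must be shared across processes

def create_session(user):
    sid = secrets.token_hex(16)   # opaque value to place in the cookie
    store[sid] = {"user": user}
    return sid

def get_session(sid):
    return store.get(sid)         # None -> redirect the client to log in

sid = create_session("alice")
print(get_session(sid)["user"])   # alice
print(get_session("bogus"))       # None
```

Because the cookie carries only a random ID and the data lives in the shared store, it no longer matters which worker process serves a given request.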
166,364 | 2008-10-03T11:10:00.000 | 2 | 0 | 0 | 0 | python,django,apache,session,mod-python | 166,539 | 5 | false | 1 | 0 | You are correct about how Apache handles the processes, and sometimes you'll get served by a different process. You can see this when you make a change to your site; new processes will pick up the change, but old processes will give you the old site. To get consistency, you have to restart Apache.
Assuming a restart doesn't fix the problem, I would guess it's something in the "custom authentication backend" storing part of the authentication in memory (which won't work very well for a web server). I would try setting MaxRequestsPerChild to 1 in your Apache config and seeing if you still get the login screen. If you do, something is being stored in memory, maybe a model not being saved?
Hope that helps!
P.S. Just out of curiosity, why are you using a custom authentication backend and a middleware to ensure the user is logged in? It seems Django's contrib.auth and @login_required would be easier... | 4 | 2 | 0 | I am running a Django through mod_python on Apache on a linux box. I have a custom authentication backend, and middleware that requires authentication for all pages, except static content.
My problem is that after I log in, I will still randomly get the login screen now and again. It seems to me that each apache process has its own python process, which in turn has its own internals. So as long as I get served by the same process I logged in to, everything is fine and dandy. But if my request gets served by a different apache process, I am no longer authenticated.
I have checked the HTTP headers I send with FireBug, and they are the same each time, ie. same cookie.
Is this a known issue and are there workarounds/fixes?
Edit: I have a page that displays a lot of generated images. Some of these will not display. This is because they too are behind the authenticating middleware, so they will randomly put up a login image. However, if I refresh the page enough times, it will eventually work, meaning all processes recognize my session. | Django, mod_python, apache and wacky sessions | 0.07983 | 0 | 0 | 1,356 |
166,364 | 2008-10-03T11:10:00.000 | 0 | 0 | 0 | 0 | python,django,apache,session,mod-python | 14,830,843 | 5 | false | 1 | 0 | How do I ensure that the session is not cleared after an Apache restart (or stop and start)?
When I upgrade my source code and restart Apache, then refresh the web page, I have to log in again. The session is lost.
The session is stored in memcached. I have no idea how or why it's cleared. How can I preserve the session so that the user need not log in again after the Apache restart? | 4 | 2 | 0 | I am running a Django through mod_python on Apache on a linux box. I have a custom authentication backend, and middleware that requires authentication for all pages, except static content.
My problem is that after I log in, I will still randomly get the login screen now and again. It seems to me that each apache process has its own python process, which in turn has its own internals. So as long as I get served by the same process I logged in to, everything is fine and dandy. But if my request gets served by a different apache process, I am no longer authenticated.
I have checked the HTTP headers I send with FireBug, and they are the same each time, ie. same cookie.
Is this a known issue and are there workarounds/fixes?
Edit: I have a page that displays a lot of generated images. Some of these will not display. This is because they too are behind the authenticating middleware, so they will randomly put up a login image. However, if I refresh the page enough times, it will eventually work, meaning all processes recognize my session. | Django, mod_python, apache and wacky sessions | 0 | 0 | 0 | 1,356 |