Title
stringlengths
11
150
A_Id
int64
518
72.5M
Users Score
int64
-42
283
Q_Score
int64
0
1.39k
ViewCount
int64
17
1.71M
Database and SQL
int64
0
1
Tags
stringlengths
6
105
Answer
stringlengths
14
4.78k
GUI and Desktop Applications
int64
0
1
System Administration and DevOps
int64
0
1
Networking and APIs
int64
0
1
Other
int64
0
1
CreationDate
stringlengths
23
23
AnswerCount
int64
1
55
Score
float64
-1
1.2
is_accepted
bool
2 classes
Q_Id
int64
469
42.4M
Python Basics and Environment
int64
0
1
Data Science and Machine Learning
int64
0
1
Web Development
int64
1
1
Available Count
int64
1
15
Question
stringlengths
17
21k
Django Makemigrations not working in version 1.10 after adding new table
39,481,262
-1
1
1,011
0
python,django,python-2.7
You could try: delete everything in the django_migrations table, delete all files in the migrations folder, and then run python manage.py makemigrations followed by python manage.py migrate, as you said. If this doesn't work, try: delete everything in the django_migrations table, delete all files in the migrations folder, use your old models.py to run python manage.py makemigrations followed by python manage.py migrate, then add the new model and run python manage.py makemigrations followed by python manage.py migrate again.
0
0
0
0
2016-09-13T20:56:00.000
3
-0.066568
false
39,478,845
0
0
1
2
I added some table models in models.py when first running the app and then ran python manage.py makemigrations followed by python manage.py migrate. This worked well, but after adding two more tables it no longer works. It created migrations for the changes made, but when I run python manage.py migrate nothing happens; my new tables are not added to the database. Things I have done: 1) Deleted all files in the migrations folder and then ran python manage.py makemigrations followed by python manage.py migrate, but the new tables are still not getting added to the database, even though the new table models show in the migration that was created, i.e. 0001_initial.py. 2) Deleted the database and followed the steps in 1 above, but it still didn't solve my problem; only the first set of tables gets created. 3) Tried python manage.py makemigrations app_name, but it still didn't help.
Django Makemigrations not working in version 1.10 after adding new table
39,482,630
0
1
1,011
0
python,django,python-2.7
Can you post your models? Have you edited manage.py in any way? Try deleting the migrations and the database again after ensuring that your models are valid, then run manage.py makemigrations appname followed by manage.py migrate.
0
0
0
0
2016-09-13T20:56:00.000
3
0
false
39,478,845
0
0
1
2
I added some table models in models.py when first running the app and then ran python manage.py makemigrations followed by python manage.py migrate. This worked well, but after adding two more tables it no longer works. It created migrations for the changes made, but when I run python manage.py migrate nothing happens; my new tables are not added to the database. Things I have done: 1) Deleted all files in the migrations folder and then ran python manage.py makemigrations followed by python manage.py migrate, but the new tables are still not getting added to the database, even though the new table models show in the migration that was created, i.e. 0001_initial.py. 2) Deleted the database and followed the steps in 1 above, but it still didn't solve my problem; only the first set of tables gets created. 3) Tried python manage.py makemigrations app_name, but it still didn't help.
WiringPi and Flask Sudo Conflict
39,521,293
0
0
105
0
python,flask,raspberry-pi,virtualenv,wiringpi
Turns out I just had to make sure that root had the proper libraries installed too. The root user and the regular user have different directories for their Python binaries.
0
0
0
1
2016-09-14T00:54:00.000
1
0
false
39,480,992
0
0
1
1
I am running my application in a virtualenv using Python3.4. WiringPi requires sudo privilege to access the hardware pins. Flask, on the other hand, resides in my virtualEnv folder, so I can't access it using sudo flask. I've tried making it run on startup by placing some commands in /etc/rc.local so that it can have root access automatically. It only tells me that it can't find basic Python library modules (like re). My RPI2 is running Raspbian. For the time being I am running it using flask run --localhost=0.0.0.0, which I know I am not supposed to do, but I'll change that later.
Django Migration Process for Elasticbeanstalk / Multiple Databases
39,500,763
1
1
795
0
python,django,amazon-web-services,amazon-elastic-beanstalk,django-migrations
It seems that you might have deleted the table or the migrations at some point. When you run makemigrations, Django creates migration files, and when you run migrate, it applies them to whichever database is specified in the settings file. If you keep creating migrations and do not run them against a particular database, that is absolutely fine: whenever you switch databases and run migrations, Django will handle it, because every database stores the point up to which migrations have been applied in its django_migrations table and will only run the subsequent migrations. To solve your problem, since you are perhaps just testing right now, you can delete all databases and migration files and start afresh. Things will go fine until you delete a migration file or a database on any of the servers. If you have precious data, you should dig into the migration files and tables to analyse and manage things.
0
0
0
0
2016-09-14T22:18:00.000
1
0.197375
false
39,500,513
0
0
1
1
I am developing a small web application using Django and Elastic Beanstalk. I created an EB application with two environments (staging and production), created an RDS instance and assigned it to my EB environments. For development I use a local database, because deploying to AWS takes quite some time. However, I am having trouble with the migrations. Because I develop and test locally every couple of minutes, I tend to have different migrations locally and on the two environments. So once I deploy the current version of the app to a certain environment, "manage.py migrate" fails most of the time because tables already exist or do not exist even though they should (because another environment already created the tables). So I was wondering how to handle the migration process when using multiple environments for development, staging and production, with some common and some exclusive database instances that might not reflect the same structure all the time. Should I exclude the migration files from the code repository and the EB deployment, and run makemigrations & migrate after every deployment? Should I not run migrations automatically using the .ebextensions, and instead apply all the migrations manually through one of the instances? What's the recommended way of using the same Django application with different database instances in different environments?
Setting the SendAs via python gmail api returns "Custom display name disallowed"
39,777,352
0
1
964
0
python,gmail-api,google-api-python-client
This was a bug in the Gmail API. It is fixed now.
0
0
1
0
2016-09-15T18:11:00.000
1
0
false
39,517,707
0
0
1
1
I can't find any results when searching Google for this response. I'm using the current Google Python API client to make requests against the Gmail API. I can successfully insert a label, and I can successfully retrieve a user's SendAs settings, but I cannot update, patch, or create a SendAs without receiving this error. Here's a brief snippet of my code: sendAsResource = {"sendAsEmail": "[email protected]", "isDefault": True, "replyToAddress": "[email protected]", "displayName": "Test Sendas", "isPrimary": False, "treatAsAlias": False } self.service.users().settings().sendAs().create(userId = "me", body=sendAsResource).execute() The response I get is: <HttpError 400 when requesting https://www.googleapis.com/gmail/v1/users/me/settings/sendAs?alt=json returned "Custom display name disallowed"> I've tried userId="me" as well as the user I'm authenticated with; both result in this error. I am using a service account with domain-wide delegation. Since adding a label works fine, I'm confused why this doesn't. All pip modules are up to date as of this morning (google-api-python-client==1.5.3). Edit: After hours of testing I decided to try another user, and it worked fine. There is something unique about my initial test account.
Django - ImportError: No module named apps
63,384,226
0
5
24,726
0
python,django,python-2.7,django-models
I found the solution for me. When you write polls.apps.PollsConfig in your INSTALLED_APPS, you need to keep in mind that the first polls refers to the app you created, not to the site. The Django documentation can be a bit confusing here.
0
0
0
0
2016-09-16T03:10:00.000
6
0
false
39,523,214
0
0
1
3
I am trying out the Django tutorial on the djangoproject.com website, but when I reach the part where I do the first "makemigrations polls" I keep getting this error: ImportError: No module named apps Traceback (most recent call last): File "manage.py", line 22, in execute_from_command_line(sys.argv) File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line utility.execute() File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 312, in execute django.setup() File "/Library/Python/2.7/site-packages/django/__init__.py", line 18, in setup apps.populate(settings.INSTALLED_APPS) File "/Library/Python/2.7/site-packages/django/apps/registry.py", line 85, in populate app_config = AppConfig.create(entry) File "/Library/Python/2.7/site-packages/django/apps/config.py", line 112, in create mod = import_module(mod_path) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) How can I resolve this error?
Django - ImportError: No module named apps
39,537,105
5
5
24,726
0
python,django,python-2.7,django-models
There is an error in the tutorial. It instructs you to add polls.apps.PollsConfig to the INSTALLED_APPS section of the settings.py file. I changed it from polls.apps.PollsConfig to simply polls and that did the trick; I was able to successfully make migrations. I hope this helps other people who face similar problems.
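A minimal sketch of the change described in this answer, assuming the tutorial's default project layout:

```python
# settings.py -- minimal sketch, assuming the tutorial's default layout
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'polls',  # instead of 'polls.apps.PollsConfig'
]
```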
0
0
0
0
2016-09-16T03:10:00.000
6
0.16514
false
39,523,214
0
0
1
3
I am trying out the Django tutorial on the djangoproject.com website, but when I reach the part where I do the first "makemigrations polls" I keep getting this error: ImportError: No module named apps Traceback (most recent call last): File "manage.py", line 22, in execute_from_command_line(sys.argv) File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line utility.execute() File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 312, in execute django.setup() File "/Library/Python/2.7/site-packages/django/__init__.py", line 18, in setup apps.populate(settings.INSTALLED_APPS) File "/Library/Python/2.7/site-packages/django/apps/registry.py", line 85, in populate app_config = AppConfig.create(entry) File "/Library/Python/2.7/site-packages/django/apps/config.py", line 112, in create mod = import_module(mod_path) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) How can I resolve this error?
Django - ImportError: No module named apps
42,808,902
0
5
24,726
0
python,django,python-2.7,django-models
In Django 1.10.6 I had the same error ("no module named ..."). The solution that worked for me was changing "polls.apps.PollsConfig" to "mysite.polls" in settings.py. o.O
0
0
0
0
2016-09-16T03:10:00.000
6
0
false
39,523,214
0
0
1
3
I am trying out the Django tutorial on the djangoproject.com website, but when I reach the part where I do the first "makemigrations polls" I keep getting this error: ImportError: No module named apps Traceback (most recent call last): File "manage.py", line 22, in execute_from_command_line(sys.argv) File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line utility.execute() File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 312, in execute django.setup() File "/Library/Python/2.7/site-packages/django/__init__.py", line 18, in setup apps.populate(settings.INSTALLED_APPS) File "/Library/Python/2.7/site-packages/django/apps/registry.py", line 85, in populate app_config = AppConfig.create(entry) File "/Library/Python/2.7/site-packages/django/apps/config.py", line 112, in create mod = import_module(mod_path) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) How can I resolve this error?
How to create an sObject for a sType without using get_unique_sobject method?
39,710,247
1
0
21
0
python,tactic
Found it: the API itself provides a method for inserting an sObject into any sType in the system, using server.insert(stype, data={}), where data is a dictionary of key-value pairs.
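A usage sketch based on the call described above; it assumes `server` is the TACTIC server stub from the question, and the sType name and column values are hypothetical:

```python
# Hypothetical sType and columns -- adjust to your project's schema
data = {"code": "SHOT001", "description": "Opening shot"}
sobject = server.insert("vfx/shot", data=data)  # inserts even if an identical sObject exists
```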
0
0
0
0
2016-09-16T05:55:00.000
1
1.2
true
39,524,577
0
0
1
1
I wish to create a new sObject for a specific sType. Currently I am using server.get_unique_sobject(stype, data), but it assumes an sObject may already be present, i.e. it creates a new sObject only if no sObject with the same data already exists in the DB. I wish to create a new sObject every time, even if an sObject with the same name and data is already present.
Uploading Large files to AWS S3 Bucket with Django on Heroku without 30s request timeout
45,600,079
3
15
1,926
0
python,django,heroku,amazon-s3,large-files
The points in the other answer are valid. The short answer to the question of "Is there anyway that i can possibly upload large files through Django backend without using JavaScript" is "not without switching away from Heroku". Keep in mind that any data transmitted to your dynos goes through Heroku's routing mesh, which is what enforces the 30 second request limit to conserve its own finite resources. Long-running transactions of any kind use up bandwidth/compute/etc that could be used to serve other requests, so Heroku applies the limit to help keep things moving across the thousands of dynos. When uploading a file, you will first be constrained by client bandwidth to your server. Then, you will be constrained by the bandwidth between your dynos and S3, on top of any processing your dyno actually does. The larger the file, the more likely it will be that transmitting the data will exceed the 30 second timeout, particularly in step 1 for clients on unreliable networks. Creating a direct path from client to S3 is a reasonable compromise.
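For reference, a minimal sketch of the presigned-request step with boto3 (bucket and key names are hypothetical); the browser then POSTs the file straight to S3, bypassing the dyno:

```python
import boto3

s3 = boto3.client("s3")
# Hypothetical bucket/key; the result carries a URL plus form fields
# that the browser sends along with the file, directly to S3.
presigned = s3.generate_presigned_post(
    Bucket="my-upload-bucket",
    Key="uploads/video.mp4",
    ExpiresIn=3600,  # one hour to begin the upload
)
```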
0
0
0
0
2016-09-17T11:20:00.000
2
0.291313
false
39,546,228
0
0
1
1
I have a django app that allows users to upload videos. It's hosted on Heroku and the uploaded files are stored in an S3 bucket. I am using JavaScript to upload the files directly to S3 after obtaining a presigned request from the Django app; this is due to Heroku's 30s request timeout. Is there any way that I can upload large files through the Django backend without using JavaScript and without compromising the user experience?
Django models, adding new value, migrations
39,547,167
1
1
861
0
python,django,django-models,django-migrations
As long as your migration isn't applied to the database, you can manually update your migration file located in myapp/migrations/*.py: find the string '10.07.2016' and update it to a supported format. A less attractive solution would be to delete the old migration file (as long as it isn't applied to the database) and create a new migration file with python manage.py makemigrations. Because you've updated the model to use a default value, it won't ask for a one-off default this time. To check whether a migration is applied to the database, run python manage.py showmigrations.
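A sketch of what the corrected operation in the generated migration file might look like; the app, model and migration names here are hypothetical:

```python
# myapp/migrations/0002_creation_date.py -- hypothetical names
from django.db import migrations, models
from django.utils import timezone

class Migration(migrations.Migration):
    dependencies = [('myapp', '0001_initial')]
    operations = [
        migrations.AddField(
            model_name='mymodel',
            name='creation_date',
            # replaces the invalid one-off default '10.07.2016'
            field=models.DateTimeField(default=timezone.now),
        ),
    ]
```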
0
0
0
0
2016-09-17T12:18:00.000
1
1.2
true
39,546,734
0
0
1
1
I worked with Django 1.9 and added a new field (creation_date) to myapp/models.py. After that I ran "python manage.py makemigrations". I got: Please select a fix: 1) Provide a one-off default now (will be set on all existing rows) 2) Quit, and let me add a default in models.py. I chose the first option and entered a value in the wrong format, '10.07.2016'. After this mistake I couldn't run "python manage.py migrate". So I decided to change models.py and add a default value of datetime.now. But after that I still have problems with "python manage.py makemigrations". I see things like: django.core.exceptions.ValidationError: [u"'10.07.2016' value has an invalid format. It must be in YYYY-MM-DD HH:MM[:ss[.uuuuuu]][TZ] format."] How do I solve this problem?
Determine why WTForms form didn't validate
39,559,508
5
2
1,729
0
python,flask,wtforms,flask-wtforms
For the whole form, form.errors contains a map of fields to lists of errors. If it is not empty, then the form did not validate. For an individual field, field.errors contains a list of errors for that field. The list is the same as the one in form.errors. form.validate() performs validation and populates errors. When using Flask-WTF, form.validate_on_submit() performs an additional check that request.method is a "submit" method, which mostly means it is not a GET request.
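A minimal usage sketch of the API described above; the route and form class are hypothetical, and Flask-WTF is assumed as in the question:

```python
from flask import Flask, redirect, render_template

app = Flask(__name__)
app.config["SECRET_KEY"] = "dev"

@app.route("/register", methods=["GET", "POST"])
def register():
    form = RegistrationForm()          # hypothetical Flask-WTF form class
    if form.validate_on_submit():
        return redirect("/")
    # Either a GET request or validation failed: inspect the errors
    print(form.errors)                 # e.g. {'email': ['Invalid email address.']}
    print(form.email.errors)           # the same list, per field
    return render_template("register.html", form=form)
```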
0
0
0
0
2016-09-18T14:52:00.000
1
0.761594
false
39,558,984
0
0
1
1
I called form.validate_on_submit(), but it returned False. How can I find out why the form didn't validate?
Post data from html to another html
69,865,240
0
0
95
0
javascript,python,html
If you are not going to pass any sensitive data such as a password, you can use localStorage or the URL hash.
0
0
1
0
2016-09-19T11:05:00.000
2
0
false
39,571,659
0
0
1
1
I want to post data from one HTML page to another. I know how to post data HTML -> Python and Python -> HTML. I have a dictionary in the HTML (I get it from Python via return render_to_response('page.html', locals())). How can I use the dictionary in the second HTML file?
How to handle concurrent modifications in django?
39,584,832
0
0
76
0
python,django,django-models,django-rest-framework
django-locking is the way to go.
0
0
0
0
2016-09-19T16:09:00.000
1
0
false
39,577,548
0
0
1
1
I am trying to build a project (like e-commerce) using Django and integrate it with Android. (I am not building a website, I am targeting mobile only, so I am using django-rest-framework to create an API.) My question is how to handle the case where two or more users can book an item at the same time when there is only a single item (basically, how to handle concurrent modification and access of data)? Please help. I am stuck on this one.
Is there a way to share a link with only a spefic mail recipient?
39,603,344
2
0
29
0
javascript,python,email,security
The first time someone accesses the URL, you could send them a random cookie, and save that cookie with the document. On future accesses, check if the cookie matches the saved cookie. If they share the URL with someone, that person won't have the cookie. Caveats: If they share the URL with someone else, and the other person goes to the URL first, they will be the one who can access it, not the original recipient. If the recipient clears cookies, they'll lose access to the document. You'll need a recovery procedure. This could send a new URL to the original email address.
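A rough Flask sketch of the cookie scheme described above; the storage helpers, field names and renderer are all hypothetical:

```python
import secrets
from flask import Flask, abort, make_response, request

app = Flask(__name__)

@app.route("/shared/<token>")
def shared_resource(token):
    doc = load_document(token)               # hypothetical lookup by permanent link token
    cookie = request.cookies.get("share_id")
    if doc.saved_cookie is None:
        # First access: issue a random cookie and bind it to the document
        value = secrets.token_urlsafe(32)
        save_cookie(doc, value)              # hypothetical persistence helper
        resp = make_response(render_document(doc))  # hypothetical renderer
        resp.set_cookie("share_id", value)
        return resp
    if cookie != doc.saved_cookie:
        abort(403)                           # the URL was shared with someone else
    return render_document(doc)
```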
0
0
0
0
2016-09-20T19:45:00.000
1
1.2
true
39,602,586
0
0
1
1
Not sure if this question belongs on SO, but here it goes. I have the following scenario: a Flask app with typical users that can log in using username/password. Users can share some resources among them, but now we want to let them share those with anyone, i.e. non-users of the app. Because the resources' content is important, only the person that received the email should be able to access the resource; not everyone with the link, in other words. What I've thought of so far: Create a one-time link -> this could work, but I'd prefer the link to be permanent. Add some JavaScript to the HTML email message sent, and add a parameter to the request so I can make sure the email address that opened the link was the correct one. This assumes that I can do that with JavaScript... which is not clear to me. This would make the link permanent, though. Any thoughts? Thanks
PyCharm not respond to my change in JavaScript file
39,629,226
0
1
433
0
javascript,python,pycharm
Open the Chrome Developer Tools settings and disable the cache. Credit to @All is Vanity.
0
0
0
0
2016-09-21T01:52:00.000
1
0
false
39,606,308
0
0
1
1
I am developing a simple web application integrated with a MySQL database. I am using PyCharm to write Python, HTML, JavaScript and CSS. After I make a change to my JavaScript and run my application in Chrome, the Chrome console suggests that the change did not apply. I already invalidated PyCharm's caches and restarted PyCharm, but it still doesn't work. Does anyone have an idea about this? PS: if I rename the JavaScript file, it works. But what is the reason for this problem, and how can I solve it without renaming? Thanks in advance!
Stuck in a django migration IntegrityError loop: can I delete those migrations that aren't yet in the db?
39,607,668
0
2
801
0
python,django
OK, so I crossed my fingers, backed up my local 0021-0028 migration files, and then deleted them. It worked. I think the key is that those migrations were not yet recorded in the database, but I'm not 100% sure. +1 if anyone can answer further for clarification.
0
0
0
0
2016-09-21T04:03:00.000
2
0
false
39,607,359
0
0
1
1
So, I committed and pushed all my code, and then deployed my web application successfully. Then I added a new model to my 'home' app, which (for a reason I now understand, but that doesn't matter here) created an IntegrityError (django.db.utils.IntegrityError: insert or update on table "foo" violates foreign key constraint "bar"). I ran python manage.py makemigrations and python manage.py migrate, which causes the IntegrityError. However, even if I remove all of my new model code (so that git status comes up with nothing), the IntegrityError still happens. If I connect to my DB via a different Python instance and run select * from django_migrations;, the latest DB migration recorded there, 0020, is eight migrations away from my latest local home/migrations migration file, 0028. --> My question is: is it safe for me to delete my local 0021-0028 migration files? Will this fix my problem?
Should I handle ajax requests in vanilla Django or rest Django?
39,608,464
0
0
86
0
python,ajax,django,rest,django-rest-framework
I usually follow a DDD approach, so all my requests end up being just CRUD operations on an entity. I always prefer REST APIs, thus I would say that if you have a DDD approach already in place, go with django-rest-framework. Otherwise it really does not matter; it depends on your needs.
0
0
0
0
2016-09-21T05:44:00.000
1
1.2
true
39,608,377
0
0
1
1
I have a bunch of AJAX requests on my website (e.g. an upvote sends a request to the server). Should I implement this functionality server-side with just another view function, or is it recommended that I move all the necessary views into Django REST framework?
Google App Engine Memcache Python
39,623,880
0
1
71
0
python,google-app-engine,memcached
Memcache is shared across users. It is not a cookie, but exists in RAM on the server for all pertinent requests to access.
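A small sketch showing that two different requests (from different users) read the same entry:

```python
from google.appengine.api import memcache

# Request handled for user A:
memcache.set("motd", "Hello, world", time=3600)

# A later request for user B sees the same shared value:
value = memcache.get("motd")  # "Hello, world"
```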
0
1
0
0
2016-09-21T11:54:00.000
2
0
false
39,615,861
0
0
1
1
Using Google App Engine Memcache: can more than one user access the same key-value pair? Or, in other words, is there a Memcache created per user, or is it shared across multiple users?
django email username and password
39,628,967
0
1
268
0
python,django,email,smtp,webfaction
You set EMAIL_HOST and EMAIL_PORT so that your application can send emails to your users. Mail is sent using the SMTP host and port specified in the EMAIL_HOST and EMAIL_PORT settings. The EMAIL_HOST_USER and EMAIL_HOST_PASSWORD settings, if set, are used to authenticate against the SMTP server, and the EMAIL_USE_TLS and EMAIL_USE_SSL settings control whether a secure connection is used.
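A settings sketch with hypothetical values for a Webfaction-style mailbox; the credentials belong to the mailbox you send from, not to the site visitor:

```python
# settings.py -- hypothetical values
EMAIL_HOST = "smtp.webfaction.com"
EMAIL_PORT = 587
EMAIL_HOST_USER = "my_mailbox"          # the sending mailbox, not the visitor's address
EMAIL_HOST_PASSWORD = "mailbox-password"
EMAIL_USE_TLS = True
```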
0
0
0
0
2016-09-21T16:11:00.000
1
1.2
true
39,621,594
0
0
1
1
I'm implementing a contact form for one of my sites. One thing I'm not sure I understand completely is why you need EMAIL_HOST_USER and EMAIL_HOST_PASSWORD. The user would only need to provide his/her email address, so what is the EMAIL_HOST_USER referring to then and why would I need to specify an email and password? EDIT: I'm using webfaction as my mail server
Is there a way to set a tag on an EC2 instance while creating it?
39,649,819
0
0
66
0
python,amazon-web-services,amazon-ec2,boto
At the time of writing, there is no way to do this in a single operation.
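With classic boto (current at the time of this answer), the closest you can get is tagging immediately after run_instances returns; the AMI id, region and tag values below are hypothetical:

```python
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")
reservation = conn.run_instances("ami-12345678", instance_type="t2.micro")
instance = reservation.instances[0]
# A second call: tags are applied moments after creation, not atomically with it
conn.create_tags([instance.id], {"Name": "course-vm", "Owner": "billing"})
```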
0
0
1
0
2016-09-21T23:44:00.000
1
0
false
39,628,128
0
0
1
1
Is there a way to create an EC2 instance with tags (I mean, adding tags as a parameter when creating the instance)? I can't find this function in the boto APIs. According to the documentation, we can only add tags after creating. However, when creating in the browser console, we can configure the tags during creation. So can we do the same thing in boto? (In our course we are required to tag our resources at creation time, for billing-monitoring purposes, so adding tags after creation is not allowed.)
making a python interpreter using javascript
39,632,417
1
1
4,748
0
javascript,python
Well, writing an interpreter is not really a job for a beginner. You'd be better off sending the code to the server side with AJAX and then displaying the result in the page.
0
0
0
0
2016-09-22T06:08:00.000
3
0.066568
false
39,631,465
1
0
1
1
I want to make a Python interpreter using JavaScript, so that you can input Python code, and the JavaScript in the webpage interprets the code, runs it, and returns the result. Because I don't have much experience in this area, I would like some advice from more senior developers. Thanks very much.
How to temporarily disable foreign key constraint in django
39,640,286
1
1
1,839
0
python,django,django-models
ForeignKey is a many-to-one relationship. It requires a positional argument: the class to which the model is related. Its value must be a related model instance, or None if null is allowed; you cannot assign the integer 0 to a ForeignKey column.
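If the intent is to clear the relation rather than store 0, a sketch (model names are hypothetical):

```python
from django.db import models

class Child(models.Model):
    # Hypothetical model: allow the relation to be empty instead of storing 0
    parent = models.ForeignKey('Parent', null=True, blank=True)

# On update, clear the relation rather than assigning 0:
# child.parent = None
# child.save()
```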
0
0
0
0
2016-09-22T13:10:00.000
1
1.2
true
39,640,037
0
0
1
1
I have to update a record that has a foreign key constraint. I need to assign 0 to the column defined as a foreign key, but while updating, Django doesn't let me update the record.
24 bit deep wav file generator
39,747,053
0
0
58
0
python-2.7,audio,wav,wave
I found the solution. The trick is to build the wav file the same way you would for a 32-bit depth, but set the LOWER (not upper) 8 bits (the LSBs) of each sample to zero. So in hex format you would have 00 xx xx xx 00 xx xx xx ... where xx are arbitrary hex bytes.
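A sketch of the trick with Python's wave module: shift each 24-bit sample left by 8 so the low byte is zero, then write 4-byte frames (the sample value and rate are hypothetical):

```python
import struct
import wave

sample_24bit = -1234567              # a signed 24-bit sample value
sample_32bit = sample_24bit << 8     # low 8 bits become 00, as described above

w = wave.open("out.wav", "wb")
w.setnchannels(1)
w.setsampwidth(4)                    # 4 bytes per sample: 32-bit container
w.setframerate(44100)
w.writeframes(struct.pack("<i", sample_32bit))
w.close()
```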
0
0
0
0
2016-09-23T12:38:00.000
1
0
false
39,660,973
0
0
1
1
Is it possible to generate a wav file in Python with 24-bit depth but a sample width of 4 bytes, not 3 (3x8=24)? The idea is to use 32-bit depth, so that a sample width of 4 (4x8=32) can be used, but I would try to make the upper bits all ones (1), so that it behaves like 24-bit depth. I'm open to suggestions. Thank you.
How can a class hold an array of classes in django
39,679,214
3
0
121
0
python,django
You're confusing two different things here. A class can easily have an attribute that is a list which contains instances of another class, there is nothing difficult about that. (But note that there is no way in which a Message should extend MessageBox; this should be composition, not inheritance.) However then you go on to talk about Django models. But Django models, although they are Python classes, also represent tables in the database. And the way you represent one table containing a list of entries in another table is via a foreign key field. So in this case your Message model would have a ForeignKey to MessageBox. Where you put the send method depends entirely on your logic. A message should probably know how to send itself, so it sounds like the method would go there.
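A sketch of the composition described above; the field names are hypothetical:

```python
from django.db import models

class MessageBox(models.Model):
    owner = models.ForeignKey('auth.User')

class Message(models.Model):
    box = models.ForeignKey(MessageBox, related_name='messages')
    body = models.TextField()

    def send(self):
        """A message knows how to send itself (delivery logic lives here)."""
        pass

# box.messages.all() is the "list of messages" that the box holds
```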
0
0
0
0
2016-09-24T17:38:00.000
1
0.53705
false
39,679,167
0
0
1
1
I have been having trouble using Django. Right now, I have a MessageBox class that is supposed to hold messages, and a Message class that extends it. How do I make it so MessageBox will hold Messages? Something else that I cannot figure out is how the classes are supposed to interact. For example, I have a User that can send messages. Should I call its method to call a method in MessageBox to send a message, or can I have a method in User to make a message directly? My teacher tries to accentuate cohesion and coupling, but he never talks about how to implement this in Django, or discusses Django at all. Any help would be appreciated.
Proper model defination in django for a widget manager
39,692,799
1
0
27
0
python,django,django-models
If I understood your description correctly, you want a relationship where there can be many emailWidget or TextWidget instances for one instance of widgetManager. What you can do in this case is add a ForeignKey field for widgetManager to emailWidget and TextWidget. This way, you can have many instances of the widgets while they refer to the same manager. I think you may have confused inheritance with model relationships when you said you want to extend widgets from a base class. Perhaps I'm wrong? I'm not sure what you meant about the order of the widgets being important either.
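A sketch of the ForeignKey idea, with an explicit ordering field to address the "order is important" requirement (all names hypothetical):

```python
from django.db import models

class WidgetManager(models.Model):
    name = models.CharField(max_length=100)

class EmailWidget(models.Model):
    manager = models.ForeignKey(WidgetManager, related_name='email_widgets')
    position = models.PositiveIntegerField()  # makes widget order explicit

    class Meta:
        ordering = ['position']
```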
1
0
0
0
2016-09-25T20:46:00.000
1
1.2
true
39,691,679
0
0
1
1
I have a model called widgetManager and 2 widget models called emailWidget and TextWidget. A single instance of widgetManager can have multiple instances of emailWidget and TextWidget. How can this be achieved, with the following in mind: 1) until now I only have two widget types, but there can be more in future; 2) the order of the widgets is very important. I have tried adding two many-to-many relations in widgetManager, but that seems impractical and not the best way to go, because of the first condition. What I have in mind is that maybe I can somehow make a base widget class and extend all the widgets from that class, but I am not sure about that. It would be super helpful if someone could point me in the right direction. Thanks in advance.
Django 1.8 startup delay troubleshooting
39,736,367
1
0
268
0
python,django
A partial answer. After some time with the WingIDE debugger, and some profiling with cProfile, I have located the main CPU-hogging issue. During initial Django startup there's a cascade of imports, in which the module validators.py prepares some compiled regular expressions for later use. One in particular, URLValidator.regex, is complicated and also involves five instances of the unicode character set (variable ul). This causes re.compile to perform a large amount of processing, notably in sre_compile.py _optimize_charset() and in a large number of calls to the fixup() function. As it happens, the particular combination of calls and data structures apparently hits a spot of special slowness in the WingIDE 6.0b2 debugger. It's considerably faster in the WingIDE 5.1 debugger (though still much slower than when run from the command line). Not sure why yet, but Wingware is looking into it. This doesn't explain the occasional slowness when launched from the command line on Windows; there's an outside chance that was just waiting for a sleeping drive to wake up. Still observing.
0
0
0
0
2016-09-26T10:19:00.000
1
0.197375
false
39,700,254
0
0
1
1
I'm trying to discover the cause of delays in Django 1.8 startup, especially, but not only, when run in a debugger (WingIDE 5 and 6 in my case). Minimal test case: the Django 1.8 tutorial "poll" example, completed just to the first point where 'manage.py runserver' works. All default configuration, using SQLite. Python 3.5.2 with Django 1.8.14, in a fresh venv. From the command line, on Linux (Mint 18) and Windows (7-64), this may run as fast as 2 seconds to reach the "Starting development server" message. But on Windows it sometimes takes 10+ secs. And in the debugger on both machines, it can take 40 secs. One specific issue: by placing print statements at the beginning and end of django/__init__.py setup(), I note that this function is called twice before the "Starting..." message, and again after that message; the first two calls contribute half the delay each. This suggests that Django is getting set up three times. What is the purpose of that, or does it indicate a problem? (I did find that I could get rid of one of the first two setup() calls using the runserver --noreload option. But why does it happen in the first place? And there's still a setup() call after the "Starting..." message.) To summarize the question: -- Any insights into what might be responsible for the delay? -- Why does Django need to set up three times (or twice, even with --noreload)?
How can I out put an Excel file as Email attachment in SAP CMC?
39,727,668
1
0
744
1
python,excel,email,sap,business-objects
It's kind of hack-ish, but it can be done. Have the program (exe) write out the bytes of the Excel file to standard output. Then configure the program object for email destination, and set the filename to a specific name (ex. "whatever.xlsx"). When emailing a program object, the attached file will contain the standard output/error of the program. Generally this will just be text but it works for binary output as well. As this is a hack, if the program generates any other text (such as error message) to standard out, it will be included in the .xlsx file, which will make the file invalid. I'd suggest managing program errors such that they get logged to a file and NOT to standard out/error. I've tested this with a Java program object; but an exe should work just as well.
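A sketch of the stdout trick in Python 2 (per the question's environment); note that on Windows, stdout must be switched to binary mode so the workbook bytes aren't mangled (file name hypothetical):

```python
import os
import sys

# On Windows, put stdout into binary mode so newline bytes aren't rewritten
if sys.platform == "win32":
    import msvcrt
    msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)

with open("report.xlsx", "rb") as f:
    sys.stdout.write(f.read())   # CMC captures standard output as the attachment body
```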
0
0
0
0
2016-09-27T13:50:00.000
1
0.197375
false
39,726,495
0
0
1
1
I have been trying to schedule a report in SAP BO CMC. This report was originally written in Python and built into a .exe file. This .exe application runs and saves the report as an .xlsx file in a local folder. I want to utilize the convenient scheduling functions in SAP BO CMC to send the report in emails. I tried creating a "Local Program" in CMC and linking it to the .exe file, but you can easily imagine the problem I am faced with: the application puts the file in the folder as usual, but CMC won't be able to grab the generated Excel file. Is there a way to rewrite the Python program a bit so that the output is not a file in some folder, but an object that CMC can attach to the emails? I have been scheduling Crystal reports in CMC and this happens naturally; the Crystal output can be sent as an attachment to the email. I wonder if something similar could work for a .exe, and how? Kindly share your thoughts. Thank you very much! P.S. I don't think it's possible to rewrite the report in Crystal, as the data needs to be manipulated based on inputs from different data sources; that's where Python comes in to help. And I hope I don't need to make the program handle the emailing itself and schedule it in Windows' scheduled tasks; that's a last option, as it would be too inconvenient to maintain, and we don't get access to the server easily.
Why Flask Migrate doesn't create an empty migration file?
39,761,658
4
1
1,587
1
python,flask-sqlalchemy,flask-migrate
If you have made no changes to your models since the current migration, but a non-empty migration file is generated, it suggests that for some reason your models became out of sync with the database, and the contents of this new migration are just the things that are mismatched. If the migration contains code that drops some constraints and adds some other ones, it makes me think that the constraint names have probably changed, or maybe you upgraded SQLAlchemy to a newer version that generates constraints with different names.
0
0
0
0
2016-09-28T10:20:00.000
1
1.2
true
39,744,688
0
0
1
1
I am using Flask, Flask-SQLAlchemy and Flask-Migrate to manage my models. I just realized that in my latest database state, when I create a new migration file with python manage.py db migrate -m 'test migration', it does not create an empty migration file. Instead, it tries to create and drop several unique key and foreign key constraints. Any ideas why it behaves like this?
How to point blog to menu item in Mezzanine?
47,832,443
0
0
59
0
python,django,mezzanine
Add a rich text page, call it Blog or whatever you want, then in the Meta data group, in the URL field, add /blog/ or whatever the URL of the main blog app is. Mezzanine will match the URL with the page and will add the Page object to the rendering context, so you can use it in templates.
0
0
0
0
2016-09-28T13:34:00.000
1
0
false
39,749,107
0
0
1
1
How do you point blogs to a menu item in Mezzanine? I am able to point my blogs to Home using urls.py but how about to page types like link and richtextpage?
Automatically running app .py in in Heroku
39,754,555
0
0
43
0
python,django,git,heroku
Not sure but try: heroku run --app cghelper python bot.py &
0
1
0
1
2016-09-28T16:45:00.000
1
0
false
39,753,285
0
0
1
1
I have created a bot for my website and I currently host it on heroku.com. I run it by executing the command heroku run --app cghelper python bot.py. This executes the command perfectly through CMD and runs that specific .py file in my GitHub repo. The issue is that when I close the CMD window, this stops bot.py. How can I get this to run automatically? Thanks
Django: Request timeout for long-running script
39,775,664
2
2
3,180
0
python,django,python-3.x,pythonanywhere,django-1.9
We don't change the request timeout for individual users on PythonAnywhere. In the vast majority of cases, a request that takes 5 min (or even, really, 1 min) indicates that something is very wrong with the app.
0
0
0
0
2016-09-28T17:41:00.000
2
1.2
true
39,754,283
0
0
1
2
I have a webpage made in Django that feeds data from a form to a script that takes quite a long time to run (1-5 minutes) and then returns a detail view with the results of that script. My problem is that I'm getting a request timeout. Is there a way to increase the time allowed before a timeout, so that the script can finish? [I have a spinner to let users know that the page is loading.]
Django: Request timeout for long-running script
39,754,475
0
2
3,180
0
python,django,python-3.x,pythonanywhere,django-1.9
Yes, the timeout value can be adjusted in the web server configuration. Does anyone else but you use this page? If so, you'll have to educate them to be patient and not click the Stop or Reload buttons on their browser.
0
0
0
0
2016-09-28T17:41:00.000
2
0
false
39,754,283
0
0
1
2
I have a webpage made in Django that feeds data from a form to a script that takes quite a long time to run (1-5 minutes) and then returns a detail view with the results of that script. My problem is that I'm getting a request timeout. Is there a way to increase the time allowed before a timeout, so that the script can finish? [I have a spinner to let users know that the page is loading.]
How can I order elements in a window in python apache beam?
39,776,373
6
3
1,665
0
python,google-cloud-dataflow,dataflow,apache-beam
There is not currently built-in value sorting in Beam (in either Python or Java). Right now, the best option is to sort the values yourself in a DoFn like you mentioned.
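A minimal sketch of sorting per key and window in a DoFn, applied after a GroupByKey; the element structure (timestamp, payload) is hypothetical:

```python
import apache_beam as beam

class SortValuesByTimestamp(beam.DoFn):
    def process(self, element):
        key, values = element                          # values: the grouped iterable
        yield key, sorted(values, key=lambda v: v[0])  # assumes (timestamp, payload) pairs

# Usage in a pipeline:
# ... | beam.GroupByKey() | beam.ParDo(SortValuesByTimestamp())
```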
0
1
0
0
2016-09-29T03:04:00.000
2
1.2
true
39,760,733
0
0
1
1
I noticed that the Java Apache Beam SDK has the class groupby.sortbytimestamp; does Python have that feature implemented yet? If not, what would be the way to sort elements in a window? I figure I could sort the entire window in a DoFn, but I would like to know if there is a better way.
Flask dev server limits
39,767,391
0
0
412
0
python,flask,request
Besides performance, you want an outward-facing service (like a web server) to be as secure as possible. The Flask development server is not developed with high security as a goal, so there are probably security-relevant bugs.
0
0
0
0
2016-09-29T09:57:00.000
1
0
false
39,767,160
0
0
1
1
I implemented a REST API using Flask and I am wondering what the limits of the dev server are. I mean, why invest time and money to deploy the API on a production server when the dev server can support the traffic? To avoid the question being marked as a duplicate: I am not asking about security risks; I want to know the limits of the Flask dev server in terms of requests/second. Thanks in advance.
Flask doesn't seem to recognize file changes
41,841,741
-1
3
1,804
0
python,flask
I was having a similar issue and deleting the .pyc files solved it for me.
0
0
0
0
2016-09-29T17:43:00.000
2
-0.099668
false
39,776,791
0
0
1
1
A little background: I've been working on this project for about six months now and it's been running on Flask the whole time. Everything has been fine, multiple versions of the backend have been deployed live to support an app that's been in production for months now. The development cycle involves writing everything locally and using Flask-Script's runserver command to test everything locally on localhost:8080 before deploying to a dev server and then finally to the live server. The Problem: The other day my local flask instance, which runs on localhost:8080 apparently stopped respecting my local files. I tried adding a new view (with a new template) and I got a 404 error when trying to view it in my browser. I then tried making some test changes to one of the existing pages by adding a few extra words to the title. I restarted flask and none of those changes appeared. I then went as far as deleting the entire views.py file. After restarting flask again, much to my dismay, I could still view the pages that were there originally (i.e. before this behavior started). Finally, I made some changes to the manage.py file, which is where I put all of the Flask-Script commands, and they weren't recognized either. It's as if flask started reading from a cached version of the filesystem that won't update (which very well might be the case but I have no idea why it started doing this or how to fix the issue). FYI: Browser caching shouldn't be an issue b/c I have the dev tools open with caching disabled. Plus the fact that changes to manage.py aren't being noticed shouldn't have anything to do with the browser.
how to generate a responsive PDF with Django?
39,792,862
2
1
252
0
python,html,django,pdf-generation,weasyprint
PDF is not built to be responsive; it is built to display the same no matter where it is viewed. As @alxs pointed out in a comment, there are a few features that PDF viewing applications have added to simulate PDFs being responsive. Acrobat's Reflow feature is the best example of this that I am aware of, and even it struggles with most PDFs that users come across in the wild. One of the components (if not the only one) that matters is that, for a PDF to be useful in Acrobat's Reflow mode, the PDFs you create must contain structure information; this is called a Tagged PDF. A Tagged PDF contains content that has been marked, similar to HTML tags, where, for example, the text that makes up a paragraph is tagged in the PDF as being a paragraph. A number of PDF tools (for creation or viewing) do not interpret the structure of a PDF, though.
0
0
0
0
2016-09-29T22:02:00.000
1
0.379949
false
39,780,715
0
0
1
1
How do I generate a responsive PDF with Django? I want to generate a PDF with Django, but I need it to be responsive; that is, the text of the PDF has to adapt so as not to leave empty space. For example, for an agreement whose text changes, I need the text to adapt to the space available on the page.
Which is the better location to compress images? In the browser or on the server?
39,805,099
1
0
63
0
javascript,python,html,django,image-compression
I advise you to compress in the browser, in order to: avoid loading the server with CPU- and RAM-heavy calculations (as numerous as the number of clients), and reduce the bandwidth needed when transferring images over the network.
0
0
0
0
2016-10-01T09:32:00.000
2
0.099668
false
39,805,033
0
0
1
1
I have a Django project and I allow users to upload images. I don't want to limit the image upload size for users, but I want to compress the images after they select them, and then store them. I want to understand which is better: compressing with JavaScript in the browser, or on the backend server using Python libraries. It would also be helpful if links could be provided for implementing the better approach.
How to go about incremental scraping large sites near-realtime
39,805,342
1
0
265
0
python,postgresql,web-scraping,scrapy
For example: I have a site with 100 pages and 10 records each. So I scrape page 1, and then go to page 2. But on fast growing sites, at the time I do the request for page 2, there might be 10 new records, so I would get the same items again. Nevertheless I would get all items in the end. BUT next time scraping this site, how would I know where to stop? I can't stop at the first record I already have in my database, because this might be suddenly on the first page, because there a new reply was made. Usually each record has a unique link (permalink) e.g. the above question can be accessed by just entering https://stackoverflow.com/questions/39805237/ & ignoring the text beyond that. You'll have to store the unique URL for each record and when you scrape next time, ignore the ones that you already have. If you take the example of tag python on Stackoverflow, you can view the questions here : https://stackoverflow.com/questions/tagged/python but the sorting order can't be relied upon for ensuring unique entries. One way to scrape would be to sort by newest questions and keep ignoring duplicate ones by their URL. You can have an algorithm that scrapes first 'n' pages every 'x' minutes until it hits an existing record. The whole flow is a bit site specific, but as you scrape more sites, your algorithm will become more generic and robust to handle edge cases and new sites. Another approach is to not run scrapy yourself, but use a distributed spider service. They generally have multiple IPs and can spider large sites within minutes. Just make sure you respect the site's robots.txt file and don't accidentally DDoS them.
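A sketch of the permalink-based skip logic inside a Scrapy spider; the start URL, selectors and the storage helper are all hypothetical:

```python
import scrapy

def load_known_urls():
    """Hypothetical helper: permalinks already stored in PostgreSQL."""
    return set()

class BoardSpider(scrapy.Spider):
    name = "board"
    start_urls = ["http://example-board.com/?sort=newest"]  # hypothetical

    def __init__(self, *args, **kwargs):
        super(BoardSpider, self).__init__(*args, **kwargs)
        self.seen = load_known_urls()

    def parse(self, response):
        for record in response.css("div.record"):            # hypothetical selector
            url = record.css("a::attr(href)").extract_first()
            if url in self.seen:
                continue                                      # already stored: skip
            yield {"url": url}
```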
0
0
1
0
2016-10-01T09:56:00.000
1
0.197375
false
39,805,237
0
0
1
1
I want to scrape a lot (a few hundred) of sites, which are basically like bulletin boards. Some of these are very large (up to 1.5 million records) and also growing very quickly. What I want to achieve is: scrape all the existing entries, and scrape all the new entries in near real-time (ideally around 1-hour intervals or less). For this we are using Scrapy and saving the items in a PostgreSQL database. The problem right now is: how can I make sure I get all the records without scraping the complete site every time? (That would not be very aggressive traffic-wise, but it would also not be possible to complete within 1 hour.) For example: I have a site with 100 pages and 10 records each. So I scrape page 1 and then go to page 2. But on fast-growing sites, by the time I request page 2, there might be 10 new records, so I would get the same items again. Nevertheless, I would get all items in the end. BUT next time I scrape this site, how would I know where to stop? I can't stop at the first record I already have in my database, because that record might suddenly be on the first page, because a new reply was made to it. I am not sure if I got my point across, but tl;dr: how do I fetch fast-growing BBSes in an incremental way, getting all the records but fetching only new records each time? I looked at Scrapy's resume function and also at Scrapinghub's deltafetch middleware, but I don't know if (and how) they can help to overcome this problem.
Automate file downloading using a chrome extension
39,837,450
0
0
282
0
python,automation,imacros
There is a Python package called mechanize. It helps you automate processes that can be done in a browser, so check it out. I think mechanize should give you all the tools required to solve the problem.
0
0
1
0
2016-10-03T17:11:00.000
1
0
false
39,836,893
0
0
1
1
I have a .csv file with a list of URLs I need to extract data from. I need to automate the following process: (1) Go to a URL in the file. (2) Click the chrome extension that will redirect me to another page which displays some of the URL's stats. (3) Click the link in the stats page that enables me to download the data as a .csv file. (4) Save the .csv. (5) Repeat for the next n URLs. Any idea how to do this? Any help greatly appreciated!
How to detect if script ran from Django or command prompt?
39,844,257
0
2
219
0
python,django
Explicit is better than implicit. Wrap your interactivity in a function that's called only if the __name__ == "__main__" check passes. From the Django side, just use the module as a library. Most ways of detecting how a script was run are semi-magical and hence flaky.
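A sketch of the separation described above (Python 2, per the question; function names hypothetical):

```python
def do_work(params):
    """Pure logic: safe to import and call from Django."""
    pass

def run_interactive():
    """Terminal-only wrapper around the same logic."""
    params = raw_input("Enter parameters: ")
    do_work(params)

if __name__ == "__main__":
    run_interactive()   # only reached when run from the command line
```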
0
0
0
0
2016-10-03T21:31:00.000
3
0
false
39,840,736
0
0
1
1
I have a Python script that pauses for user input (using raw_input). Recently I created a Django web UI for this script. Now when I execute the script via Django, it pauses as it waits for input in the backend. How can I determine whether the script was run from Django or from a terminal/cmd/etc.? I don't want to maintain two streams of code, one for the web and another for the terminal.
Is it possible to use the selected lines of a one2many list in a function?
39,851,575
0
1
179
0
python,openerp,odoo-8
You can get these selected record ids in ids instead of active_ids.
0
0
0
0
2016-10-04T11:34:00.000
1
0
false
39,851,220
1
0
1
1
I've been using the module "web_o2m_delete_multi", which lets me select multiple lines in a one2many list view and delete them all. Is there a way to use the selected lines in a Python function? I tried active_ids but it's not working.
Should conda, or conda-forge be used for Python environments?
57,060,370
7
195
106,246
0
python,anaconda,conda
The conda-forge channel is where you can find packages that have been built for conda but are not yet part of the official Anaconda distribution. Generally, you can use any of them.
0
0
0
0
2016-10-04T16:19:00.000
4
1
false
39,857,289
1
0
1
1
Conda and conda-forge are both Python package managers. What is the appropriate choice when a package exists in both repositories? Django, for example, can be installed with either, but the difference between the two is several dependencies (conda-forge has many more). There is no explanation for these differences, not even a simple README. Which one should be used? Conda or conda-forge? Does it matter?
Asking for username and password from XDB
40,708,891
0
0
809
0
python,web.py,cx-oracle
XDB is an Oracle database component. It would appear that on your first PC you're able to log on to the database automatically, which is why you're not prompted. The second PC isn't able to, so you're prompted. Compare using SQL*Plus (or another Oracle client) from your two PCs and configure PC #2 so that it won't require a login, or modify your cx_Oracle connect() call to provide the correct connection parameters (user, password, dsn, etc.).
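A sketch of passing explicit credentials to cx_Oracle so the second PC doesn't fall back to whatever is prompting; the credentials and DSN below are hypothetical:

```python
import cx_Oracle

# Hypothetical credentials/DSN -- supply the ones that work in SQL*Plus
conn = cx_Oracle.connect(user="scott", password="tiger", dsn="dbhost:1521/orcl")
```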
0
0
1
0
2016-10-05T08:30:00.000
1
0
false
39,869,000
0
0
1
1
I have a web service (web.py + cx_Oracle), and I call it at localhost:8080/... On the local PC it works. But after installing it on a second PC for testing purposes, it is not working there, even though all versions are the same. On the second PC the browser asks for a username and password from XDB. What is XDB, and why does it ask only on the second PC? On the first PC everything works fine and it does not ask for a username and password. Can someone explain to me what is going on?
Disable checkbox in django admin if already checked
39,869,931
0
0
917
0
python,django,django-admin
There is no built-in solution to this problem, if you want the fields to display dynamically you will always need a custom javascript/ajax solution! You might be able to hack the admin view and template to conditionally show/not show widgets for a field, but if you want to do it dynamically based on user behaviors in the admin, you'll be using javascript. It's not so terrible, though. At least the Django admin templates have model- and instance-specific ids to give you granular control over your show/hide behavior.
0
0
0
0
2016-10-05T09:03:00.000
2
0
false
39,869,681
0
0
1
1
I have a simple but, for me, problematic question. How can I disable a checkbox if the input is already filled/checked? I must disable some fields after they are first filled. Thank you for all your ideas. Sierran
Is there a way to get the sonar result per class or per module
39,873,031
2
1
143
0
python,sonarqube
SonarQube doesn't know the concept of "class". This is a logical element, whereas SonarQube manages only "physical" components like files or folders. The consequence is that the Web API allows you to query only components that are "physical".
0
0
0
1
2016-10-05T10:34:00.000
1
0.379949
false
39,871,632
0
0
1
1
I want to get the Sonar results in a class-wise or modularized format. I am using Python and the Sonar web API. Apart from the basic APIs, are there any other APIs which give me the results per class?
Best practice for sequential execution of group of tasks in Celery
39,879,475
0
0
361
0
django,celery,python-3.5
I would use a model. The user selects the tasks and orders them, creating records in the table. A celery task runs and executes the tasks from the table in the specified order.
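A sketch of the model-driven execution described above; the model, status fields and dispatch helper are all hypothetical:

```python
from celery import shared_task

@shared_task
def run_task_group(group_id):
    # GroupItem is a hypothetical model holding the user's ordered selection
    items = GroupItem.objects.filter(group_id=group_id).order_by("position")
    for item in items:
        item.status = "running"
        item.save(update_fields=["status"])
        execute(item)                         # hypothetical per-task dispatch
        item.status = "done"                  # the progress page polls these rows
        item.save(update_fields=["status"])
```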
0
1
0
0
2016-10-05T11:35:00.000
1
1.2
true
39,872,909
0
0
1
1
I have a page that allows a user to select tasks which should be executed in a selected order, one by one; it thus creates a group of tasks, and a user can create several such groups. For each group I need to make it possible to monitor the progress of its tasks. I've looked into several things like chain, chord and group, but they seem very tricky to me, and I don't see any way to monitor each task's progress. What's a good solution for this kind of problem?
502 Bad Gateway nginx/1.1.19 on django
39,879,669
1
0
1,426
0
python,django,nginx
Error 502 Bad Gateway means that the NGINX server used to access your site couldn't communicate properly with the upstream server (your application server). This can mean that either or both of your NGINX server and your Django Application server are configured incorrectly. Double-check the configuration of your NGINX server to check it's proxying to the correct domain/address of your application server and that it is otherwise configured correctly. If you're sure this isn't the issue then check the configuration of your application server. Are you able to connect directly to the application server's address? If you are able to log in to the server running the application, you can try localhost:<port> using your app's port number to connect directly. You can try it with curl to see what response code you get back.
0
0
0
0
2016-10-05T16:13:00.000
1
0.197375
false
39,879,034
0
0
1
1
I am new to this. I took an image of a running Django application and spawned a new VM that points to a different database, but I am getting "502 Bad Gateway nginx/1.1.19". When I test in development mode it works fine, but not otherwise. I looked into /var/log/nginx/access.log and error.log but found nothing there. Any help would be appreciated.
django migrate failing after switching from sqlite3 to postgres
40,100,350
2
2
955
1
python,django,postgresql,django-models,sqlite
This may help you: I think you have pre-existing migration files (generated for the SQLite database). Now you have changed the database configuration, but Django is still looking for the existing tables according to the migration files you have (generated for the previous database). It's better to delete all the migration files in your app's migrations folder and migrate again by running python manage.py makemigrations and python manage.py migrate; that may work fine.
0
0
0
0
2016-10-05T17:06:00.000
1
1.2
true
39,879,939
0
0
1
1
I have been developing a Django project using sqlite3 as the backend and it has been working well. I am now attempting to switch the project over to use postgres as the backend but running into some issues. After modifying my settings file, setting up postgres, creating the database and user I get the error below when running manage.py migrate django.db.utils.ProgrammingError: relation "financemgr_rate" does not exist financemgr is an app within the project. rate is a table within the app. If I run this same command but specify sqlite3 as my backend it works fine. For clarity I will repeat: Environment Config1 Ubuntu 14.04, Django 1.10 Settings file has 'ENGINE': 'django.db.backends.sqlite3' Run manage.py migrate Migration runs and processes all the migrations successfully Environment Config2 Ubuntu 14.04, Django 1.10 Settings file has 'ENGINE': 'django.db.backends.postgresql_psycopg2' Run manage.py migrate Migration runs and gives the error django.db.utils.ProgrammingError: relation "financemgr_rate" does not exist Everything else is identical. I am not trying to migrate data, just populate the schema etc. Any ideas?
django-tables2 flooding database with queries
39,882,505
2
3
348
1
python,django,django-tables2
I'm posting this as a future reference for myself and others who might have the same problem. After searching for a bit I found out that django-tables2 was sending a single query for each row. The query was something like SELECT * FROM "table" LIMIT 1 OFFSET 1 with an increasing offset. I reduced the number of SQL calls by calling query = list(query) before creating the table and passing it the query. By evaluating the query in the Python view code, the table now works with the evaluated data and there is only one database call instead of hundreds.
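As a sketch of the fix described above (the model and table class names are hypothetical), the view evaluates the queryset once before handing it to the table:

```python
from django.shortcuts import render
from .models import Entry          # hypothetical model
from .tables import EntryTable     # hypothetical django-tables2 Table subclass

def entry_list(request):
    queryset = Entry.objects.all()
    rows = list(queryset)          # evaluate once: one SELECT instead of one per row
    table = EntryTable(rows)       # the table now works on in-memory data
    return render(request, "entries.html", {"table": table})
```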
0
0
0
0
2016-10-05T19:49:00.000
2
1.2
true
39,882,504
0
0
1
1
I'm using django-tables2 in order to show values from a database query, and everything works fine. I'm now using django-debug-toolbar and was looking through my pages with it, more out of curiosity than performance needs. When I looked at the page with the table, I saw that the debug toolbar registered over 300 queries for a table with a little over 300 entries. I don't think flooding the DB with so many queries is a good idea, even if there is no performance impact (at least not now). All the data should be coming from only one query. Why is this happening and how can I reduce the number of queries?
Skip a list of migrations in Django
39,891,704
1
4
1,632
0
python,django,django-models,django-migrations
The only way Django knows about applied migrations is through the migration history table. If there is no record of an applied migration, Django will think that the migration has not been applied; it does not check the real database state against the migration files.
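Building on that, one way to avoid running the manual command is to fake those migrations programmatically, e.g. from a deployment hook (a sketch; the app label is a placeholder):

```python
# Equivalent to: python manage.py migrate third_party_app 0003 --fake
from django.core.management import call_command

def fake_third_party_migrations():
    call_command("migrate", "third_party_app", "0003", fake=True)
```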
0
0
0
0
2016-10-06T08:16:00.000
3
0.066568
false
39,890,923
0
0
1
1
I have migrations 0001_something, 0002_something, 0003_something in a third-party app, and all of them are already applied to the database by my own app. I simply want to skip these three migrations. One option is to run the following command: python manage.py migrate <third_party_app_name> 0003 --fake. But I don't want to run this command manually. I was wondering whether there is any way to specify something in settings so that these migrations are skipped. I would simply run python manage.py migrate and it would automatically recognize that 3 migrations need to be faked. Or is there any way to always fake 0001, 0002 and 0003? If this were my own app, I could simply remove the migration files, but it is a third-party app installed via pip and I don't want to change that.
Inputs on how to achieve REST based interaction between Java and Python?
39,906,371
0
1
2,327
0
java,python,rest,api
Furthermore, in the future you might want to separate them onto different machines and use the network to communicate; you can use HTTP requests for that. Define a contract in Java for the output you will provide to your Python script (or any other language you will use) and send that output as JSON to your Python script. That way you can easily change the language later, as long as you send the same JSON.
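A minimal sketch of the Python side of such a JSON contract, using Flask (the endpoint path, field names, and processing function are all assumptions):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def do_work(text):
    # placeholder for the real processing the Java side delegates to Python
    return text[::-1]

@app.route("/process", methods=["POST"])
def process():
    payload = request.get_json()   # the JSON the Java side agreed to send
    return jsonify({"result": do_work(payload["text"])})

if __name__ == "__main__":
    app.run(port=5000)
```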
0
0
1
0
2016-10-06T21:51:00.000
2
0
false
39,906,167
0
0
1
1
I have a Java process which handles a REST API called from my program's UI. When I receive the API call, I end up calling (non-REST) Python script(s) which do a bunch of work and return the results, which are then returned as the API response. I want to convert this interaction of UI API -> Java -> calling Python scripts to be REST end to end, so that in time it becomes immaterial whether I am using Python or another language. Any inputs on the best way of making the call REST-based end to end?
How to modify deprecated imports for a reusable app?
39,910,769
1
1
29
0
python,django
Django has a strict backwards-compatibility policy. If the old import is raising a deprecation warning, then the new import already works in 1.9. You should just switch to it before you upgrade.
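If the reusable app must keep supporting older Django versions as well, the guarded import the asker mentions is the common pattern, and it is less hacky than it looks (a sketch):

```python
try:
    # old location, removed in Django 1.10
    from django.db.models.sql.aggregates import Aggregate
except ImportError:
    # new location, already available in Django 1.9
    from django.db.models.aggregates import Aggregate
```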
0
0
0
0
2016-10-07T00:57:00.000
1
0.197375
false
39,907,808
0
0
1
1
My project depends on an OSS reusable app, and that app includes a Django import which is deprecated in Django 1.10: from django.db.models.sql.aggregates import Aggregate is changing to: from django.db.models.aggregates import Aggregate We get a warning on Django 1.9, which will become an error on Django 1.10. This is blocking our upgrade, and I want to contribute a fix to the app so we can upgrade. One option would be to modify the requirements in setup.py so that Django 1.10 is required. But I'm sure my contribution would be rejected since it would break for everyone else. To maintain backwards compatibility, I can do the import as a try/except but that feels hacky. It seems like I need to do some Django version checking in the imports. Should I do a Django version check, which returns a string, convert that to a float, and do an if version > x? That feels hacky too. What's the best practice on this? Examples?
CPython 2.7 + Java
39,929,049
1
0
195
0
java,python,macos,python-2.7,cpython
If you have a lot of dependencies on Java/the JVM, you can consider using Jython. If you would like to develop a scalable/maintainable application, consider using microservices and keep the Java and Python components separate. If your call to Java is simple and it is easy to capture its output and failures, you can go ahead with running a system command to invoke the Java parts.
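For the command-line wrapping idea from the question, a hedged sketch of the Python side might be (the jar name and argument are hypothetical):

```python
import subprocess

def call_java(arg):
    # Invoke the Java CLI wrapper and capture its console output.
    out = subprocess.check_output(["java", "-jar", "mytool.jar", arg])
    return out.decode("utf-8").strip()

if __name__ == "__main__":
    print(call_java("some-input"))
```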
0
0
0
0
2016-10-07T05:54:00.000
2
1.2
true
39,910,350
0
0
1
1
My main program is written in Python 2.7 (on Mac) and needs to leverage some functionality written in Java 1.8. I think CPython cannot import a Java library directly (unlike Jython)? If there is no way to call Java from CPython, could I integrate it this way: wrap the Java function in a Java command-line application, have Python 2.7 call this Java application (e.g. using os.system), passing inputs as command-line parameters, and retrieve its console output? Regards, Lin
Installed Virtualenv and activating virtualenv doesn't work
53,875,262
1
17
97,105
0
python,django,virtualenv
I had installed Django 2 via pip3 install Django, but I was running python manage.py runserver instead of python3 manage.py runserver. Django 2 only works with python 3+.
0
0
0
0
2016-10-08T16:51:00.000
5
0.039979
false
39,934,906
1
0
1
1
I cloned my Django project from my GitHub account and activated the virtualenv using the famous command source nameofenv/bin/activate. When I run python manage.py runserver, it gives me an error saying: ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
Providing visibility of periodic changes to a database
39,941,551
1
0
46
1
python,filemaker
This is not a standard requirement and there is no easy way of doing this. The best way to track changes is a source control system like git, but that is not applicable to FileMaker Pro as the files are binary. You can try your approach, or you can add new records in FileMaker instead of updating them and flag them as current, or use only the latest record. There are some amazing people here, but you might want to take this to one of the FileMaker forums, as the FileMaker audience there is much larger than on SO.
0
0
0
0
2016-10-08T19:12:00.000
1
1.2
true
39,936,352
0
0
1
1
This is quite a general question, though I’ll give the specific use case for context. I'm using a FileMaker Pro database to record personal bird observations. For each bird on the national list, I have extracted quite a lot of base data by website scraping in Python, for example conservation status, geographical range, scientific name and so on. In day-to-day use of the database, this base data remains fixed and unchanging. However, once a year or so I will want to re-scrape the base data to pick up the most recent published information on status, range, and even changes in scientific name (that happens). I know there are options such as PyFilemaker or bBox which should allow me to write to the FileMaker database from Python, so the update mechanism itself shouldn't be a problem. It would be rather dangerous simply to overwrite all of last year’s base data with the newly scraped data, and I'm looking for general advice as to how best to provide visibility for the changes before manually importing them. What I have in mind is to use pandas to generate a spreadsheet using the base data, and to highlight the changed cells. Does that sound a sensible way of doing it? I suspect that this may be a very standard requirement, and if anybody could help out with comments on an approach which is straightforward to implement in Python that would be most helpful.
Using GET and POST to add data to a database in Django
39,936,550
0
0
467
0
python,django,raspberry-pi
You can use the urllib module or the requests module in Python to send a POST request to your Django server, and you can have a Django view respond to that POST request. Inside this view, you can have a method that adds the data sent from your Raspberry Pi program into the database on the Django server side. Therefore you don't need a separate GET method to handle adding data into the database in this case.
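A rough sketch of both sides (the URL, field names, and the Reading model are assumptions):

```python
# --- on the Raspberry Pi ---
import requests

requests.post("http://my-server/api/readings/",
              data={"sensor": "temp1", "value": 21.5})

# --- on the Django server, in views.py ---
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from .models import Reading   # hypothetical model

@csrf_exempt   # the Pi won't have a CSRF token
def readings(request):
    if request.method == "POST":
        Reading.objects.create(sensor=request.POST["sensor"],
                               value=float(request.POST["value"]))
        return JsonResponse({"ok": True})
    return JsonResponse({"error": "POST only"}, status=405)
```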
0
0
0
0
2016-10-08T19:28:00.000
1
0
false
39,936,494
0
0
1
1
I am working on a project that will have a raspberry PI collect data from a set of sensors and then send the data to a django server. I need the server to then take that data and add it to a database and perform ARIMA time series forecasting on the updated dataset every x seconds after a number of new entries are added. Can I use POST in the raspberry PI program to send the data to that url, and then use GET in a django view to add the incoming data into a database?
Django Rest Framework standalone?
39,953,931
1
1
323
0
python,django,django-rest-framework
There are some parts you can use without Django, though it might still need to be installed. It feels like this isn't the real question, though: why would you need DRF without Django?
0
0
0
0
2016-10-09T15:40:00.000
3
0.066568
false
39,945,389
0
0
1
2
Do I need to have a django website in order to use django rest framework, or can I use DRF by itself as a standalone app? Sorry, but it is not so obvious to me. Thanks for the help.
Django Rest Framework standalone?
52,072,780
-1
1
323
0
python,django,django-rest-framework
django rest framework is a wrapper around django for REST APIs; django is required for django rest framework.
0
0
0
0
2016-10-09T15:40:00.000
3
-0.066568
false
39,945,389
0
0
1
2
Do I need to have a django website in order to use django rest framework, or can I use DRF by itself as a standalone app? Sorry, but it is not so obvious to me. Thanks for the help.
Using different dbs on production and test environment
39,951,058
0
0
68
1
python,github,configuration,travis-ci,configuration-files
Let's take a Linux environment as an example. Often, the user-level configuration of an application is placed under your home folder as a dot file. So you can do something like this: in your git repository, track a sample configuration file, e.g. config.sample.yaml, and put the configuration structure there. When deploying, in either the test or the production environment, copy and rename this file as a dot-file, e.g. $HOME/.{app}.config.yaml, and read this file in your application. If you are developing a Python package, you can perform the file copy in setup.py. There are some advantages: you can always track structural changes to your configuration file; configuration is separated between the test and production environments; and it is more secure, since you do not need to put your database connection information in a public file. Hope this is helpful.
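Reading such a dot-file from Python could look like this (a sketch assuming YAML and the file name ~/.myapp.config.yaml, both of which are placeholders):

```python
import os
import yaml   # PyYAML: pip install pyyaml

def load_config():
    path = os.path.expanduser("~/.myapp.config.yaml")
    with open(path) as f:
        return yaml.safe_load(f)

config = load_config()
db_url = config["database"]["url"]   # hypothetical structure from config.sample.yaml
```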
0
0
0
0
2016-10-10T03:06:00.000
1
0
false
39,950,769
0
0
1
1
I want to use a test db in my test environment and the production db in the production environment for my Python application. How should I handle routing between the two dbs? Should I have an untracked config.yml file that contains the test db's connection string on my test server and the production db's connection string on the production server? I'm using github for version control and travis ci for deployment.
ipyparallel displaying "registration: purging stalled registration"
39,958,173
1
0
264
0
ipython,zeromq,pyzmq,ipython-parallel
If you are using --reuse, make sure to remove the files if you change settings. It's possible that it doesn't behave well when --reuse is given and you change things like --ip, as the connection file may be overriding your command-line arguments. When setting --ip=0.0.0.0, it may be useful to also set --location=a.b.c.d, where a.b.c.d is an IP address of the controller that you know is accessible to the engines. If registration works and subsequent connections don't, this may be due to a firewall only opening one port, e.g. 5900. The machine running the controller needs to have all the ports listed in the connection file open. You can specify these as a port range by manually entering port numbers in the connection files.
0
1
0
0
2016-10-10T09:13:00.000
1
1.2
true
39,954,942
0
0
1
1
I am trying to use the ipyparallel library to run an ipcontroller and ipengine on different machines. My setup is as follows: Remote machine: Windows Server 2012 R2 x64, running an ipcontroller, listening on port 5900 and ip=0.0.0.0. Local machine: Windows 10 x64, running an ipengine, listening on the remote machine's ip and port 5900. Controller start command: ipcontroller --ip=0.0.0.0 --port=5900 --reuse --log-to-file=True Engine start command: ipengine --file=/c/Users/User/ipcontroller-engine.json --timeout=10 --log-to-file=True I've changed the interface field in ipcontroller-engine.json from "tcp://127.0.0.1" to "tcp://" for ipengine. On startup, here is a snapshot of the ipcontroller log: 2016-10-10 01:14:00.651 [IPControllerApp] Hub listening on tcp://0.0.0.0:5900 for registration. 2016-10-10 01:14:00.677 [IPControllerApp] Hub using DB backend: 'DictDB' 2016-10-10 01:14:00.956 [IPControllerApp] hub::created hub 2016-10-10 01:14:00.957 [IPControllerApp] task::using Python leastload Task scheduler 2016-10-10 01:14:00.959 [IPControllerApp] Heartmonitor started 2016-10-10 01:14:00.967 [IPControllerApp] Creating pid file: C:\Users\Administrator\.ipython\profile_default\pid\ipcontroller.pid 2016-10-10 01:14:02.102 [IPControllerApp] client::client b'\x00\x80\x00\x00)' requested 'connection_request' 2016-10-10 01:14:02.102 [IPControllerApp] client::client [b'\x00\x80\x00\x00)'] connected 2016-10-10 01:14:47.895 [IPControllerApp] client::client b'82f5efed-52eb-46f2-8c92-e713aee8a363' requested 'registration_request' 2016-10-10 01:15:05.437 [IPControllerApp] client::client b'efe6919d-98ac-4544-a6b8-9d748f28697d' requested 'registration_request' 2016-10-10 01:15:17.899 [IPControllerApp] registration::purging stalled registration: 1 And the ipengine log: 2016-10-10 13:44:21.037 [IPEngineApp] Registering with controller at tcp://172.17.3.14:5900 2016-10-10 13:44:21.508 [IPEngineApp] Starting to monitor the heartbeat signal from the hub every 3010 ms. 2016-10-10 13:44:21.522 [IPEngineApp] Completed registration with id 1 2016-10-10 13:44:27.529 [IPEngineApp] WARNING | No heartbeat in the last 3010 ms (1 time(s) in a row). 2016-10-10 13:44:30.539 [IPEngineApp] WARNING | No heartbeat in the last 3010 ms (2 time(s) in a row). ... 2016-10-10 13:46:52.009 [IPEngineApp] WARNING | No heartbeat in the last 3010 ms (49 time(s) in a row). 2016-10-10 13:46:55.028 [IPEngineApp] WARNING | No heartbeat in the last 3010 ms (50 time(s) in a row). 2016-10-10 13:46:55.028 [IPEngineApp] CRITICAL | Maximum number of heartbeats misses reached (50 times 3010 ms), shutting down. (There is a 12.5 hour time difference between the local machine and the remote VM) Any idea why this may happen?
Python: How to simulate a click using BeautifulSoup
39,964,037
3
0
5,581
0
python
You can't do what you want: Beautiful Soup is a text processor which has no way to run JavaScript.
0
0
1
0
2016-10-10T17:50:00.000
2
0.291313
false
39,963,972
0
0
1
2
I don't want to use Selenium since I don't want to open any browsers. The button triggers a JavaScript method that changes something in the page. I want to simulate a button click so I can get the "output" from it. Example (not what the button actually does): I enter a name such as "John", press the button, and it changes "John" to "nhoJ". I already managed to change the value of the input to John, but I have no clue how I could simulate a button click so I can get the output. Thanks.
Python: How to simulate a click using BeautifulSoup
39,964,061
0
0
5,581
0
python
BeautifulSoup is an HTML parser; you can't do such a thing with it. But if that button calls an API, you could make a request to that API, and I guess that would simulate clicking the button.
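In practice that means finding the request the button's JavaScript sends (e.g. in the browser's network tab) and replaying it; a sketch with an assumed URL and payload:

```python
import requests

# Replay the request the button would have triggered.
resp = requests.post("http://example.com/api/reverse",   # assumed endpoint
                     data={"name": "John"})
print(resp.status_code, resp.text)   # hopefully contains "nhoJ"
```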
0
0
1
0
2016-10-10T17:50:00.000
2
0
false
39,963,972
0
0
1
2
I don't want to use Selenium since I don't want to open any browsers. The button triggers a JavaScript method that changes something in the page. I want to simulate a button click so I can get the "output" from it. Example (not what the button actually does): I enter a name such as "John", press the button, and it changes "John" to "nhoJ". I already managed to change the value of the input to John, but I have no clue how I could simulate a button click so I can get the output. Thanks.
Recommender engine in python - incorporate custom similarity metrics
40,001,529
2
0
114
0
python,machine-learning,recommendation-engine,data-science
I would keep it simple and separate: Your focus is collaborative filtering, so your recommender should generate scores for the top N recommendations regardless of location. Then you can re-score using distance among those top-N. For a simple MVP, you could start with an inverse distance decay (e.g. final-score = cf-score * 1/distance), and adjust the decay function based on behavioral evidence if necessary.
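A minimal sketch of that re-scoring step (using 1/(1+d) instead of 1/d to avoid division by zero; the data is made up):

```python
def rescore(recommendations):
    """recommendations: list of (place, cf_score, distance_km) tuples."""
    rescored = [(place, cf_score / (1.0 + distance_km))   # inverse distance decay
                for place, cf_score, distance_km in recommendations]
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

top_n = [("bar A", 0.9, 5.0), ("club B", 0.8, 0.5), ("pub C", 0.7, 1.0)]
print(rescore(top_n))   # nearby places move up the list
```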
0
0
0
0
2016-10-11T01:29:00.000
1
1.2
true
39,969,168
0
0
1
1
I am currently building a recommender engine in python and I faced the following problem. I want to incorporate a collaborative filtering approach, its user-user variant. To recap, the idea is that we have information on different users and which items they liked (and, if applicable, which ratings these users assigned to items). When we have a new user who liked a couple of things, we just find users who liked the same items and recommend to this new user the items liked by those similar users. But I want to add a twist to it. I will be recommending places to users, namely 'where to go tonight'. I know user preferences, but I also want to incorporate the distance to each item I could recommend. The farther the place I am going to recommend to the user, the less attractive it should be. So in general I want to incorporate a penalty into the recommendation engine, and the amount of penalty for each place will be based on the distance from the user to the place. I tried to google whether anyone did something similar but wasn't able to find anything. Any advice on how to properly add such a penalty?
How to configure Django settings for different environments in a modular way?
39,981,423
1
0
780
0
python,django,wsgi,django-settings
Just set DJANGO_SETTINGS_MODULE as an environment variable pointing to your desired settings file. That way you won't have to change any other service's config files, and you don't even need to change the Django settings files.
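For example, keep the usual default in wsgi.py and let the environment variable win when it is set (a sketch):

```python
# In the shell of each environment, e.g.:
#   export DJANGO_SETTINGS_MODULE=core.dev_settings
import os

# setdefault only applies when the env variable is NOT already set,
# so the exported value takes precedence over this fallback:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "core.settings")
```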
0
0
0
0
2016-10-11T15:42:00.000
2
1.2
true
39,981,292
0
0
1
1
I have already searched the web on this, but the results don't really seem to apply to my case. I have 3 different config files - Dev, Staging, Prod (of course). I want to modularize settings properly without repetition, so I have made base_settings.py and I am importing it into dev_settings.py, stg_settings.py, etc. Problem - how do I invoke the scripts in each env properly with minimal changes? Right now, I'm doing this (taking the dev env as an example): python manage.py runserver --settings=core.dev_settings. This works so far, but I am not convinced how good a workaround this is, because wsgi.py and a couple of other services have os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core.settings'). I am looking to do something without changing the config files of other services. Thank you everyone in advance. PS - I've tried to be as clear as possible, but please excuse me if anything is unclear.
Schedule table updates in Django
40,009,025
1
1
752
0
python,django,django-models
I think this is likely best accomplished by writing a server-side Python script and adding a cron job.
0
0
0
0
2016-10-12T21:38:00.000
2
0.099668
false
40,008,788
0
0
1
1
How do you schedule updates to the contents of a database table in Django based on the time of day, e.g. every 5 minutes Django calls a REST API to update the contents of a table?
Apache2 server run script as specific user
40,065,573
0
0
206
0
php,python,apache
It looks like I could use suEXEC. It is an Apache module that is not installed by default because they really don't want you to use it; it can be installed via apt-get. That said, I found the real answer to my issue: heyu uses the serial ports to do its work, so I needed to add www-data to the dialout group and then reboot. This circumvented the need to run my code as me (I had already added myself to the dialout group a long time ago) in favor of properly changing the permissions. Thanks.
0
1
0
1
2016-10-13T01:00:00.000
2
0
false
40,010,657
0
0
1
1
I am using Ubuntu Server 12.04 to run an Apache2 web server. I am hosting several web pages, and most are working fine. One page runs a CGI script which mostly works (I have the Python code working outside Apache, building the HTML nicely). However, I am calling a home automation program (heyu) and it is returning different answers than when I run it from my user account. Is there a way I can: 1) call the heyu program from my Python script as a specific user (me) and leave the rest of the Python and CGI code alone; 2) configure apache2 to run the CGI code, as a whole, as me, leaving all the other pages unchanged (maybe using the sites-available part); 3) at least determine which user is running the CGI code, so maybe I can get heyu to be OK with that user? Thanks, Mark.
url encoding in python and sqlite web app
48,018,208
0
0
98
0
python,url,web-applications
I just used the urlencode filter with the title, something like {{ title|urlencode }} (note the pipe character before urlencode).
0
0
0
0
2016-10-13T04:10:00.000
2
1.2
true
40,012,153
0
0
1
1
I am new to Python and am trying to build a blog-like web app. My main problem is that I want the title of each post to be its link, which I would store in my database. I am using the serial number of each post as the URL, but it doesn't meet my needs. Any help is appreciated.
Confusion on async file upload in python
40,034,220
1
0
786
0
python,ajax,asynchronous
Asynchronous behavior applies to either side independently. Either side can take advantage of the capability to take care of several tasks as they become ready rather than blocking on a single task and doing nothing in the meantime. For example, servers do things asynchronously (or at least they should) while clients usually don't need to (though there can be benefits if they do and modern programming practices encourage that they do).
0
0
1
0
2016-10-14T02:33:00.000
1
1.2
true
40,034,010
1
0
1
1
So I want to implement async file upload for a website. It uses Python, with JavaScript on the frontend. After googling, there are a few great posts on this; however, the posts use different methods and I don't understand which one is the right one. Method 1: use an ajax POST to the backend. Comment: does it make a difference? I thought async has to be in the backend, not the front? So when the backend is writing files to disk, it will still be single threaded. Method 2: use celery or asyncio to upload the file in Python. Method 3: use a background thread to upload the file in Python. Any advice would be appreciated.
packaging django application and deploying it locally
40,051,673
0
1
184
0
python,django
This is possible. However, the client machine would need to be equipped with the correct technologies for this to work. When you launch a web app on a server (live), the server is required to have certain settings and installs. For example, for a Django web app the server must have a version of Django installed; hence whichever machine is running your web app must have Django installed. It presumably also needs to have the database. It might be quite a hassle, but it's possible. It's just like how, as a developer, you may have multiple people working on one project: they all need to have that project 'installed' on their devices so they can run it locally.
0
0
0
0
2016-10-14T20:36:00.000
2
0
false
40,051,602
0
0
1
2
I've never worked with Django before so forgive me if a question sounds stupid. I need to develop a web application, but I do not want to deploy it on a server. I need to package it, so that others would "install" it on their machine and run it. Why I want to do it this way? There are many reasons, which I don't want to go into right now. My question is: can I do it? If yes, then how?
packaging django application and deploying it locally
40,051,692
0
1
184
0
python,django
You can use a Python-to-executable packager with Django already bundled in. You can place the website files into the dist folder, or whatever folder contains the executable. Then you can compress it and share it with others (who have the same OS as you). For example: you have a script in Django (I'm too lazy to actually write one) and you want to share it with someone who doesn't have Python and Django on their computer.
0
0
0
0
2016-10-14T20:36:00.000
2
0
false
40,051,602
0
0
1
2
I've never worked with Django before so forgive me if a question sounds stupid. I need to develop a web application, but I do not want to deploy it on a server. I need to package it, so that others would "install" it on their machine and run it. Why I want to do it this way? There are many reasons, which I don't want to go into right now. My question is: can I do it? If yes, then how?
How is Django able to grant reserved port numbers?
40,063,068
1
1
115
0
python,django,port
Port 80 has no magical meaning, it is not "reserved" or "privileged" on your server (besides most likely requiring root privileges to access, as others have mentioned). It is just a regular port that was chosen to be a default for http, so you don't have to write google.com:80 every time in your browser, that's it. If you have no web server running such as apache or nginx which usually listen to that port, then port 80 is up for grabs. You can run django runserver on it, you can run a plain python script listening to it, whatever you like.
0
0
0
0
2016-10-15T06:17:00.000
2
1.2
true
40,055,676
0
0
1
2
Using the command python manage.py runserver 0.0.0.0:8000 we can host a Django server locally on any port. So a developer can use reserved and privileged port numbers, say python manage.py runserver 127.0.0.1:80. Now I am using port 80, the port defined for the HTTP protocol. Why does this not raise any issues, and how is this request granted?
How is Django able to grant reserved port numbers?
40,055,695
1
1
115
0
python,django,port
You should use a proper server instead of Django's test server such as nginx or apache to run the server in production on port 80. Running something like sudo python manage.py runserver 0.0.0.0:80 is not recommended at all.
0
0
0
0
2016-10-15T06:17:00.000
2
0.099668
false
40,055,676
0
0
1
2
Using the command python manage.py runserver 0.0.0.0:8000 we can host a Django server locally on any port. So a developer can use reserved and privileged port numbers, say python manage.py runserver 127.0.0.1:80. Now I am using port 80, the port defined for the HTTP protocol. Why does this not raise any issues, and how is this request granted?
Using other file names than models.py for Django models?
40,061,752
0
3
965
0
python,django,model
It depends on how many models you define. If you have only 1 to 5 model classes, just put them in a single file; if you have more than 5, I suggest splitting them across several files. But in my experience, when the models are spread over several files it becomes a little cumbersome when it comes to importing things.
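If you do split them, the usual trick is to turn models.py into a models package and re-export everything from its __init__.py so that external imports stay unchanged (a sketch with hypothetical file and class names):

```python
# myapp/models/__init__.py
# Files myapp/models/topic1.py and myapp/models/topic2.py hold the actual classes.
from .topic1 import Article, Tag   # hypothetical models
from .topic2 import Comment       # hypothetical model

# Callers can still write: from myapp.models import Article
```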
0
0
0
0
2016-10-15T16:33:00.000
2
0
false
40,061,555
0
0
1
1
When creating a reusable app, should I put all models I define into single file models.py or can I group the models into several files like topic1.py, topic2.py? Please describe all reasons pro and contra.
What is proper workflow for insuring "transactional procedures" in case of exceptions
40,071,369
1
0
28
0
python,django,exception,transactions
We use microservices in our company, and at least once a month one of our microservices is down for a while. We have a Transaction model for the payment process, with statuses for every step that happens before we send the product to the user. If something goes wrong or one of the connected microservices is down, we mark it with status=error and save it to the database. Then we use a cron job to find and finish those processes. Try something to begin with, and if it does not fit your needs, try something else.
0
0
0
0
2016-10-16T09:33:00.000
1
1.2
true
40,068,842
0
0
1
1
In programming web applications, Django in particular, sometimes we have a set of actions that must all succeed or all fail (in order to ensure a predictable state of some sort). Obviously, when we are working with the database we can use transactions, but in some circumstances these all-or-nothing constraints are needed outside of a database context (e.g. if a payment succeeds, we must send the product activation code or risk customer complaints, etc.). But let's say on some fateful day the send_code() function fails time and again due to a temporary network error (lasting an hour or more). Should I log the error and manually fix the problem, e.g. send the mail manually? Should I set up some kind of work queue, where failed tasks go back onto the end of the queue for a future retry? What if the logging/queueing systems also fail? (Am I worrying too much at this point?)
Is it possible to run program in python with additional HTTP server in infinite loop?
40,074,663
0
0
165
0
python,raspberry-pi
If I were facing a problem like this right now, I would do the following: 1) First I'd try to figure out whether I can use the web framework's event loop to execute the code communicating with the Raspberry Pi asynchronously (i.e. inside the event handlers). 2) If I failed to find a web framework extensible enough to do what I need, or if it turned out that the Raspberry Pi part can't be done asynchronously (e.g. it takes too long to execute), I would look into the difference between threads and processes in Python, which of the two I can use in my specific situation, and what tools can help me with that. This answer is as specific as the question (at the time of writing).
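A minimal sketch of the thread-based variant from point 2: the GPIO loop runs in a daemon thread while Flask answers HTTP requests (the GPIO work itself is stubbed out):

```python
import threading
import time

from flask import Flask

app = Flask(__name__)

def gpio_loop():
    while True:
        # poll/drive the Raspberry Pi GPIO pins here
        time.sleep(0.1)

@app.route("/")
def status():
    return "running"

if __name__ == "__main__":
    worker = threading.Thread(target=gpio_loop)
    worker.daemon = True   # exits together with the Flask process
    worker.start()
    app.run(host="0.0.0.0", port=5000)
```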
0
0
0
0
2016-10-16T19:14:00.000
1
0
false
40,074,378
0
0
1
1
I want to run a program in an infinite loop which handles GPIO on a Raspberry Pi while also serving requests in an infinite loop (as an HTTP server). Is it possible? I tried the Flask framework, but the infinite loop waits for requests and only then is my program executed.
Error: [Errno 71] Protocol error: pyvenv
40,120,623
0
0
1,972
0
python,django,python-3.x,vagrant,centos7
pyvenv-3.4 --without-pip name_of_environment worked; it looks like pip was not installed. Thanks for the help.
0
0
0
0
2016-10-17T12:13:00.000
1
0
false
40,086,091
1
0
1
1
I am using Centos7 with vagrant and virtualbox on windows10. I am trying to create pyvenv virtual environment to develop python web apps with django. I have installed python 3.4. However, when I type pyvenv-3.4 name_of_environment, it gives back an error Error: [Errno 71] Protocol error: 'lib' -> '/vagrant/django_apps/app1/name_of_environment/lib64' What is wrong?
GET variables with Jade in Django templates
40,091,767
0
0
241
0
python,django,pug
You could try href="{% static 'images/favicon.ico' %}?v=1", i.e. move the query string outside the static tag but keep it inside the href attribute.
0
0
0
0
2016-10-17T16:50:00.000
1
0
false
40,091,704
0
0
1
1
I use Jade (pyjade) with my Django project. I need to use the static template tag with a GET variable appended, something like the following: link(rel="shortcut icon", href="{% static 'images/favicon.ico?v=1' %}"). But I get /static/images/favicon.ico%3Fv%3D1 instead of /static/images/favicon.ico?v=1. Why does this happen and how can I fix it? Thanks in advance!
How to Make uWSGI die when it encounters an error?
40,096,953
2
1
221
0
python,uwsgi,supervisord
After an hour of searching, I finally found a way to do this. Just pass the --need-app argument when starting uWSGI, or add need-app = true in your .ini file, if you run things that way. No idea why this is off by default (in what situation would you ever want uWSGI to keep running when your app has died?) but so it goes.
0
1
0
0
2016-10-17T22:31:00.000
1
0.379949
false
40,096,695
0
0
1
1
I have my Python app running through uWSGI. Rarely, the app will encounter an error which makes it not be able to load. At that point, if I send requests to uWSGI, I get the error no python application found, check your startup logs for errors. What I would like to happen in this situation is for uWSGI to just die so that the program managing it (Supervisor, in my case) can restart it. Is there a setting or something I can use to force this? More info about my setup: Python 2.7 app being run through uWSGI in a docker container. The docker container is managed by Supervisor, and if it dies, Supervisor will restart it, which is what I want to happen.
SPA webapp for plotting data with angular and python
40,100,193
0
1
250
0
python,angularjs,mongodb
I don't see a problem with your approach, except that because you have real-time data, I would encourage you to go with some kind of WebSockets approach, like Socket.io on Node and on the front end. The alternative approach, long polling, involves a lot of HTTP traffic back and forth between your server and client, which is a performance bottleneck. Angular is perfectly fine for this, as you will not need to manually update your model data on the front end, thanks to two-way data binding. There are many charting frameworks and libraries, like D3.js and Highcharts, that can be plugged into your front end to chart your data; use them according to your liking.
0
0
0
0
2016-10-18T05:18:00.000
1
1.2
true
40,100,083
0
0
1
1
I want to write an app for plotting various data (cpu, ram, disk etc.) from Linux machines. On the client side: data will be collected via a python script and saved to a database (on a remote server), e.g. each second create an entry in a mongodb collection with a session identifier and the values of cpu used, ram, iops etc. This data will be written in sessions of a few hours (so ~25K-50K entries per session). On the server side: the data will be processed with the 'session' identified, plotted, and saved as a cpu graph png, ram graph png etc. Identification data will also be written to a separate mongodb collection and used to gather and present this data on a webpage. The page will make it possible to start the client on the remote machine. Is this approach optimal? Is there a better but simple way to store the data? Can I make the page construct and display the session dynamically, to be used for example to zoom? Will mongo be able to store/save hundreds of millions of entries like this? I was thinking of using angular + nodejs or angular + flask on the server, plus mongodb. I don't know flask or node; which will be easier to use for creating a simple REST API? My skill levels: python advanced, javascript/html/css medium, angularjs 1 beginner.
Robotframework Selenium2Library header overlay on element to be clicked during page scroll
61,486,889
0
1
1,072
0
python-2.7,robotframework,selenium2library
If you know the element is clickable and just want to click anyway, try using Click Element At Coordinates with a 0,0 offset. It'll ignore the fact that it's obscured and will just click.
0
0
1
0
2016-10-18T05:53:00.000
3
0
false
40,100,528
0
0
1
1
I'm using the Robot Framework Selenium2Library with a Python base and the Firefox browser for automating our web application, and I have the issue below whenever a click event is about to occur. The header in the web application is immovable during page scroll (i.e., whenever the page scrolls, the header is always visible to the user; only the contents get scrolled). Now the issue: when an element about to be clicked is not in the page view, the click event tries to scroll the page to bring the element to the top of the page, which is exactly below the header (overlap), so the click event never occurs and I get the exception below. WebDriverException: Message: Element is not clickable at point (1362.63330078125, 15.5). Other element would receive the click: https://url/url/chat/chat.asp','popup','height=600, width=680, scrollbars=no, resizable=yes, directories=no, menubar=no, status=no, toolbar=no'));"> I have tried the Wait Until Page is Visible keyword, but this doesn't help, as in the next statement the click event (Click Element, Click Link, etc.) again scrolls up to the header. The header being visible at all times is a feature of our web application, and the scripts are failing due to this. Can someone please help me overcome this issue and make the click event execute successfully?
u'囧'.encode('gb2312') throws UnicodeEncodeError
40,100,834
3
1
211
0
python,unicode,encode,gb2312
囧 is not in gb2312; use gb18030 instead. I guess Firefox may extend the encoding when it encounters unknown characters.
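A quick demonstration of the difference (Python 2.7 syntax, matching the question):

```python
# -*- coding: utf-8 -*-
ch = u'囧'
print repr(ch.encode('gb18030'))    # succeeds: gb18030 covers all of Unicode
try:
    ch.encode('gb2312')
except UnicodeEncodeError as err:
    print 'gb2312 failed:', err     # the character is outside the gb2312 table
```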
0
0
1
0
2016-10-18T05:58:00.000
2
0.291313
false
40,100,596
0
0
1
1
Firefox can display '囧' in gb2312-encoded HTML, but u'囧'.encode('gb2312') throws UnicodeEncodeError. 1. Is there a map, so Firefox can look up gb2312-encoded characters in that map, find the display matrix, and display 囧? 2. Is there a map for translating Unicode to gb2312 in which u'囧' is simply missing?
Model development choice
40,101,170
1
1
23
0
python,django-models
The questions you should ask are the following: Can A be linked to at most 1 or to many (more than 1) B? Can B be linked to at most 1 or to many A? If A can be linked to many B and B can be linked to many A, you need a many-to-many link. If A can be linked to at most 1 B and B can be linked to many A, you need a one-to-many link, where the link column is in table A. If A can be linked to at most 1 B and B can be linked to at most 1 A, you need a one-to-one link; at this point you should consider whether it is viable to join them into one single table, though this may not be possible or desirable for other reasons. In your case, ask yourself: can a PossessableObject be linked to (in other words, owned by) at most 1 other PossessableObject, or by many other PossessableObjects? If the answer is at most 1, use a one-to-many link; if the answer is many, use a many-to-many link. Also, with regard to your question about a PossesableObject_Table for each possible type of object: I think it is best to put the things they have in common in a single table and then specify types. Then create a separate table for each type of object with the unique properties of that type and connect those. But your way will work as well; it depends on how many different types you have and what you find easiest to work with. Remember: as long as it works, it is fine.
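Expressed as a Django model sketch (names and field choices are illustrative, not a prescription):

```python
from django.db import models

class Owner(models.Model):
    name = models.CharField(max_length=100)

class PossessableObject(models.Model):
    title = models.CharField(max_length=100)
    # one-to-many: an owner has many objects
    owner = models.ForeignKey(Owner, on_delete=models.CASCADE,
                              related_name="possessions")
    # many-to-many self link: e.g. a building tied to the land piece it is on
    linked_objects = models.ManyToManyField("self", blank=True)
```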
0
0
0
0
2016-10-18T06:29:00.000
1
0.197375
false
40,101,049
0
0
1
1
Let's say that I want to develop a game (RTS-like, economy-oriented) in which the player, as well as the AI, can possess almost every in-game object. For example: the player possesses a piece of land and some buildings on it; other players or the AI can also have some buildings, or other things, on this piece of land; also, someone can possess an entire region of such land pieces and sell some of it to others. Possessable objects can be movable or immovable, but all of them have common attributes, such as owner, title, world coords and so on. What DB structure, with respect to Django models, would be most suitable for this description? Owner_Table - (one-to-many) - PossesableObject_Table; PossesableObject_Table - (many-to-many) - PossesableObject_Table (for example, a building linked to the land piece where it stands). Or: Owner_Table - (one-to-many) - PossesableObjectType_Table (a table for each type of possible object); PossesableObjectType_Table - (one-to-many) - PossesableObjectType_Table (for the same kind of linking as already explained above).
Display Sum of overdue payments in Customer Form view for each customer
40,124,695
1
2
270
0
python,openerp,odoo-9
Your smart button on partners should use a new action, like the button for customer or vendor bills. This button definition should include context="{'default_partner_id': active_id}", which will allow changing the partner filter later on, or the upcoming action definition should include the partner in its domain. The action should be for the model account.invoice and has to have the following domain: [('date_due', '<', time.strftime('%Y-%m-%d')), ('state', '=', 'open')]. If you want to filter only outgoing (customer) invoices, add a filter tuple for the field type.
0
0
0
0
2016-10-18T13:02:00.000
1
1.2
true
40,109,065
0
0
1
1
In Accounting -> Customer Invoices, there is a filter called Overdue. I want to calculate the overdue payments per customer and then display them on the customer form view. I just want to know how we can apply the filter's condition in Python code. I have already defined a smart button (displaying the total invoice value) by inheriting account.invoice. The "Overdue" filter in the invoice search view is: ['&', ('date_due', '<', time.strftime('%Y-%m-%d')), ('state', '=', 'open')]
Multiple instances of celerybeat for autoscaled django app on elasticbeanstalk
40,166,437
-3
13
1,251
0
python,django,celery,amazon-elastic-beanstalk,celerybeat
In case someone experiences similar issues: I ended up switching to a different queue/task framework for Django. It is called django-q and was set up and working in less than an hour. It has all the features that I needed and also better Django integration than Celery (since djcelery is no longer active). django-q is super easy to use and also lighter than the huge Celery framework. I can only recommend it!
0
1
0
0
2016-10-19T00:48:00.000
2
1.2
true
40,120,312
0
0
1
2
I am trying to figure out the best way to structure a Django app that uses Celery to handle async and scheduled tasks in an autoscaling AWS Elastic Beanstalk environment. So far I have used only a single-instance Elastic Beanstalk environment with Celery + Celerybeat and this worked perfectly fine. However, I want to have multiple instances running in my environment, because every now and then an instance crashes and it takes a lot of time until the instance is back up, but I can't scale my current architecture to more than one instance because Celerybeat is supposed to be running only once across all instances, as otherwise every task scheduled by Celerybeat would be submitted multiple times (once for every EC2 instance in the environment). I have read about multiple solutions, but all of them seem to have issues that make them unworkable for me: Using django cache + locking: this approach is more like a quick fix than a real solution. It can't be the solution if you have a lot of scheduled tasks and you need to add code to check the cache for every task. Also, tasks are still submitted multiple times; this approach only makes sure that execution of the duplicates stops. Using the leader_only option with ebextensions: works fine initially, but if an EC2 instance in the environment crashes or is replaced, this would lead to a situation where no Celerybeat is running at all, because the leader is only defined once at the creation of the environment. Creating a new Django app just for async tasks in the Elastic Beanstalk worker tier: nice, because web servers and workers can be scaled independently and the web server performance is not affected by huge async workloads performed by the workers. However, this approach does not work with Celery because the worker tier SQS daemon removes messages and posts the message bodies to a predefined URL. Additionally, I don't like the idea of having a complete additional Django app that needs to import the models from the main app and needs to be separately updated and deployed whenever the tasks in the main app are modified. How do I use Celery with scheduled tasks in a distributed Elastic Beanstalk environment without task duplication? E.g. how can I make sure that exactly one Celerybeat is running across all instances all the time in the Elastic Beanstalk environment (even if the instance currently running Celerybeat crashes)? Are there any other ways to achieve this? What's the best way to use Elastic Beanstalk's worker tier environment with Django?
Multiple instances of celerybeat for autoscaled django app on elasticbeanstalk
54,745,929
1
13
1,251
0
python,django,celery,amazon-elastic-beanstalk,celerybeat
I guess you could separate celery beat into a different group. Your auto scaling group runs multiple django instances, but celery is not included in the EC2 config of the scaling group; you should have a different set of instances (or just one) for celery beat.
0
1
0
0
2016-10-19T00:48:00.000
2
0.099668
false
40,120,312
0
0
1
2
I am trying to figure out the best way to structure a Django app that uses Celery to handle async and scheduled tasks in an autoscaling AWS Elastic Beanstalk environment. So far I have used only a single-instance Elastic Beanstalk environment with Celery + Celerybeat and this worked perfectly fine. However, I want to have multiple instances running in my environment, because every now and then an instance crashes and it takes a lot of time until the instance is back up, but I can't scale my current architecture to more than one instance because Celerybeat is supposed to be running only once across all instances, as otherwise every task scheduled by Celerybeat would be submitted multiple times (once for every EC2 instance in the environment). I have read about multiple solutions, but all of them seem to have issues that make them unworkable for me: Using django cache + locking: this approach is more like a quick fix than a real solution. It can't be the solution if you have a lot of scheduled tasks and you need to add code to check the cache for every task. Also, tasks are still submitted multiple times; this approach only makes sure that execution of the duplicates stops. Using the leader_only option with ebextensions: works fine initially, but if an EC2 instance in the environment crashes or is replaced, this would lead to a situation where no Celerybeat is running at all, because the leader is only defined once at the creation of the environment. Creating a new Django app just for async tasks in the Elastic Beanstalk worker tier: nice, because web servers and workers can be scaled independently and the web server performance is not affected by huge async workloads performed by the workers. However, this approach does not work with Celery because the worker tier SQS daemon removes messages and posts the message bodies to a predefined URL. Additionally, I don't like the idea of having a complete additional Django app that needs to import the models from the main app and needs to be separately updated and deployed whenever the tasks in the main app are modified. How do I use Celery with scheduled tasks in a distributed Elastic Beanstalk environment without task duplication? E.g. how can I make sure that exactly one Celerybeat is running across all instances all the time in the Elastic Beanstalk environment (even if the instance currently running Celerybeat crashes)? Are there any other ways to achieve this? What's the best way to use Elastic Beanstalk's worker tier environment with Django?
Serve uploaded files from NGINX server instead of gunicorn/Django
40,125,586
1
1
167
0
python,django,nginx,gunicorn,django-media
You need to implement a solution for sharing files from one server to another. NFS is the standard in Unixes like Linux. An alternative is to use live mirroring, i.e. create a copy of the media files directory in the nginx server and keep it synchronized. There are probably many options for setting this up; I've successfully used lsyncd.
0
0
0
0
2016-10-19T07:12:00.000
1
1.2
true
40,124,568
0
0
1
1
I have separate servers, one running NGINX and the other running gunicorn/Django. I managed to serve static files from NGINX directly, as recommended by the Django documentation, but I have an issue with files uploaded by users: they get uploaded to the server that has gunicorn, not the server that has NGINX, so users can't find and browse their files. How can I upload files from Django to another server? Or how can I transfer files to the other server after uploading to NGINX? Note: I don't have the CDN option; I'll serve my static files from my own servers.
How to expose user passwords in the most "secure" way in django?
40,136,359
3
1
288
0
python,django,passwords,password-encryption
No, there is no logical way of doing this that doesn't imply a huge security breach in the software. If the passwords are stored correctly (salted and hashed), then even site admins with unrestricted access on the database can not tell you what the passwords are in plain text. You should push back against this unreasonable request. If you have a working "password reset" functionality, then nobody but the user ever needs to know a user's password. If you don't have a reliable "password reset" feature, then try and steer the conversation and development effort in this direction. There is rarely any real business need for knowing/printing user passwords, and these kind of feature requests may be coming from non-technical people who have misunderstandings (or no understanding) about the implementation detail of authentication and authorization.
0
0
0
0
2016-10-19T15:53:00.000
1
1.2
true
40,136,285
0
0
1
1
I am working on Django 1.9 project and I have been asked to enable some users to print a page with a list of a set of users and their passwords. Of course passwords are encrypted and there is no out-of-the-box ways of doing this. I know this would imply a security breach so my question is kind of contradictory, but is there any logical way of doing this that doesn't imply a huge security breach in the software?
selenium run chrome on raspberry pi
40,141,261
5
2
729
0
python,selenium,raspberry-pi
I have concluded, after hours and a whole night of debugging, that you can't install it, because there is no chromedriver compatible with a Raspberry Pi processor - even if you download the Linux 32-bit build. You can confirm it by running this line in a terminal window: path/to/chromedriver. It will give you this error: cannot execute binary file: Exec format error. Hope this helps anyone that wanted to do this :)
0
0
1
0
2016-10-19T20:48:00.000
1
0.761594
false
40,141,260
0
0
1
1
If you're seeing this, I guess you are looking to run Chromium on a Raspberry Pi with Selenium, like this: Driver = webdriver.Chrome("path/to/chromedriver") or like this: webdriver.Chrome()
Celery message queue vs AWS Lambda task processing
48,643,501
24
12
7,335
0
python-2.7,amazon-web-services,nlp,celery,aws-lambda
I would like to share a personal experience. I moved my heavy-lifting tasks to AWS Lambda and I must admit that the ROI has been pretty good. For instance, one of my tasks was to generate monthly statements for the customers and then mail them to the customers as well. The data for each statement was fed into a Jinja template, which gave me an HTML version of the statement. Using WeasyPrint, I converted the HTML to a PDF file, and mailing those PDF statements was the last step. I researched various options for creating PDF files directly, but they didn't look feasible to me. That said, when the scale was low, i.e. when the number of customers was small, celery was wonderful, though I observed that CPU usage went high during this task. I would add the task for each of the customers to the celery queue, from which the celery workers would pick up the tasks and execute them. But when the scale went high, celery didn't turn out to be a robust option; CPU usage was pretty high (I don't blame celery for it, but that is what I observed). Celery is still good, but do understand that with celery you can face scaling issues, and vertical scaling may not help you. You need to scale horizontally as your backend grows to get good performance from celery: when there are a lot of tasks waiting in the queue and the number of workers is limited, naturally a lot of tasks have to wait. So in my case, I moved this CPU-intensive task to AWS Lambda: I deployed a function that would generate the statement PDF from the customer's statement data and mail it afterward. Immediately, AWS Lambda solved our scaling issues. Secondly, since this was a periodic task, not a daily one, we didn't need to run celery every day; the Lambda would launch whenever needed, but wouldn't run when not in use. Besides, this function was in NodeJS, since the npm package I found turned out to be more efficient than the solution I had in Python. So Lambda is also advantageous because you can take advantage of various programming languages while your core stays unchanged. Also, I personally think that Lambda is quite cheap, since the free tier offers a lot of compute time per month (GB-seconds). Furthermore, the underlying servers on which your Lambdas run are kept updated with the latest security patches as and when they become available. As you can see, my maintenance cost dropped drastically. AWS Lambdas scale as needed. Plus, they serve good use cases like real-time stream processing, heavy data-processing tasks, or very CPU-intensive jobs.
0
1
0
1
2016-10-21T09:50:00.000
1
1.2
true
40,173,481
0
0
1
1
Currently I'm developing a system to analyse and visualise textual data based on NLP. The backend (Python+Flask+AWS EC2) handles the analysis and uses an API to feed the result back to a frontend (Flask+D3+Heroku) app that solely handles interactive visualisations. Right now the analysis in the prototype is a basic Python function, which means that on large files the analysis takes longer, resulting in a request timeout while bridging the API data to the frontend. Moreover, the analysis of many files is done in a linear, blocking queue. So to scale this prototype, I need to modify the Analysis(text) function to be a background task, so that it does not block further execution and can do a callback once it is done. The input text is fetched from AWS S3 and the output is a relatively large JSON document intended to be stored in AWS S3 as well, so the API bridge simply fetches this JSON, which contains data for all the graphs in the frontend app. (I find S3 slightly easier to handle than creating a large relational database structure to store persistent data.) I'm doing simple examples with Celery and find it a fitting solution; however, I just did some reading on AWS Lambda, which on paper seems like a better solution in terms of scaling... The Analysis(text) function uses a pre-built model and functions from relatively common NLP Python packages. Given my lack of experience in scaling a prototype, I'd like to ask for your experiences and judgement on which solution would be most fitting for this scenario. Thank you :)
Create Campaigns, set bids and buy adds from DoubleClick Bid Manager API
40,370,299
1
3
1,015
0
google-api-python-client,double-click-advertising
I found a way to solve this problem. The current API (v1) has these capabilities, but the documentation is not very clear about it. You need to download your Line Items file as CSV (or any other supported format), then edit the downloaded file with any script you want; specifically, you must edit the Status column to perform this operation. Also, if you want to create a new campaign, you will need to do the same with new Line Items. After editing the CSV (or creating one), you must upload it back to Google via the relevant endpoint: uploadlineitems. Google will report to the owner of the Bid Manager account which changes were accepted from the file you sent. I have confirmed that this is the same behaviour Google uses for other products where they consume their own API: 1) download or create the Line Items file as CSV or another supported format; 2) edit the Line Items; 3) upload the Line Items. So basically you only need a script that edits CSV files and another to authenticate with the API.
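The "edit the downloaded file with a script" step could be as simple as this sketch using the csv module (the file names, the Status column, and the edit rule are assumptions):

```python
import csv

with open("lineitems.csv") as src, open("lineitems_edited.csv", "w") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row.get("Status") == "Paused":   # hypothetical edit rule
            row["Status"] = "Active"
        writer.writerow(row)
```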
0
0
1
0
2016-10-21T15:38:00.000
2
1.2
true
40,180,601
0
0
1
1
Is it possible with the Google DoubleClick Bid Manager API to create campaigns, set bids and buy ads? I have checked the documentation and it seems that there are limited endpoints. These are all the available endpoints according to the documentation: doubleclickbidmanager.lineitems.downloadlineitems Retrieves line items in CSV format. doubleclickbidmanager.lineitems.uploadlineitems Uploads line items in CSV format. doubleclickbidmanager.queries.createquery Creates a query. doubleclickbidmanager.queries.deletequery Deletes a stored query as well as the associated stored reports. doubleclickbidmanager.queries.getquery Retrieves a stored query. doubleclickbidmanager.queries.listqueries Retrieves stored queries. doubleclickbidmanager.queries.runquery Runs a stored query to generate a report. doubleclickbidmanager.reports.listreports Retrieves stored reports. doubleclickbidmanager.sdf.download Retrieves entities in SDF format. None of these endpoints can perform tasks such as buying ads, setting bids or creating campaigns, so I think those tasks can only be done through the UI and not with the API. Thanks in advance for your help.
How can I make Django write a list of static files to the database when using collectstatic
40,185,281
0
0
228
1
python,django,amazon-web-services,amazon-s3,django-staticfiles
A clean solution would be to read the source for collectstatic and write your own management command that does the same thing but writes the file list into the database. A quick and dirty way would be to pipe the output of collectstatic into a script of some sort that reformats it as SQL and pipes it through a database client.
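To make the cleaner option concrete, a hedged sketch of such a management command; StaticFileRecord is a hypothetical model, and a real version would also hash file contents so unchanged files can be skipped:

from django.core.management.base import BaseCommand
from django.contrib.staticfiles import finders
from myapp.models import StaticFileRecord  # hypothetical model with a 'path' field

class Command(BaseCommand):
    help = "Record every discoverable static file path in the database"

    def handle(self, *args, **options):
        for finder in finders.get_finders():
            # Each staticfiles finder yields (relative_path, storage)
            # pairs for the static files it knows about
            for path, storage in finder.list([]):
                StaticFileRecord.objects.get_or_create(path=path)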
0
0
0
0
2016-10-21T20:13:00.000
1
0
false
40,184,760
0
0
1
1
I am storing all the static files in an AWS S3 bucket, and I am using Docker containers to run my application. Whenever I want to deploy changes, I create a new container from a new image. I run ./manage.py collectstatic on every deployment because I sometimes add libraries with static files to the project, and it takes forever to re-upload everything to S3 on each deployment. Is there a way I can keep a list of the static files already uploaded to S3 in my database, so that collectstatic only uploads the newly added files?
How do I exit dbshell (SQLite 3) on the command line when using Django?
40,205,233
1
3
5,764
1
python,django,sqlite
Just typing quit does the trick.
0
0
0
0
2016-10-23T16:36:00.000
5
0.039979
false
40,205,197
0
0
1
3
How do I exit dbshell (SQLite 3) on the command line when using Django? It's my first time using the command; I'm following a book and practicing Django at the same time. After running this command, I have no idea how to leave the environment, since I have never learned SQL before.
How do I exit dbshell (SQLite 3) on the command line when using Django?
47,724,071
1
3
5,764
1
python,django,sqlite
You can just hit the key combination Ctrl + C.
0
0
0
0
2016-10-23T16:36:00.000
5
0.039979
false
40,205,197
0
0
1
3
How do I exit dbshell (SQLite 3) on the command line when using Django? It's my first time using the command; I'm following a book and practicing Django at the same time. After running this command, I have no idea how to leave the environment, since I have never learned SQL before.
How do I exit dbshell (SQLite 3) on the command line when using Django?
51,351,884
3
3
5,764
1
python,django,sqlite
You can type .exit in the shell to exit. For more information about commands, type .help. (In my case it raised an error and exited anyway... which was helpful all the same. :)
0
0
0
0
2016-10-23T16:36:00.000
5
0.119427
false
40,205,197
0
0
1
3
How do I exit dbshell (SQLite 3) on the command line when using Django? It's my first time using the command; I'm following a book and practicing Django at the same time. After running this command, I have no idea how to leave the environment, since I have never learned SQL before.
Installing flask dependencies takes long on aws
40,207,943
0
0
43
0
python,amazon-web-services,deployment,autoscaling
After everything has been set up on your instance, bake an AMI from the fully configured instance and use that AMI ID in the autoscaling launch configuration. That way, any instance spun up by the autoscaling group will come up with all the required software already installed.
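For illustration, a sketch of baking the AMI programmatically with boto3; the instance ID, image name, and description are placeholders:

import boto3

ec2 = boto3.client("ec2")
resp = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # the fully provisioned instance
    Name="flask-app-baked-ami",
    Description="Flask app with requirements.txt dependencies pre-installed",
)
# Use this AMI ID in the autoscaling group's launch configuration
print(resp["ImageId"])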
0
0
0
0
2016-10-23T20:05:00.000
1
0
false
40,207,279
0
0
1
1
I currently have autoscaling set up so that once my existing instances reach high usage, new nodes are created and my Flask application is deployed and run on them. The issue I am having is that deployment takes a while (around 7 minutes) because I have many dependencies in my requirements.txt, and it takes a while to stand up a node and install all of them. How can I speed this process up?
How can I let Django load models from subpackages of apps?
40,227,954
0
2
370
0
python,django,django-models
There are two options: along with app, also add app.cog to INSTALLED_APPS; or include app/cog/models.py in app/models.py (i.e. from .cog.models import * or from .cog.models import model1, model2).
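For illustration, the second option could look like this in app/models.py (the model names are placeholders):

# app/models.py -- re-export the subpackage's models so Django discovers them
from .cog.models import Model1, Model2  # placeholder names
# or simply: from .cog.models import *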
0
0
0
0
2016-10-24T21:23:00.000
1
0
false
40,227,797
0
0
1
1
I essentially have the following issue: say the model classes I define live in /app/cog/models.py, but Django only checks for them in /app/models.py. Is there any way to let Django dynamically read all the model classes in all models.py files in all subpackages of app? It might be noteworthy that I really want to follow Django's philosophy concerning apps, which includes "all apps are independent from each other". So I don't want to give those subpackages their own apps; otherwise people who use my app could end up with 50 apps after some time (as these subpackages simply extend the functionality of the app, and there will probably be a lot of them).
Can't import flask because werkzeug
52,548,666
0
5
11,418
0
python,flask,import,werkzeug
I faced the same issue; I got this error when working in a Python virtual environment. I had to deactivate the virtual environment, then switch to the root user and install Werkzeug using pip. After that it worked in the virtual environment.
0
0
0
0
2016-10-25T04:24:00.000
4
0
false
40,231,354
0
0
1
2
When I use from flask import *, I get the error ImportError: No module named werkzeug.exceptions However, when I do pip freeze, I can see that Werkzeug==0.11.11 is indeed installed. How can I fix this?
Can't import flask because werkzeug
44,644,053
1
5
11,418
0
python,flask,import,werkzeug
I am assuming that the wrong version of Werkzeug was installed in the first place. This usually happens when you have two versions of Python installed and you use 'pip' to install dependencies rather than 'pip3'. Hope this helped!
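A quick way to check whether this is the case: the snippet below prints which interpreter is actually running and where (or whether) Werkzeug can be imported from.

import sys

print(sys.executable)  # the Python binary actually in use
try:
    import werkzeug
    print(werkzeug.__version__, werkzeug.__file__)
except ImportError as exc:
    print("werkzeug not importable here:", exc)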
0
0
0
0
2016-10-25T04:24:00.000
4
0.049958
false
40,231,354
0
0
1
2
When I use from flask import *, I get the error ImportError: No module named werkzeug.exceptions However, when I do pip freeze, I can see that Werkzeug==0.11.11 is indeed installed. How can I fix this?
Changes made to the Python code not reflected on the server in Fedora in VirtualBox (not a duplicate)
40,239,274
1
1
84
0
python,virtual-machine,fedora
You failed to provide enough context (like what exactly "your Python server" is), but you mention a browser cache, so I assume it's a web server process. The point is: Python modules are imported only once per process, and once imported, changes to the source files are totally irrelevant. So if you have a long-running process, it is expected that you restart the process every time you deploy a new version of your modules.
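A small demonstration of this caching behaviour, using a standard-library module so the snippet runs as-is:

import importlib
import sys

import json                     # any module: cached after the first import
print("json" in sys.modules)    # True: later imports reuse the cached object
importlib.reload(json)          # explicitly re-executes the module's source
# Without a reload (or a process restart), edited source files are never seen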
0
0
0
0
2016-10-25T11:31:00.000
1
0.197375
false
40,238,834
0
0
1
1
I have a VM in Oracle VirtualBox running Fedora 24. I have my Python server running (Django); there is no web server like Apache. However, when I make changes to the code, the files are saved, but the changes are not reflected on the server. I have to kill -15 the Python process ID or restart my VM many times to see the changes. Any idea why this is happening? I have also tried clearing the browser caches.