Question (string, 25 to 7.47k chars) | Q_Score (int64, 0 to 1.24k) | Users Score (int64, -10 to 494) | Score (float64, -1 to 1.2) | Data Science and Machine Learning (int64, 0 to 1) | is_accepted (bool, 2 classes) | A_Id (int64, 39.3k to 72.5M) | Web Development (int64, 0 to 1) | ViewCount (int64, 15 to 1.37M) | Available Count (int64, 1 to 9) | System Administration and DevOps (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | Q_Id (int64, 39.1k to 48M) | Answer (string, 16 to 5.07k chars) | Database and SQL (int64, 1 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | Title (string, 15 to 148 chars) | AnswerCount (int64, 1 to 32) | Tags (string, 6 to 90 chars) | Other (int64, 0 to 1) | CreationDate (string, 23 chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I run a number of queries for ad hoc analysis against a Postgres database. Many times I will leave the connection open through the day instead of ending it after each query.
I receive a postgres dump over scp through a shell script every five minutes and I would like to restore the database without cutting the connections. Is this possible? | 0 | 1 | 1.2 | 0 | true | 39,286,433 | 0 | 271 | 1 | 0 | 0 | 39,282,825 | One of the few activities that you cannot perform while a user is connected is dropping the database.
So – if that is what you are doing during restore – you'll have to change your approach. Don't drop the database (don't use the -C option in pg_dump or pg_restore), but rather drop and recreate the schemas and objects that don't depend on a schema (like large objects).
You can use the -c flag of pg_dump or pg_restore for that.
The other problem you might run into is connections with open transactions (state “idle in transaction”). Such connections can hold locks that keep you from dropping and recreating objects, and you'll have to use pg_terminate_backend() to get rid of them. | 1 | 0 | 0 | Restore postrgres without ending connections | 1 | python,database,postgresql,restore | 0 | 2016-09-02T00:49:00.000 |
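A minimal sketch of the approach described in the answer above, assuming a custom-format dump (pg_dump -Fc) and psycopg2; the database name and dump filename are placeholders, not taken from the question:

# Sketch: restore into an existing database without dropping it.
import subprocess
import psycopg2

DB = "mydb"            # placeholder database name
DUMP = "dump.custom"   # placeholder dump file received over scp

conn = psycopg2.connect(dbname=DB)
conn.autocommit = True
with conn.cursor() as cur:
    # Terminate sessions stuck "idle in transaction" that would block dropping/recreating objects.
    cur.execute(
        "SELECT pg_terminate_backend(pid) FROM pg_stat_activity "
        "WHERE datname = %s AND state = 'idle in transaction' "
        "AND pid <> pg_backend_pid();",
        (DB,),
    )
conn.close()

# --clean/--if-exists drop and recreate objects inside the database,
# without the -C/--create step that would drop the database itself.
subprocess.run(
    ["pg_restore", "--clean", "--if-exists", "--no-owner", "-d", DB, DUMP],
    check=True,
)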
I am trying to debug a Pootle (pootle is build on django) installation which fails with a django transaction error whenever I try to add a template to an existing language. Using the python debugger I can see that it fails when pootle tries to save a model as well as all the queries that have been made in that session.
What I can't see is what specifically causes the save to fail. I figure Pootle/Django must have added some database constraint; how do I figure out which one? MySQL (the database being used) apparently can't log just failed transactions. | 0 | 1 | 1.2 | 0 | true | 39,447,127 | 1 | 223 | 1 | 0 | 0 | 39,387,983 | Install django-debug-toolbar; with it you can easily check all of the queries that have been executed. | 1 | 0 | 0 | How do I get Django to log why an sql transaction failed? | 1 | python,mysql,django,pootle | 0 | 2016-09-08T10:01:00.000 |
I have a web application that accesses large amounts of JSON data.
I want to use a key value database for storing JSON data owned/shared by different users of the web application (not users of the database). Each user should only be able to access the records they own or share.
In a relational database, I would add a column Owner to the record table, or manage shared ownerships in a separate table, and check access on the application side (Python). For key value stores, two approaches come to mind.
User ID as part of the key
What if I use keys like USERID_RECORDID and then write code to check the USERID before accessing the record? Is that a good idea? It wouldn't work with records that are shared between users.
User ID as part of the value
I could store one or more USERIDs in the value data and check if the data contains the ID of the user trying to access the record. Performance is probably slower than having the user ID as part of the key, but shared ownerships are possible.
What are typical patterns to do what I am trying to do? | 8 | 2 | 1.2 | 0 | true | 39,518,000 | 1 | 171 | 1 | 0 | 0 | 39,423,756 | Both of the solutions you described have some limitations.
You point out yourself that including the owner ID in the key does not solve the problem of shared data. However, this solution may be acceptable if you add another key/value pair containing the IDs of the contents shared with this user (key: userId:shared, value: [id1, id2, id3...]).
Your second proposal, in which you include the list of users who were granted access to a given content, is OK if and only if your application needs to query for the list of users who have access to a particular content. If your need is to list all contents a given user can access, this design will lead to poor performance, as the K/V store will have to scan all records, and this type of database engine usually doesn't allow you to create an index to optimise this kind of request.
From a more general point of view, with NoSQL databases and especially key/value stores, the model has to be defined according to the requests the application will make. It may lead you to duplicate some information. The application has the responsibility of maintaining the consistency of the data.
For example, if you need to get all contents for a given user, whether this user is the owner of the content or the contents were shared with him, I suggest you create a key for the user containing the list of content IDs for that user, as I already said. But if your app also needs to get the list of users allowed to access a given content, you should add their IDs in a field of this content. This would result in something like:
key: contentID, value: { ..., [userId1, userID2...]}
When you remove access to a given content for a user, your app (and not the datastore) has to remove the userId from the content value, and the contentId from the list of contents for this user.
This design may imply for your app to make multiple requests: by example one to get the list of userIDs allowed to access a given content, and one or more to get these user profiles. However, this should not really be a problem as K/V stores usually have very high performances. | 1 | 0 | 0 | How do you control user access to records in a key-value database? | 1 | python,database,authorization,key-value,nosql | 0 | 2016-09-10T07:31:00.000 |
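To make the modelling described above concrete, here is a small hedged sketch; a plain Python dict stands in for the key/value store, and key shapes like user:<id>:contents are illustrative, not part of any particular product:

# Sketch of the duplicated-index pattern described above.
kv = {}

def grant(user_id, content_id):
    # forward index: all contents a user may access
    kv.setdefault("user:%s:contents" % user_id, set()).add(content_id)
    # reverse index: all users allowed on a content
    kv.setdefault("content:%s:users" % content_id, set()).add(user_id)

def revoke(user_id, content_id):
    # the application keeps both sides consistent, since the K/V store will not
    kv.get("user:%s:contents" % user_id, set()).discard(content_id)
    kv.get("content:%s:users" % content_id, set()).discard(user_id)

def can_access(user_id, content_id):
    return content_id in kv.get("user:%s:contents" % user_id, set())

grant("u1", "c42")
assert can_access("u1", "c42") and not can_access("u2", "c42")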
I used txt files to store data in it and read it any time i need and search in it and append and delete from it
so
why should i use database i can still using txt files ? | 1 | 0 | 1.2 | 0 | true | 39,438,159 | 0 | 55 | 1 | 0 | 0 | 39,437,667 | In fact, you have used files instead of a database. To answer the question, let us check the advantages of using a database:
it is faster: a service is awaiting commands and your app sends some commands to it. Database Management Systems have a lot of cool stuff implemented which you will be lacking if you use a single file. True, you can create a service which loads the file into memory and serves commands, but while that seems to be easy, it will be inferior to RDBMS's, since your implementation is highly unlikely to be even close to a match of the optimizations done for RDBMS's over decades, unless you implement an RDBMS, but then you end up with an RDBMS, after all
it is safer: RDBMS's encrypt data and have user-password authentication along with port handling
it is smaller: data is stored in a compressed manner, so if you end up with a lot of data, data size will get critical much later
it is developed: you will always have possibilities to upgrade your system with juices implemented recently and to keep up the pace with science's current development
you can use ORM's and other stuff built to ease the pain of data handling
it supports concurrent access: imagine the case when many people are reaching your database at the same time. Instead of you implementing very complicated stuff, you can get this feature instantly
All in all, you will either use a database management system (not necessarily relational), implement your own or work with textual files. Your textual file will quickly be overwhelmed if your application is successful and you will need a database management system. If you write your own, you might have a success story to tell, but it will come only after many years of hard work. So, if you get successful, then you will need database management system. If you do not get successful, you can use textual files, but the question is: is it worth it?
And finally, your textual file is a database, but you are managing it by your custom and probably very primitive (no offence, but it is virtually impossible to achieve results when you are racing against the whole world) database management system compared to the ones out there. So, yes, you should learn to use advanced database management systems and should refactor your project to use one. | 1 | 0 | 1 | using Python and SQL | 1 | python,sql | 0 | 2016-09-11T15:26:00.000 |
I'm trying to run a server in python/django and I'm getting the following error:
django.db.utils.OperationalError: (2002, "Can't connect to local MySQL
server through socket '/tmp/mysql.sock' (2)").
I have MySQL-python installed (1.2.5 version) and mysql installed (0.0.1), both via pip, so I'm not sure why I can't connect to the MySQL server. Does anyone know why? Thanks! | 0 | 1 | 0.197375 | 0 | false | 39,475,119 | 1 | 295 | 1 | 0 | 0 | 39,474,896 | You can't install mysql through pip; it's a database, not a Python library (and it's currently in version 5.7). You need to install the binary package for your operating system. | 1 | 0 | 0 | OperationalError: Can't connect to local MySQL server through socket | 1 | python,mysql,django | 0 | 2016-09-13T16:29:00.000 |
I wrote a python application that uses cx_Oracle and then generates a pyinstaller bundle (folder/single executable). I should note it is on 64 bit linux. I have a custom spec file that includes the Oracle client libraries so everything that is needed is in the bundle.
When I run the bundled executable on a freshly installed CentOS 7.1 VM, (no Oracle software installed), the program connects to the database successfully and runs without error. However, when I install the bundled executable on another system that contains RHEL 7.2, and I try to run it, I get
Unable to acquire Oracle environment handle.
My understanding is this is due to an Oracle client installation that has some sort of conflict. I tried unsetting ORACLE_HOME on the machine giving me errors. It's almost as though the program is looking for the Oracle client libraries in a location other than in the location where I bundled the client files.
It seems like it should work on both machines or neither machine. I guess I'm not clear on how the Python application/cx_Oracle finds the Oracle client libraries. Again, it seems to have found them fine on a machine with a fresh operating system installation. Any ideas on why this is happening? | 0 | 1 | 1.2 | 0 | true | 39,503,038 | 0 | 306 | 1 | 0 | 0 | 39,482,504 | One thing that you may be running into is the fact that if you used the instant client RPMs when you built cx_Oracle an RPATH would have been burned into the shared library. You can examine its contents and change it using the chrpath command. You can use the special path $ORIGIN in the modified RPATH to specify a path relative to the shared library.
If an RPATH isn't the culprit, then you'll want to examine the output from the ldd command and see where it is looking and then adjust things to make it behave itself! | 1 | 0 | 0 | Why does pyinstaller generated cx_oracle application work on fresh CentOS machine but not on one with Oracle client installed? | 1 | python,pyinstaller,cx-oracle | 0 | 2016-09-14T04:30:00.000 |
I am working with RapidMiner at the moment and am trying to copy my RapidMiner results, which are in xlsx files, to txt files in order to do some further processing with Python. I have plain text in column A (A1-A1500) as well as the corresponding filename in column C (C1-C1500).
Now my question:
Is there any possibility (I am thinking of the xlrd module) to read the content of every cell in column A and print this to a new created txt file with the filename being given in corresponding column C?
As I have never worked with the xlrd module before I am a bit lost at the moment... | 1 | 0 | 0 | 0 | false | 39,519,713 | 0 | 5,449 | 1 | 0 | 0 | 39,512,166 | Good day! So, I'm not sure I understand your question correctly, but have you tried a combination of Read Excel operator with the Loop Examples operator? Your loop subprocess could then use Write CSV operator or similar. | 1 | 0 | 0 | Read Excel Cells and Copy content to txt file | 3 | python-3.x,xlrd,rapidminer | 1 | 2016-09-15T13:22:00.000 |
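For the xlrd route the question mentions (rather than the RapidMiner operators suggested in the answer above), a hedged sketch might look like this; the workbook name, sheet index and column positions are assumptions, and note that xlrd versions after 1.2 no longer read .xlsx files:

# Sketch: write the text in column A to files named after column C, using xlrd.
import xlrd

book = xlrd.open_workbook("results.xlsx")     # placeholder filename
sheet = book.sheet_by_index(0)                # assumes the data is on the first sheet

for row in range(sheet.nrows):
    text = sheet.cell_value(row, 0)           # column A: the plain text
    filename = str(sheet.cell_value(row, 2))  # column C: the target file name
    if not filename:
        continue
    with open(filename + ".txt", "w", encoding="utf-8") as f:
        f.write(str(text))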
I have a MongoDB that houses data from a web scrape that runs weekly via Scrapy. I'm going to setup a cron job to run the scrape job weekly. What I would like to do is also export a CSV out of MongoDB using mongoexport however I would like to inject the current date into the file name. I've tried a few different methods without much success. Any help would be greatly appreciated! For reference, my current export string is: mongoexport --host localhost --db glimpsedb --collection scrapedata --csv --out scrape-export.csv --fields dealerid,unitid,seller,street,city,state,zipcode,rvclass,year,make,model,condition,price
So, ideally the file name would be scrape-export-current date.csv
Thanks again! | 0 | 1 | 1.2 | 0 | true | 39,580,966 | 0 | 244 | 1 | 0 | 0 | 39,580,809 | Replace --out scrape-export.csv in your command with --out scrape-export-$(date +"%Y-%m-%d").csv
It'll create filenames in the format scrape-export-2016-09-05 | 1 | 0 | 1 | Export MongoDB to CSV Using File Name Variable | 1 | python,mongodb,scrapy,mongoexport | 0 | 2016-09-19T19:38:00.000 |
I installed Python 3.5 on my Mac, and the installation was automatic. But these days I found there was already Python 2 on my Mac, and every module I installed through pip went to /Library/Python/2.7/site-packages.
I find python3 installed location is /Library/Frameworks/Python.framework/Versions/3.5
Now I downloaded mysql-connector-python and installed it; the install location is python2.7/site-packages. When I open PyCharm, whose default interpreter is Python 3.5, I cannot use mysql-connector. Does anybody know how to solve this? | 0 | 0 | 0 | 0 | false | 39,585,904 | 0 | 851 | 1 | 0 | 0 | 39,585,238 | For the mysql-connector installation problem, I found the solution:
Try going to the python3 bin directory and finding the pip command there. This pip can be shadowed by the system python2 pip command, so if you want to install the MySQL-python module into the python3.x site-packages, you should cd to that bin directory and run ./pip install MySQL-python. It downloads the module successfully but installation fails with: ImportError: No module named 'ConfigParser'. I googled the error and found there is no such module in Python 3, and we can use its fork instead: mysqlclient.
NOTE: In order not to be conflict with system default python2 pip command, cd and go to python3 bin directory and ./pip install mysqlclient and succeed. | 1 | 0 | 1 | mac two version python conflict | 1 | python,macos | 0 | 2016-09-20T03:25:00.000 |
I have a column-family/table in cassandra-3.0.6 which has a column named "value" which is defined as a blob data type.
CQLSH query select * from table limit 2; returns me:
id | name | value
id_001 | john | 0x010000000000000000
id_002 | terry | 0x044097a80000000000
If I read this value using cqlengine(Datastax Python Driver), I get the output something like:
{'id':'id_001', 'name':'john', 'value': '\x01\x00\x00\x00\x00\x00\x00\x00\x00'}
{'id':'id_002', 'name':'terry', 'value': '\x04@\x97\xa8\x00\x00\x00\x00\x00'}
Ideally the values in the "value" field are 0 and 1514 for row1 and row2 resp.
However, I am not sure how I can convert the "value" field values extracted using cqlengine to 0 and 1514. I tried few methods like ord(), decode(), etc but nothing worked. :(
Questions:
What is this format?
'\x01\x00\x00\x00\x00\x00\x00\x00\x00' or
'\x04@\x97\xa8\x00\x00\x00\x00\x00'?
How I can convert these arbitrary values to 0 and 1514?
NOTE: I am using python 2.7.9 on Linux
Any help or pointers would be useful.
Thanks, | 0 | 0 | 0 | 0 | false | 39,628,303 | 0 | 1,608 | 1 | 1 | 0 | 39,611,995 | Blob will be converted to a byte array in Python if you read it directly. That looks like a byte array containing the Hex value of the blob.
One way is to explicitly do the conversion in your query.
select id, name, blobasint(value) from table limit 3
There should be a conversion method with the Python driver as well. | 1 | 0 | 0 | Not able to convert cassandra blob/bytes string to integer | 1 | python,cassandra,cqlsh,cqlengine | 0 | 2016-09-21T09:02:00.000 |
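As an illustration of the client-side route hinted at in the last sentence above: the sample bytes shown in the question look like a one-byte type tag followed by a big-endian IEEE-754 double (0x04 + 0x4097A80000000000 decodes to 1514.0). If that guess about the serialisation holds for your data, a sketch in Python would be:

# Sketch: decode the sample blob values shown in the question.
# Assumption: the first byte is a type tag, the remaining 8 bytes are a big-endian double.
import struct

def decode(blob):
    return struct.unpack(">d", blob[1:9])[0]

print(decode(b"\x01\x00\x00\x00\x00\x00\x00\x00\x00"))  # 0.0
print(decode(b"\x04\x40\x97\xa8\x00\x00\x00\x00\x00"))  # 1514.0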
In BigQuery it's possible to write to a new table the results of a query. I'd like the table to be created only whenever the query returns at least one row. Basically I don't want to end up creating empty table. I can't find an option to do that. (I am using the Python library, but I suppose the same applies to the raw API) | 0 | 1 | 0.099668 | 0 | false | 39,632,118 | 0 | 788 | 1 | 0 | 0 | 39,616,849 | There's no option to do this in one step. I'd recommend running the query, inspecting the results, and then performing a table copy with WRITE_TRUNCATE to commit the results to the final location if the intermediate output contains at least one row. | 1 | 0 | 0 | Write from a query to table in BigQuery only if query is not empty | 2 | python,google-bigquery | 0 | 2016-09-21T12:41:00.000 |
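A hedged sketch of the two-step approach in the answer above, using the google-cloud-bigquery client; the table names and query are placeholders, and the exact client API has changed over the years, so treat this as an outline rather than a definitive recipe:

# Sketch: run the query, then copy its (anonymous) result table only if it has rows.
from google.cloud import bigquery

client = bigquery.Client()
final_table = "myproject.mydataset.final_result"   # placeholder

job = client.query("SELECT * FROM `myproject.mydataset.source` WHERE x > 0")  # placeholder query
rows = job.result()          # wait for completion; results land in a temporary table

if rows.total_rows > 0:
    copy_config = bigquery.CopyJobConfig(write_disposition="WRITE_TRUNCATE")
    client.copy_table(job.destination, final_table, job_config=copy_config).result()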
I'm attempting to transfer all data over to Neo4j, and am wondering if it would be alright to name all properties on nodes the same as in Postgres exactly. E.g id will be id, name will be name, and so on. Are there any conflicts with doing something like this? | 0 | 0 | 0 | 0 | false | 39,718,452 | 0 | 68 | 1 | 0 | 0 | 39,713,636 | No, especially if you use the one of the clients to do the migration as they will automatically escape anything that needs to be escaped, but there's nothing I've come across. | 1 | 0 | 0 | Neo4j Transferring All Data from Postgres | 1 | python,neo4j,neo4jclient,neo4jrestclient | 0 | 2016-09-26T22:59:00.000 |
I have been trying to schedule a report in SAP BO CMC. This report was initially written in Python and built into a .exe file. This .exe application runs to save the report into an .xlsx file in a local folder.
I want to utilize the convenient scheduling functions in SAP BO CMC to send the report in Emails. I tried and created a "Local Program" in CMC and linked it to the .exe file, but you can easily imagine the problem I am faced with -- the application puts the file in the folder as usual but CMC won't be able to grab the Excel file generated.
Is there a way to re-write the Python program a bit so that the output is not a file in some folder, but an object that CMC can get as an attachment to the Emails?
I have been scheduling Crystal reports in CMC and this happens naturally. The Crystal output can be sent as an attachment to the Email. Wonder if the similar could happen for a .exe , and how?
Kindly share your thoughts. Thank you very much!
P.S. Don't think it possible to re-write the report in Crystal though, as the data needs to be manipulated based on inputs from different data sources. That's where Python comes in to help. And I hope I don't need to write the program as to cover the Emailing stuff and schedule it in windows' scheduled tasks. Last option... This would be too inconvenient to maintain. We don't get access to the server easily. | 0 | 1 | 0.197375 | 0 | false | 39,727,668 | 1 | 744 | 1 | 0 | 0 | 39,726,495 | It's kind of hack-ish, but it can be done. Have the program (exe) write out the bytes of the Excel file to standard output. Then configure the program object for email destination, and set the filename to a specific name (ex. "whatever.xlsx").
When emailing a program object, the attached file will contain the standard output/error of the program. Generally this will just be text but it works for binary output as well.
As this is a hack, if the program generates any other text (such as error message) to standard out, it will be included in the .xlsx file, which will make the file invalid. I'd suggest managing program errors such that they get logged to a file and NOT to standard out/error.
I've tested this with a Java program object; but an exe should work just as well. | 1 | 0 | 0 | How can I out put an Excel file as Email attachment in SAP CMC? | 1 | python,excel,email,sap,business-objects | 0 | 2016-09-27T13:50:00.000 |
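An illustrative, assumption-level Python sketch of the stdout trick described above (not tested against CMC): build the workbook in memory and write the raw bytes to standard output, keeping any diagnostics out of stdout.

# Sketch: emit an .xlsx as raw bytes on stdout so the scheduler can capture it as an attachment.
import io
import sys
import openpyxl

wb = openpyxl.Workbook()
wb.active.append(["example", 123])   # placeholder report content

buf = io.BytesIO()
wb.save(buf)

# Errors should go to a log file, never to stdout, or they would corrupt the attachment.
sys.stdout.buffer.write(buf.getvalue())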
I am using Flask, Flask-SQLAlchemy and Flask-Migrate to manage my models. I just realized that in my latest database state, when I create a new migration file with python manage.py db migrate -m 'test migration', it will not create an empty migration file. Instead it tries to create and drop several unique key and foreign key constraints.
Any ideas why it behaves like this? | 1 | 4 | 1.2 | 0 | true | 39,761,658 | 1 | 1,587 | 1 | 0 | 0 | 39,744,688 | If you have made no changes to your model from the current migration, but you get a non-empty migration file generated, then it suggests for some reason your models became out of sync with the database, and the contents of this new migration are just the things that are mismatched.
If you say that the migration contains code that drops some constraints and adds some other ones, it makes me think that the constraint names have probably changed, or maybe you upgraded your SQLAlchemy version to a newer version that generates constraints with different names. | 1 | 0 | 0 | Why Flask Migrate doesn't create an empty migration file? | 1 | python,flask-sqlalchemy,flask-migrate | 0 | 2016-09-28T10:20:00.000 |
I'm trying to add a text box to a chart I've generated with openpyxl, but can't find documentation or examples showing how to do so. Does openpyxl support it? | 2 | 0 | 0 | 0 | false | 39,774,351 | 0 | 3,047 | 1 | 0 | 0 | 39,773,544 | I'm not sure what you mean by "text box". In theory you can add pretty much anything covered by the DrawingML specification to a chart but the practice may be slightly different.
However, there is definitely no built-in API for this so you'd have to start by creating a sample file and working backwards from it. | 1 | 0 | 0 | Adding a text box to an excel chart using openpyxl | 2 | python,openpyxl | 0 | 2016-09-29T14:49:00.000 |
I have installed PostgreSQL Server 9.6.0 and Python 3.4.2 on Windows 2012 R2 Server.
I copied plpython3.dll to C:/Program Files/PostgreSQL/9.6/lib/
The in PostgreSQL I try running this command: CREATE EXTENSION plpython3u;
And I receive this message:
ERROR: could not load library "C:/Program Files/PostgreSQL/9.6/lib/plpython3.dll": The specified module could not be found.
Under this folder: C:\Program Files\PostgreSQL\9.6\share\extension there are plpython3u files.
How can I get PostgreSQL to recognize this Python 3 extension?
Thanks! | 3 | 0 | 0 | 0 | false | 41,320,600 | 0 | 5,532 | 3 | 0 | 0 | 39,800,075 | plpython3.dll in the official package is built against python 3.3, not python 3.4. What it expect is python33.dll in system32 folder. You need to install python 3.3 for your system.
Since py33 has been phased out, you may soon get frustrated, due to lack of pre-built binary package, lxml, pyzmq etc all needs to be built from source. If you need any binary module, make sure you have a correctly set up compiler. | 1 | 0 | 0 | Error during: CREATE EXTENSION plpython3u; on PostgreSQL 9.6.0 | 5 | python,postgresql | 0 | 2016-09-30T21:04:00.000 |
I have installed PostgreSQL Server 9.6.0 and Python 3.4.2 on Windows 2012 R2 Server.
I copied plpython3.dll to C:/Program Files/PostgreSQL/9.6/lib/
The in PostgreSQL I try running this command: CREATE EXTENSION plpython3u;
And I receive this message:
ERROR: could not load library "C:/Program Files/PostgreSQL/9.6/lib/plpython3.dll": The specified module could not be found.
Under this folder: C:\Program Files\PostgreSQL\9.6\share\extension there are plpython3u files.
How can I get PostgreSQL to recognize this Python 3 extension?
Thanks! | 3 | 1 | 0.039979 | 0 | false | 54,007,128 | 0 | 5,532 | 3 | 0 | 0 | 39,800,075 | Exactly the same situation I faced with Postgres 9.6 Windows 10.
PL/Python3U would not get through.
Worked around it:
Installed Python34 64bit Windows 10 version.
Copied Python34.dll to c:\windows\system32 as Python33.dll and it worked. | 1 | 0 | 0 | Error during: CREATE EXTENSION plpython3u; on PostgreSQL 9.6.0 | 5 | python,postgresql | 0 | 2016-09-30T21:04:00.000 |
I have installed PostgreSQL Server 9.6.0 and Python 3.4.2 on Windows 2012 R2 Server.
I copied plpython3.dll to C:/Program Files/PostgreSQL/9.6/lib/
The in PostgreSQL I try running this command: CREATE EXTENSION plpython3u;
And I receive this message:
ERROR: could not load library "C:/Program Files/PostgreSQL/9.6/lib/plpython3.dll": The specified module could not be found.
Under this folder: C:\Program Files\PostgreSQL\9.6\share\extension there are plpython3u files.
How can I get PostgreSQL to recognize this Python 3 extension?
Thanks! | 3 | 6 | 1 | 0 | false | 46,281,240 | 0 | 5,532 | 3 | 0 | 0 | 39,800,075 | Copy the python34.dll file to c:\windows\system32 and name the copy python33.dll
The create language plpython3u should then work without a problem. | 1 | 0 | 0 | Error during: CREATE EXTENSION plpython3u; on PostgreSQL 9.6.0 | 5 | python,postgresql | 0 | 2016-09-30T21:04:00.000 |
I have created a rather large CSV file (63000 rows and around 40 columns) and I want to join it with an ESRI Shapefile.
I have used ArcPy but the whole process takes 30(!) minutes. If I make the join with the original (small) CSV file, join it with the Shapefile and then make my calculations with ArcPy, continuously adding new fields and calculating the values, it takes 20 minutes. I am looking for a faster solution and found there are other Python modules such as PySHP or DBFPy, but I have not found any way of joining tables with them, hoping that could go faster.
My goal is to get away from ArcPy as much as I can and preferably only use Python, so preferably no PostgreSQL and the like either.
Does anybody have a solution for that? Thanks a lot! | 2 | 0 | 0 | 1 | false | 39,892,009 | 0 | 377 | 1 | 0 | 0 | 39,868,163 | Not exactly a programmatical solution for my problem but a practical one:
My shapefile is always static, only the attributes of the features will change. So I copy my original shapefile (only the basic files with endings .shp, .shx, .prj) to my output folder and rename it to the name I want.
Then I create my CSV-File with all calculations and convert it to DBF and save it with the name of my new shapefile to the output folder too. ArcGIS will now load the shapefile along with my own DBF file and I don't even need to do any tablejoin at all!
Now my program runs through in only 50 seconds!
I am still interested in more solutions for the table join problem, maybe I will encounter that problem again in the future where the shapefile is NOT always static. I did not really understand Nan's solution, I am still at "advanced beginner" level in Python :)
Cheers | 1 | 0 | 0 | DBF Table Join without using Arcpy? | 1 | python-2.7,dbf,arcpy,pyshp | 0 | 2016-10-05T07:45:00.000 |
I have been developing a Django project using sqlite3 as the backend and it has been working well. I am now attempting to switch the project over to use postgres as the backend but running into some issues.
After modifying my settings file, setting up postgres, creating the database and user I get the error below when running manage.py migrate
django.db.utils.ProgrammingError: relation "financemgr_rate" does not exist
financemgr is an app within the project. rate is a table within the app.
If I run this same command but specify sqlite3 as my backend it works fine.
For clarity I will repeat:
Environment Config1
Ubuntu 14.04, Django 1.10
Settings file has 'ENGINE': 'django.db.backends.sqlite3'
Run manage.py migrate
Migration runs and processes all the migrations successfully
Environment Config2
Ubuntu 14.04, Django 1.10
Settings file has 'ENGINE': 'django.db.backends.postgresql_psycopg2'
Run manage.py migrate
Migration runs and gives the error django.db.utils.ProgrammingError: relation "financemgr_rate" does not exist
Everything else is identical. I am not trying to migrate data, just populate the schema etc.
Any ideas? | 2 | 2 | 1.2 | 0 | true | 40,100,350 | 1 | 955 | 1 | 0 | 0 | 39,879,939 | This may help you :
I think you have pre-existing migration files (generated for the sqlite database).
Now you have changed the database configuration, but Django is still looking for the existing tables according to the migration files you have (generated for the previous database).
It is better to delete all the migration files in your app's migrations folder and migrate again by running python manage.py makemigrations and python manage.py migrate; it may then work fine. | 1 | 0 | 0 | django migrate failing after switching from sqlite3 to postgres | 1 | python,django,postgresql,django-models,sqlite | 0 | 2016-10-05T17:06:00.000 |
I'm using django-tables2 in order to show values from a database query, and everything works fine. I'm now using django-debug-toolbar and was looking through my pages with it, more out of curiosity than performance needs. When I looked at the page with the table, I saw that the debug toolbar registered over 300 queries for a table with a little over 300 entries. I don't think flooding the DB with so many queries is a good idea even if there is no performance impact (at least not now). All the data should be coming from only one query.
Why is this happening and how can i reduce the number of queries? | 3 | 2 | 1.2 | 0 | true | 39,882,505 | 1 | 348 | 1 | 0 | 0 | 39,882,504 | Im posting this as a future reference for myself and other who might have the same problem.
After searching for a bit I found out that django-tables2 was sending a single query for each row. The query was something like SELECT * FROM "table" LIMIT 1 OFFSET 1 with increasing offset.
I reduced the number of sql calls by calling query = list(query) before i create the table and pass the query. By evaluating the query in the python view code the table now seems to work with the evaulated data instead and there is only one database call instead of hundreds. | 1 | 0 | 0 | django-tables2 flooding database with queries | 2 | python,django,django-tables2 | 0 | 2016-10-05T19:49:00.000 |
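In code, the fix described above amounts to something like the following sketch; the view, model and table names are placeholders, not taken from the original project:

# Sketch: force a single query by evaluating the queryset before handing it to django-tables2.
from django.shortcuts import render
from .models import MyModel        # placeholder model
from .tables import MyModelTable   # placeholder tables.Table subclass

def my_view(request):
    rows = list(MyModel.objects.all())   # one SELECT; the rows now live in memory
    table = MyModelTable(rows)           # the table slices the Python list, not the queryset
    return render(request, "table.html", {"table": table})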
This is quite a general question, though I’ll give the specific use case for context.
I'm using a FileMaker Pro database to record personal bird observations. For each bird on the national list, I have extracted quite a lot of base data by website scraping in Python, for example conservation status, geographical range, scientific name and so on. In day-to-day use of the database, this base data remains fixed and unchanging. However, once a year or so I will want to re-scrape the base data to pick up the most recent published information on status, range, and even changes in scientific name (that happens).
I know there are options such as PyFilemaker or bBox which should allow me to write to the FileMaker database from Python, so the update mechanism itself shouldn't be a problem.
It would be rather dangerous simply to overwrite all of last year’s base data with the newly scraped data, and I'm looking for general advice as to how best to provide visibility for the changes before manually importing them. What I have in mind is to use pandas to generate a spreadsheet using the base data, and to highlight the changed cells. Does that sound a sensible way of doing it? I suspect that this may be a very standard requirement, and if anybody could help out with comments on an approach which is straightforward to implement in Python that would be most helpful. | 0 | 1 | 1.2 | 0 | true | 39,941,551 | 1 | 46 | 1 | 0 | 0 | 39,936,352 | This is not a standard requirement and there is no easy way of doing this. The best way to track changes is a Source Control system like git, but it is not applicable to FileMaker Pro as the files are binary.
You can try your approach, or you can try to add the new records in FileMaker instead of updating them and flag them as current or use only the last record
There are some amazing guys here, but you might want to take it to one of the FileMAker forums as the FIleMAker audience there is much larger then in SO | 1 | 0 | 0 | Providing visibility of periodic changes to a database | 1 | python,filemaker | 0 | 2016-10-08T19:12:00.000 |
I have a function like the following that I want to use to compute the hash of a sqlite database file, in order to compare it to the last backup I made to detect any changes.
import hashlib

def get_hash(file_path):
    # http://stackoverflow.com/a/3431838/1391717
    hash_sha1 = hashlib.sha1()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(4096), b""):
            hash_sha1.update(chunk)
    return hash_sha1.hexdigest()
I plan on locking the database, so no one can write to it while I'm computing the hash. Is it possible for me to cause any harm while doing this?
# http://codereview.stackexchange.com/questions/78643/create-sqlite-backups
import sqlite3

connection = sqlite3.connect(database_file)
cursor = connection.cursor()
cursor.execute("begin immediate")
db_hash = get_hash(args.database)
So if you only read your fine.
If you are planning to lock the database and succeed with that, while you compute the hash, you become a writer with exclusive access. | 1 | 0 | 0 | How to compute a hash of a sqlite database file without causing harm | 1 | python,sqlite | 0 | 2016-10-08T19:49:00.000 |
I want to use a test db on my test environment, and the production db on production environment in my Python application.
How should I handle routing to two dbs? Should I have an untracked config.yml file that has the test db's connection string on my test server, and the production db's connection string on production server?
I'm using github for version control and travis ci for deployment. | 0 | 0 | 0 | 0 | false | 39,951,058 | 1 | 68 | 1 | 0 | 0 | 39,950,769 | Let's take Linux environment for example. Often, the user level configuration of an application is placed under your home folder as a dot file. So what you can do is like this:
In your git repository, track a sample configure file, e.g., config.sample.yaml, and put the configuration structure here.
When deploying, either in test environment or production environment, you can copy and rename this file as a dot-file, e.g., $HOME/.{app}.config.yaml. Then in your application, you can read this file.
If you are developing an python package, you can make the file copy operation done in the setup.py. There are some advantages:
You can always track the structure changes of your configuration file.
Separate configuration between test and production enviroment.
More secure, you do not need to code your import db connection information in the public file.
Hope this would be helpful. | 1 | 0 | 0 | Using different dbs on production and test environment | 1 | python,github,configuration,travis-ci,configuration-files | 0 | 2016-10-10T03:06:00.000 |
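A small hedged sketch of the pattern in the answer above; the file names and keys are made up for illustration and assume PyYAML is installed:

# Sketch: read per-environment settings from a dotfile in the home directory,
# falling back to the tracked sample file.
import os
import yaml

def load_config(app_name="myapp"):
    user_cfg = os.path.expanduser("~/.%s.config.yaml" % app_name)
    path = user_cfg if os.path.exists(user_cfg) else "config.sample.yaml"
    with open(path) as f:
        return yaml.safe_load(f)

cfg = load_config()
db_url = cfg["database"]["url"]   # e.g. the test vs production connection string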
1) I have Spark on Bluemix platform, how do I add a library there ?
I can see the preloaded libraries but cant add a library that I want.
Any command line argument that will install a library?
pip install --package is not working there
2) I have Spark and Mongo DB running, but I am not able to connect both of them.
con ='mongodb://admin:ITCW....ssl=true'
ssl1 ="LS0tLS ....."
client = MongoClient(con,ssl=True)
db = client.mongo11
collection = db.mongo11
ff=db.sammy.find()
Error I am getting is :
SSL handshake failed: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590) | 0 | 2 | 1.2 | 1 | true | 40,021,035 | 0 | 106 | 1 | 0 | 0 | 40,020,767 | In a Python notebook:
!pip install <package>
and then
import <package> | 1 | 0 | 0 | Add a library in Spark in Bluemix & connect MongoDB , Spark together | 1 | python,apache-spark,ibm-cloud,ibm-cloud-plugin | 0 | 2016-10-13T12:17:00.000 |
Using the new API (Python/Odoo) I successfully upload an Excel file.
But if I upload the same file a second time, the data is duplicated.
So how do I upload only unique data?
If there is no change in the Excel file, there should be no change in the records.
But if some data has changed,
only that record should be updated and the remaining records should stay the same as uploaded.
Thanks | 1 | 0 | 0 | 0 | false | 44,375,335 | 0 | 58 | 2 | 0 | 0 | 40,068,892 | Hello viral,
When you upload the Excel data the first time, take one unique column (i.e. an ID),
and when you upload data the second time, check that unique column: if the value is found, only update its data, otherwise upload the data as a new record. | 1 | 0 | 0 | Unique Data Uplload in Python Excel | 2 | python,openerp,odoo-8 | 0 | 2016-10-16T09:40:00.000 |
Using the new API (Python/Odoo) I successfully upload an Excel file.
But if I upload the same file a second time, the data is duplicated.
So how do I upload only unique data?
If there is no change in the Excel file, there should be no change in the records.
But if some data has changed,
only that record should be updated and the remaining records should stay the same as uploaded.
Thanks | 1 | 0 | 0 | 0 | false | 40,134,328 | 0 | 58 | 2 | 0 | 0 | 40,068,892 | For that you need atleast one field to identify the record to check the duplicacy. | 1 | 0 | 0 | Unique Data Uplload in Python Excel | 2 | python,openerp,odoo-8 | 0 | 2016-10-16T09:40:00.000 |
As part of a bigger set of tests I need to extract all the formulas within an uploaded Excel workbook. I then need to parse each formula into its respective range references and dump those references into a simple database. For example, if Cell A1 has a formula =B1 + C1 then my database would record B1 and C1 as referenced cells.
Currently I read formulas one at a time using openpyxl and then parse them. This is fine for smaller workbooks, but for large workbooks it can be very slow. It feels entirely inefficient.
Could pandas or a similar module extract Excel formulas faster? Or is there perhaps a better way to extract all workbook formulas than reading it one cell at a time?
Any advice would be highly appreciated. | 2 | 2 | 1.2 | 0 | true | 40,126,930 | 0 | 994 | 2 | 0 | 0 | 40,117,180 | What do you mean by "extracting the formulae faster"? They are stored with each cell so you have to go cell by cell. When it comes to parsing, openpyxl includes a tokeniser which you might find useful. In theory this would allow you to read the worksheet XML files directly and only parse the nodes with formulae in them. However, you'd also have to handle the "shared formulae" that some applications use. openpyxl automatically converts such formulae into per-cell ones.
Internally Pandas relies on xlrd to read the files, so the ETL of getting the stuff into Pandas won't be faster than working directly with worksheet objects. | 1 | 0 | 0 | Fastest way to parse all Excel formulas using Python 3.5 | 2 | excel,python-3.x,pandas,openpyxl | 0 | 2016-10-18T20:07:00.000 |
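As a concrete illustration of the tokeniser mentioned above (a hedged sketch; how you walk the workbook to collect the formula strings is up to you):

# Sketch: pull range/cell references out of a formula string with openpyxl's tokenizer.
from openpyxl.formula import Tokenizer

def referenced_ranges(formula):
    tok = Tokenizer(formula)
    return [t.value for t in tok.items
            if t.type == "OPERAND" and t.subtype == "RANGE"]

print(referenced_ranges("=B1 + C1"))             # ['B1', 'C1']
print(referenced_ranges("=SUM(Sheet2!A1:A10)"))  # ['Sheet2!A1:A10']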
As part of a bigger set of tests I need to extract all the formulas within an uploaded Excel workbook. I then need to parse each formula into its respective range references and dump those references into a simple database. For example, if Cell A1 has a formula =B1 + C1 then my database would record B1 and C1 as referenced cells.
Currently I read formulas one at a time using openpyxl and then parse them. This is fine for smaller workbooks, but for large workbooks it can be very slow. It feels entirely inefficient.
Could pandas or a similar module extract Excel formulas faster? Or is there perhaps a better way to extract all workbook formulas than reading it one cell at a time?
Any advice would be highly appreciated. | 2 | 0 | 0 | 0 | false | 40,118,989 | 0 | 994 | 2 | 0 | 0 | 40,117,180 | Don't know about python, but a fast approach to the problem is:
get all the formulas in R1C1 mode into an array using specialcells
feed into a collection/dictionary to get uniques
then parse the uniques | 1 | 0 | 0 | Fastest way to parse all Excel formulas using Python 3.5 | 2 | excel,python-3.x,pandas,openpyxl | 0 | 2016-10-18T20:07:00.000 |
I am storing all the static files in AWS S3 Bucket and I am using Docker containers to run my application. This way, whenever I want to deploy the changes, I create a new container using a new image.
I am running ./manage.py collectstatic on every deployment because sometimes I add libraries to the project that have static files; and it takes forever to reupload them to S3 on every deployment. Is there a way I can keep a list of static files uploaded to S3 in my database, so that collectstatic only uploads to the added files. | 0 | 0 | 0 | 0 | false | 40,185,281 | 1 | 228 | 1 | 0 | 0 | 40,184,760 | Clean solution would be to read the source for collectstatic and write your own management command that would do the same thing, but would write a file list into the database. A quick and dirty way would be to pipe the output of collectstatic into a script of some sort that would reformat it as SQL and pipe it through a database client. | 1 | 0 | 0 | How can I make django to write static files list to database when using collectstatic | 1 | python,django,amazon-web-services,amazon-s3,django-staticfiles | 0 | 2016-10-21T20:13:00.000 |
How do I exit dbshell (SQLite 3) on the command line when using Django?
It's my first time to use the command. I watch a book and am practicing Django at the same time. After I run this command, I have no idea how to leave the environment since I have never learned SQL before. | 3 | 1 | 0.039979 | 0 | false | 40,205,233 | 1 | 5,764 | 3 | 0 | 0 | 40,205,197 | Just typing quit does the work. | 1 | 0 | 0 | How do I exit dbshell (SQLite 3) on the command line when using Django? | 5 | python,django,sqlite | 0 | 2016-10-23T16:36:00.000 |
How do I exit dbshell (SQLite 3) on the command line when using Django?
It's my first time to use the command. I watch a book and am practicing Django at the same time. After I run this command, I have no idea how to leave the environment since I have never learned SQL before. | 3 | 1 | 0.039979 | 0 | false | 47,724,071 | 1 | 5,764 | 3 | 0 | 0 | 40,205,197 | You can just hit the key combination Ctrl + C. | 1 | 0 | 0 | How do I exit dbshell (SQLite 3) on the command line when using Django? | 5 | python,django,sqlite | 0 | 2016-10-23T16:36:00.000 |
How do I exit dbshell (SQLite 3) on the command line when using Django?
It's my first time to use the command. I watch a book and am practicing Django at the same time. After I run this command, I have no idea how to leave the environment since I have never learned SQL before. | 3 | 3 | 0.119427 | 0 | false | 51,351,884 | 1 | 5,764 | 3 | 0 | 0 | 40,205,197 | You can type .exit in thew shell to exit. For more information about commands, type .help.
It raises an error and exits ... it was helpful :) | 1 | 0 | 0 | How do I exit dbshell (SQLite 3) on the command line when using Django? | 5 | python,django,sqlite | 0 | 2016-10-23T16:36:00.000 |
I'm trying to figure out how to convert an entire column of a spreadsheet from an int to a string. The problem I'm having is that I have a bunch of Excel spreadsheets whose values I want to upload to our database. Our numbers are 10 digits long and are being converted to scientific notation, so I want to convert all of our numbers from ints into strings before the upload.
I've been trying to do some research, but I can't find any libraries that would convert an entire column -- do I need to iterate row by row converting the numbers to strings?
Thank you. | 0 | 0 | 0 | 0 | false | 57,348,868 | 0 | 2,713 | 1 | 0 | 0 | 40,266,970 | You could try:
df['column_name'] = df['column_name'].astype(str) | 1 | 0 | 1 | Converting Excel Column Type From Int to String in Python | 1 | python,excel | 0 | 2016-10-26T15:58:00.000 |
I am using python for reading Unicode data and then Preprocessing it and storing it in a database (Postgres)
Now the database has 3 tables with 4 attributes and 700,000 tuples each. I read the data and map it to python dictionary and list according to the way I need to use it.
Now I have to iterate through all these tuples, do some calculations and write the results back to the database.
I have to do 1000 iterations like this. The problem is that one iteration takes about 50 minutes, which makes it impossible to run that many iterations.
Is there any way I can make these iterations faster?
Any new idea is welcome. Not necessary in python. | 0 | 0 | 0 | 0 | false | 40,284,042 | 0 | 334 | 1 | 0 | 0 | 40,282,987 | You don't say what updates each "iteration" performs, but clearly you are reading and writing 7 million rows. Would it be possible to use the database to perform the updates? | 1 | 0 | 1 | Data Preprocessing with python | 1 | python-3.x,postgresql-9.1 | 0 | 2016-10-27T11:03:00.000 |
I have a database with time series data split up into even sized chunks stored as arrays in postgres.
I need to arbitrarily extract ranges of them and concatenate the returned set into a single array. They have an offset field so given a start offset and length you can find any part of the set you are looking for.
Which is better:
To write queries that return each individual array and concatenate in software
or
Use a stored procedure that takes a start point and length and does the concatenation internally before returning the entire array | 0 | 0 | 0 | 0 | false | 40,320,130 | 0 | 25 | 1 | 0 | 0 | 40,316,891 | “Better” is a rather unspecific adjective in this case.
If you are asking for aesthetic judgement, simplicity of code and mantainability, I don't feel in a position to pronounce a clear judgement. My gut feeling is that both are similar.
If you are asking about good performance, I'd advise you to run a simple test. But even without a test I'd say that both solutions are not optimal and you should write it as a single SQL statement.
If you are asking about portability, the answer depends on whether it is more important to port to another database (that would favor the application software solution) or to port to a different programming language (in that case, the solution in the database is preferable). | 1 | 0 | 0 | stored procedure versus queries for array concatanation | 1 | arrays,postgresql,python-3.x | 0 | 2016-10-29T07:07:00.000 |
I'm designing a trade management system. I want to be able to enter in values into excel and have python do some computation (rather than excel). Is this even possible?
With openpyxl I have to enter in the value to excel, save, close, run the script, reopen excel. This is an unacceptable in terms of the design criteria.
Can any one recommend a better way to have a live interface which updates when values are changed in the cells ? Ideally I would like to remain with excel | 1 | 0 | 0 | 0 | false | 40,381,980 | 0 | 1,126 | 1 | 0 | 0 | 40,381,804 | I don't know if that is even possible, but it will be at least difficult to do. Because Excel locks the sheet file when it is read, it cannot be modified by other processes while it is opened.
So that leave only the possibility to have the Excel process modify the file. And scripting in Excel can only be done in VB as far as I know (but I don't know much about that). | 1 | 0 | 0 | How to update an Excel worksheet live without closing and reopening python | 1 | python,excel | 0 | 2016-11-02T13:58:00.000 |
Is there any difference between cursor in cx_oracle api which is used in python and cursor of plsql in database?
If there is a difference please elaborate on it.
I am using python 3.5 and oracle 11g database and eclipse ide to use api and connect.
Thanks in advance, | 0 | 0 | 0 | 0 | false | 40,425,912 | 0 | 95 | 1 | 0 | 0 | 40,406,598 | They both reference the same underlying concepts and methods. What you can do in the one you should be able to do in the other. There are limitations, of course, but these are due to the differences in the languages being used in each case. If you have a specific question, please update your question accordingly! | 1 | 0 | 0 | Difference between cx_oracle.cursor in python and cursor in database | 1 | oracle11g,cursor,python-3.5,cx-oracle | 0 | 2016-11-03T16:23:00.000 |
I have web application which dynamically deployed on EC2 instances (scalable). Also I have RDS mysql instance which dynamically created by python with boto3. Now port 3306 of RDS is public, but I want to allow connection only from my EC2's from specific VPC. Can I create RDS on specific VPC (same one with EC2 instances)? What is best practice to create such set EC2 + RDS ? | 0 | 1 | 0.099668 | 0 | false | 40,433,014 | 1 | 214 | 1 | 0 | 1 | 40,426,863 | It is certainly best practice to have your Amazon EC2 instances in the same VPC as the Amazon RDS database. Recommended security is:
Create a Security Group for your web application EC2 instances (Web-SG)
Launch your Amazon RDS instance in a private subnet in the same VPC
Configure the Security Group on the RDS instance to allow incoming MySQL (3306) traffic from the Web-SG security group
If your RDS instance is currently in a different VPC, you can take a snapshot and then create a new database from the snapshot.
If you are using an Elastic Load Balancer, you could even put your Amazon EC2 instances in a private subnet since all access will be via the Load Balancer. | 1 | 0 | 0 | Create AWS RDS on specific VPC | 2 | python,amazon-web-services,deployment,amazon-ec2,boto3 | 0 | 2016-11-04T15:50:00.000 |
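Since the question mentions creating the RDS instance from boto3, here is a hedged sketch of wiring the security groups as described above; all IDs, names and passwords are placeholders:

# Sketch: allow MySQL (3306) into the RDS security group only from the web tier's
# security group, and launch the instance inside the shared VPC's subnet group.
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

WEB_SG = "sg-web123456"   # placeholder: security group of the EC2 web instances
RDS_SG = "sg-rds123456"   # placeholder: security group attached to the RDS instance

ec2.authorize_security_group_ingress(
    GroupId=RDS_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": WEB_SG}],   # source = the web SG, not 0.0.0.0/0
    }],
)

rds.create_db_instance(
    DBInstanceIdentifier="app-db",             # placeholder
    Engine="mysql",
    DBInstanceClass="db.t2.micro",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="change-me",            # placeholder
    DBSubnetGroupName="private-subnet-group",  # subnet group in the same VPC as the EC2 instances
    VpcSecurityGroupIds=[RDS_SG],
    PubliclyAccessible=False,
)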
I was using Zodb for large data storage which was in form of typical dictionary format (key,value).
But while storing in ZODB i got following warning message:
C:\python-3.5.2.amd64\lib\site-packages\ZODB\Connection. py:550:
UserWarning: The object
you're saving is large. (510241658 bytes.)
Perhaps you're storing media which should be stored in blobs.
Perhaps you're using a non-scalable data structure, such as a
PersistentMapping or PersistentList.
Perhaps you're storing data in objects that aren't persistent at all.
In cases like that, the data is stored in the record of the containing
persistent object.
In any case, storing records this big is probably a bad idea.
If you insist and want to get rid of this warning, use the
large_record_size option of the ZODB.DB constructor (or the
large-record-size option in a configuration file) to specify a larger
size.
warnings.warn(large_object_message % (obj.class, len(p)))
please suggest how can i store large data in ZODB or suggest any other library for this purpose | 2 | 1 | 0.099668 | 0 | false | 40,549,472 | 0 | 901 | 1 | 0 | 0 | 40,548,608 | You must store the object on the filesystem and add reference to it in the zodb like using a regular database. | 1 | 0 | 1 | ZODB or other database for large data storage in python | 2 | python,database,dataset,zodb,object-oriented-database | 0 | 2016-11-11T13:01:00.000 |
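A hedged sketch of the filesystem-reference idea in the answer above (the warning itself also points at ZODB blobs as a built-in alternative); the file paths and data are placeholders:

# Sketch: keep the big payload on disk and store only a reference (the path) in ZODB.
import os
import pickle
import uuid
import ZODB, ZODB.FileStorage
import transaction

DATA_DIR = "payloads"          # placeholder directory for the large pickles
os.makedirs(DATA_DIR, exist_ok=True)

db = ZODB.DB(ZODB.FileStorage.FileStorage("app.fs"))
conn = db.open()
root = conn.root()

big_dict = {i: "x" * 100 for i in range(1000)}   # stand-in for the large mapping

path = os.path.join(DATA_DIR, uuid.uuid4().hex + ".pkl")
with open(path, "wb") as f:
    pickle.dump(big_dict, f)

root["big_dict_ref"] = path    # only the small reference lives in the ZODB record
transaction.commit()
conn.close()
db.close()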
I am using SQL to pull in values from a 'lookup' table. I will use a cursor and fetchall, and then loop through the values and place them into a dictionary. I do not see a reason to keep querying the database (open connection, query, close connection) for every lookup performed when a dictionary of a subset of the data should suffice. Is it 'standard' practice to use a dictionary in lieu of a table?
Is there a way to test this with different sets of values without connecting to database? I would prefer at least unit testing without connecting to data store. Some framework or some pattern? Not sure what to investigate. | 1 | 0 | 0 | 0 | false | 40,551,282 | 0 | 56 | 1 | 0 | 0 | 40,550,998 | I do a lot of this. And although it sounds like a cop out, the answer is "it depends":
If the dataset is very large I would keep referring back to the database as loading it in to memory could be a resource issue.
If the dataset is not to large then loading it in to a memory and referring to it can really improve performance.
I tend to test and see what the performance is like. | 1 | 0 | 0 | Python Testing without sql connection | 1 | python,sql,testing | 0 | 2016-11-11T15:21:00.000 |
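One hedged way to structure this so the lookup logic can be unit-tested without a database connection; the table and column names are illustrative only:

# Sketch: build the lookup dict from any row source, so tests can pass plain tuples
# instead of a live cursor.
def build_lookup(rows):
    # rows: iterable of (key, value) pairs, e.g. cursor.fetchall() in production
    return {key: value for key, value in rows}

def load_lookup_from_db(conn):
    cur = conn.cursor()
    cur.execute("SELECT code, label FROM lookup_table")   # placeholder query
    return build_lookup(cur.fetchall())

# In a unit test, no connection is needed:
import unittest

class LookupTest(unittest.TestCase):
    def test_build_lookup(self):
        lookup = build_lookup([("A", 1), ("B", 2)])
        self.assertEqual(lookup["B"], 2)

if __name__ == "__main__":
    unittest.main()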
I'm developing some project in Django, something to manage assets in warehouses. I want to use two databases to this. First is sqlite database, which contains any data about users. Second is mongoDB database,in which want to store all the data related to assets. The question is, how to tell to my model classes, which database they should use (models responsible for user registration etc - sqlite, models responsible for managing assets data - mongoDB)? I read about DATABASE_ROUTERS and using Meta classes, but it's solutions for supported databases by Django (or maybe I don't know something), I dont know if it's good and possible to integrate it with mongoengine.
Thanks for any tip! | 0 | 0 | 1.2 | 0 | true | 40,752,618 | 1 | 1,468 | 1 | 0 | 0 | 40,602,640 | I found a solution, It's very simple. If you want your model to use mongoDB database, just create model class witch Document parameter (or EmbeddedDocument, for example class Magazine(Document):). but if you prefer default database type defined, just create class, like in django documentation (example class Person(models.Model):). | 1 | 0 | 0 | Managing databases in Django models, sqlite and mongoengine | 2 | python,django,mongodb,sqlite,mongoengine | 0 | 2016-11-15T05:28:00.000 |
I'm working on a project that I inherited, and I want to add a table to my database that is very similar to one that already exists. Basically, we have a table to log users for our website, and I want to create a second table to specifically log users that our site fails to do a task for.
Since I didn't write the site myself, and am pretty new to both SQL and Django, I'm a little paranoid about running a migration (we have a lot of really sensitive data that I'm paranoid about wiping).
Instead of having a django migration create the table itself, can I create the second table in MySQL, and the corresponding model in Django, and then have this model "recognize" the SQL table? without explicitly using a migration? | 0 | 0 | 0 | 0 | false | 40,616,612 | 1 | 38 | 1 | 0 | 0 | 40,616,036 | SHORT ANSWER: Yes.
MEDIUM ANSWER: Yes. But you will have to figure out how Django would have created the table, and do it by hand. That's not terribly hard.
Django may also spit out some warnings on startup about migrations being needed...but those are warnings, and if the app works, then you're OK.
LONG ANSWER: Yes. But for the sake of your sanity and sleep quality, get a completely separate development environment and test your backups. (But you knew that already.) | 1 | 0 | 0 | Do I Need to Migrate to Link my Database to Django | 1 | python,mysql,django | 0 | 2016-11-15T17:24:00.000 |
I am interested in building databases and have been reading about SQL engines and the pandas framework, and how they interact, but am still confused about the difference between a database and a data framework.
I wonder if somebody could point me to links which clarify the distinction between them, and which is the best starting point for a data analysis project. | 0 | 1 | 1.2 | 0 | true | 40,658,564 | 1 | 66 | 1 | 0 | 0 | 40,658,287 | A database is a place where you store a collection of data. You can manipulate the data with DML statements, and some operations can be more difficult (like pivots or functions). A data framework is a tool that makes your computations, pivots and other manipulations much easier (for example with a drag-and-drop option). | 1 | 0 | 0 | Database and Data Framework | 1 | python,sql,database,pandas | 0 | 2016-11-17T15:05:00.000 |
I built a simple statement to run a load data local infile on a MySQL table. I'm using Python to generate the string and also run the query. So I'm using the Python package pymysql.
This is the line to build the string. Assume metadata_filename is a proper string:
load_data_statement = """load data local infile """ + metadata_filename + """INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';"""
I originally had string substitution, and wanted to see if that was the issue, but it isn't. If I edit the statement above and comment out the ENCLOSED BY part, it is able to run, but does not properly load the data since I need the enclosing character
If I print(load_data_statement), I get what appears to be proper SQL code, but it doesn't seem to be read by the SQL connector. This is what's printed:
load data local infile 'filename.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY ''';
That all appears to be correct, but the Mysql engine is not taking it. What should I edit in Python to escape the single quote or just write it properly?
Edit:
I've been running the string substitution alternative, but still getting issues: load_data_statement = """load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
Also tried raw strings:load_data_statement = r"""load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
And tried adding extra escapes: load_data_statement = r"""load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\\'';""" % metadata_filename
And also tried escaping the quotes around the filename: load_data_statement = r"""load data local infile \'tgt_metadata_%s.txt\' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
The execute line is simply `cur.execute(load_data_statement)`.
And the error I'm getting is odd: `pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'tgt_metadta_mal.txt'.txt' INTO table tgt_metadata FIELDS TERMINATED BY ','; ENC' at line 1")
I don't understand why the message starts at 'tgt_metadata_mal.txt and shows only the first 3 letters of ENCLOSED BY... | 1 | 0 | 0 | 0 | false | 40,672,954 | 0 | 4,539 | 2 | 0 | 0 | 40,672,551 | I think the problem is with the SQL statement you print. The single quote in ''' should be escaped: '\''. Your backslash escapes the quote at Python level, and not the MySQL level. Thus the Python string should end with ENCLOSED BY '\\'';
You may also use the raw string literal notation:
r"""INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" | 1 | 0 | 0 | In Python how do I escape a single quote within a string that will be used as a SQL statement? | 2 | python,mysql,escaping,load-data-infile,pymysql | 0 | 2016-11-18T08:36:00.000 |
I built a simple statement to run a load data local infile on a MySQL table. I'm using Python to generate the string and also run the query. So I'm using the Python package pymysql.
This is the line to build the string. Assume metadata_filename is a proper string:
load_data_statement = """load data local infile """ + metadata_filename + """INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';"""
I originally had string substitution, and wanted to see if that was the issue, but it isn't. If I edit the statement above and comment out the ENCLOSED BY part, it is able to run, but it doesn't load the data properly, since I need the enclosing character.
If I print(load_data_statement), I get what appears to be proper SQL code, but it doesn't seem to be read by the SQL connector. This is what's printed:
load data local infile 'filename.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY ''';
That all appears to be correct, but the MySQL engine is not taking it. What should I edit in Python to escape the single quote or just write it properly?
Edit:
I've been running the string substitution alternative, but still getting issues: load_data_statement = """load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
Also tried raw strings:load_data_statement = r"""load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
And tried adding extra escapes: load_data_statement = r"""load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\\'';""" % metadata_filename
And also tried escaping the quotes around the filename: load_data_statement = r"""load data local infile \'tgt_metadata_%s.txt\' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
The execute line is simply `cur.execute(load_data_statement)`.
And the error I'm getting is odd: `pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'tgt_metadta_mal.txt'.txt' INTO table tgt_metadata FIELDS TERMINATED BY ','; ENC' at line 1")
I don't understand why the message starts at 'tgt_metadata_mal.txt and shows only the first 3 letters of ENCLOSED BY... | 1 | 3 | 0.291313 | 0 | false | 40,672,606 | 0 | 4,539 | 2 | 0 | 0 | 40,672,551 | No need for escaping that string.
cursor.execute("SELECT * FROM Codes WHERE ShortCode = %s", text)
You should use %s placeholders instead of building the strings yourself; the second argument (text in this case) supplies the value. This is the most secure way of protecting against SQL injection. | 1 | 0 | 0 | In Python how do I escape a single quote within a string that will be used as a SQL statement? | 2 | python,mysql,escaping,load-data-infile,pymysql | 0 | 2016-11-18T08:36:00.000
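A hedged sketch combining both answers, assuming pymysql and an existing table1: the ENCLOSED BY quote is written as '\\'' so the backslash survives Python's own parsing, and the file name is passed as a %s parameter so the driver quotes it (pymysql substitutes parameters client-side, which is why this also works for LOAD DATA):

```python
# Hedged sketch, not the asker's exact code. Connection settings and file name
# are placeholders; local_infile=True must be enabled on both client and server.
import pymysql

metadata_filename = "tgt_metadata_example.txt"  # hypothetical file name

conn = pymysql.connect(host="localhost", user="user", password="pw",
                       database="db", local_infile=True)
try:
    with conn.cursor() as cur:
        # The file name is supplied as a parameter, so no manual quoting of the
        # path is needed; '\\'' in the Python source reaches MySQL as '\''.
        cur.execute(
            "LOAD DATA LOCAL INFILE %s INTO TABLE table1 "
            "FIELDS TERMINATED BY ',' ENCLOSED BY '\\''",
            (metadata_filename,),
        )
    conn.commit()
finally:
    conn.close()
```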
I am building a social network where each user has 3 different profiles - Profile 1, Profile 2 and Profile 3.
This is my use case:
User A follows Users B, C and D in Profile 1. User A follows Users C, F and G in Profile 2. User C follows Users A and E in profile 3.
Another requirement is that any user on each of these profiles would need to see the latest (say, the top N) posts of the users they are following on their respective profiles (whether it is Profile 1, 2 or 3).
How can we best store the above information?
Context:
I am currently using the Django framework and a Postgres DB to store users' profile information. Users' posts are stored on and retrieved from a cloud CDN.
What is the best way to implement these use cases, i.e. which choice of technologies best suits this scenario? Scalability is another important factor that comes into play here. | 0 | 0 | 0 | 0 | false | 40,814,518 | 1 | 169 | 1 | 0 | 0 | 40,813,514 | Neo4j is a graph database, which is good for multi-hop relationship searches. Say you want to get the top N posts of A's brother's friend's sister... AFAIK, it's a standalone instance and you CANNOT partition your data across several nodes; otherwise the relationship between two people might cross machines.
Redis is a key-value store, which is good for searching by key. Say you want to get the friend list of A, or get the top N list of A. You can have a Redis cluster to distribute your data on several machines.
Which is better? It depends on your scenario. It seems that you don't need multi-hop relationship searches, so Redis might be better.
You can have a SET to save the friend list of each person, and a LIST to save the post ids of each person. When you need to show posts for user A, call SMEMBERS or SSCAN to get the friend list, and then call LRANGE for each friend to get the top N post ids. | 1 | 0 | 0 | Should I use Redis or Neo4J for the following use case? | 1 | python,django,postgresql,neo4j,redis | 0 | 2016-11-25T23:41:00.000
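A hedged sketch of that layout with the redis-py client; the key naming scheme (friends:&lt;profile&gt;:&lt;user&gt;, posts:&lt;user&gt;) is an assumption for illustration:

```python
# Sketch of the Redis layout described in the answer, using redis-py.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# A SET per user per profile holds who they follow.
r.sadd("friends:profile1:A", "B", "C", "D")

# A LIST per user holds that user's post ids, newest first.
r.lpush("posts:C", "post-101", "post-102")

def latest_posts_for(user, profile, n=10):
    """Collect the newest n post ids from everyone `user` follows on `profile`."""
    post_ids = []
    for friend in r.smembers("friends:%s:%s" % (profile, user)):
        post_ids.extend(r.lrange("posts:%s" % friend.decode(), 0, n - 1))
    return post_ids

print(latest_posts_for("A", "profile1"))
```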
I have a Python program that I am running as a Job on a Kubernetes cluster every 2 hours. I also have a webserver that starts the job whenever user clicks a button on a page.
I need to ensure that at most only one instance of the Job is running on the cluster at any given time.
Given that I am using Kubernetes to run the job and connecting to Postgresql from within the job, the solution should somehow leverage these two. I thought a bit about it and came up with the following ideas:
Find a setting in Kubernetes that would set this limit; attempts to start a second instance would then fail. I was unable to find this setting.
Create a shared lock, or mutex. The disadvantage is that if the job crashes, I may not unlock before quitting.
Kubernetes is running etcd, maybe I can use that
Create a 'lock' table in Postgresql, when new instance connects, it checks if it is the only one running. Use transactions somehow so that one wins and proceeds, while others quit. I have not yet thought this out, but is should work.
Query kubernetes API for a label I use on the job, see if there are some instances. This may not be atomic, so more than one instance may slip through.
What are the usual solutions to this problem given the platform choice I made? What should I do, so that I don't reinvent the wheel and have something reliable? | 0 | 1 | 1.2 | 0 | true | 40,968,608 | 0 | 1,116 | 1 | 1 | 0 | 40,958,107 | A completely different approach would be to run a (web) server that executes the job functionality. At a high level, the idea is that the webserver can contact this new job server to execute functionality. In addition, this new job server will have an internal cron to trigger the same functionality every 2 hours.
There could be 2 approaches to implementing this:
You can put the checking mechanism inside the jobserver code to ensure that even if 2 API calls happen simultaneously to the job server, only one executes, while the other waits. You could use the language platform's locking features to achieve this, or use a message queue.
You can put the checking mechanism outside the jobserver code (in the database) to ensure that only one API call executes. Similar to what you suggested. If you use a postgres transaction, you don't have to worry about your job crashing and the value of the lock remaining set.
The pros/cons of both approaches are straightforward. The major difference in my mind between 1 & 2, is that if you update the job server code, then you might have a situation where 2 job servers might be running at the same time. This would destroy the isolation property you want. Hence, database might work better, or be more idiomatic in the k8s sense (all servers are stateless so all the k8s goodies work; put any shared state in a database that can handle concurrency).
Addressing your ideas, here are my thoughts:
Find a setting in k8s that will limit this: k8s will not start things with the same name (in the metadata of the spec). But anything else goes for a job, and k8s will start another job.
a) etcd3 supports distributed locking primitives. However, I've never used this and I don't really know what to watch out for.
b) postgres lock value should work. Even in case of a job crash, you don't have to worry about the value of the lock remaining set.
Querying k8s API server for things that should be atomic is not a good idea like you said. I've used a system that reacts to k8s events (like an annotation change on an object spec), but I've had bugs where my 'operator' suddenly stops getting k8s events and needs to be restarted, or again, if I want to push an update to the event-handler server, then there might be 2 event handlers that exist at the same time.
I would recommend sticking with what you are most familiar with. In my case that would be implementing a job-server-like k8s deployment that runs as a server and listens to events/API calls. | 1 | 0 | 0 | Ensuring at most a single instance of job executing on Kubernetes and writing into Postgresql | 1 | python,postgresql,mutex,kubernetes,distributed-system | 0 | 2016-12-04T11:28:00.000
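A sketch of option b) using psycopg2 and a Postgres advisory lock; the lock id, connection settings and run_job() are placeholders. A session-level advisory lock is released automatically when a crashed job's connection drops:

```python
# Hedged sketch of the "lock value in Postgres" idea, not the asker's code.
import sys
import psycopg2

def run_job():
    pass  # hypothetical placeholder for the actual 2-hourly work

conn = psycopg2.connect("dbname=jobs user=worker password=pw host=localhost")
cur = conn.cursor()

cur.execute("SELECT pg_try_advisory_lock(42);")   # 42 is an arbitrary lock id
if not cur.fetchone()[0]:
    print("Another instance of the job is already running, exiting.")
    sys.exit(0)

try:
    run_job()
finally:
    cur.execute("SELECT pg_advisory_unlock(42);")
    conn.close()
```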
I currently have a Python program which reads a local file (containing a pickled database object) and saves to that file when it's done. I'd like to branch out and use this program on multiple computers accessing the same database, but I don't want to worry about synchronizing the local database files with each other, so I've been considering cloud storage options. Does anyone know how I might store a single data file in the cloud and interact with it using Python?
I've considered something like Google Cloud Platform and similar services, but those seem to be more server-oriented whereas I just need to access a single file on my own machines. | 0 | 0 | 0 | 0 | false | 41,031,479 | 0 | 57 | 1 | 0 | 0 | 41,031,326 | You could install gsutil and the boto library and use that. | 1 | 0 | 1 | Access a Cloud-stored File using Python 3? | 1 | python,database,file,cloud,storage | 0 | 2016-12-08T03:33:00.000 |
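A minimal sketch of that suggestion: shell out to gsutil to pull the single file down and push it back when done (bucket and file names are placeholders):

```python
# Hedged sketch assuming gsutil is installed and authenticated.
import subprocess

subprocess.check_call(["gsutil", "cp", "gs://my-bucket/state.pkl", "state.pkl"])
# ... load, modify and re-pickle state.pkl here ...
subprocess.check_call(["gsutil", "cp", "state.pkl", "gs://my-bucket/state.pkl"])
```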
I have written microservices for auth, location, etc.
All of the microservices have different databases, and some data, location for example, exists in all of the databases for these services. When any of my projects needs a user's location, it first looks in the cache and, if it is not found, hits the database. So far so good. Now, when a location is changed in any of my different databases, I need to update it in the other databases as well as update my cache.
Currently I made a model (called Subscription) with a URL as its field; whenever a location is changed in any database, an object of this Subscription is created. A periodic task is running which checks the Subscription model; when it finds such objects, it hits the APIs of the other services, updates the location and updates the cache.
I am wondering if there is any better way to do this? | 3 | 2 | 0.379949 | 0 | false | 41,049,986 | 1 | 426 | 1 | 0 | 0 | 41,043,800 | I am wondering if there is any better way to do this?
"better" is entirely subjective. if it meets your needs, it's fine.
something to consider, though: don't store the same information in more than one place.
if you need an address, look it up from the service that provides address, every time.
this may be a performance hit, but it eliminates the problem of replicating the data everywhere.
another option would be a more proactive approach, as suggested in comments.
instead of creating a task list for changes, and doing that periodically, send a message across rabbitmq immediately when the change happens. let every service that needs to know, get a copy of the message and update it's own cache of info.
just remember, though. every time you have more than one copy of the information, you reduce the "correctness" of the system, as a whole. it will always be possible for the information found in one of your apps to be out of date, because it did not get an update from the official source. | 1 | 0 | 0 | microservices and multiple databases | 1 | python,django,rabbitmq,celery | 0 | 2016-12-08T16:06:00.000 |
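A sketch of the "publish immediately" idea with RabbitMQ via pika; the exchange name and message shape are assumptions, and a fanout exchange lets every interested service refresh its own cache:

```python
# Hedged sketch: publish a location change right after the owning service
# commits it; other services consume from the fanout exchange.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="location_updates", exchange_type="fanout")

def publish_location_change(user_id, new_location):
    payload = json.dumps({"user_id": user_id, "location": new_location})
    channel.basic_publish(exchange="location_updates", routing_key="", body=payload)

publish_location_change(42, "Berlin")
connection.close()
```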
I want to show some SQL queries inside a notebook. I neither need nor want them to run. I'd just like them to be well formatted. At the very least I want them to be indented properly with new lines, though keyword highlighting would be nice too. Does a solution for this exist already? | 16 | 1 | 0.066568 | 0 | false | 48,311,365 | 0 | 10,031 | 1 | 0 | 0 | 41,046,955 | I found that this fixed the issue I was having.
Using ``` sql as the fence language produced styled code in edit mode but not when the cell was run.
Using ``` mysql produced correct styling. | 1 | 0 | 1 | Formatting SQL Query Inside an IPython/Jupyter Notebook | 3 | sql,ipython-notebook,jupyter-notebook,code-formatting | 0 | 2016-12-08T19:03:00.000
I normally am able to run long queries using psycopg2 + SQL magic in a Notebook, but lately my Notebooks seem to lose their connection and stall. When I look at my Redshift logs, I can see that the queries completed successfully, but my Notebook never gets any data back and just keeps waiting.
What might be going on? | 1 | 0 | 1.2 | 0 | true | 41,090,841 | 0 | 206 | 1 | 0 | 0 | 41,051,422 | Adding this connection options to my connection string seems to have fixed my problem: keepalives=1&keepalives_idle=60 | 1 | 0 | 0 | iPython Notebook Unresponsive Over Long SQL Queries | 2 | ipython-notebook,amazon-redshift,psycopg2,jupyter-notebook | 0 | 2016-12-09T00:54:00.000 |
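The same options can be passed as psycopg2 connect arguments instead of a URL query string; a hedged sketch with placeholder connection details:

```python
# Sketch of the fix from the answer: TCP keepalive options so long-running
# Redshift queries are not dropped by idle-connection timeouts.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.example.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="user",
    password="pw",
    keepalives=1,
    keepalives_idle=60,
)
```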
I have a huge log file, ~4 GB. I have to parse the log file line by line; for each line I need to query the database, and also read other CSV files and join data from the different sources.
Execution time is nearly 2 days. Unfortunately, for reasons like a lost connection to the MySQL server during a query, I've lost all the parsing done so far and have to run the script again and again. During the last week I have executed this script several times and lost all the previous parsing each time. The script has to write the final result into a CSV file. I am looking for a solution to avoid this problem; what can I do?
Is there any way to keep the last status of the process somewhere so that I can re-execute from that point rather than running from the beginning each time? Or is there any other solution that avoids this interruption? | 0 | 0 | 0 | 0 | false | 41,058,203 | 0 | 43 | 1 | 0 | 0 | 41,056,503 | To solve the above problem, following the discussion above, I query the database once, keep the result as a dictionary, and then look up each key in the dictionary. It speeds up execution, and a lost connection no longer affects the processing. I would like to mention that execution time is reduced to 20 minutes. It's incredible!! Thanks to the dictionary.
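A hedged sketch that combines the answer's one-time dictionary load with the checkpoint file the question asks about; the table, column and log-format details are assumptions:

```python
# Query the lookup table once, keep it in a dict, stream the log file, and
# record progress so a crash does not force a full restart.
import csv
import pymysql

conn = pymysql.connect(host="localhost", user="user", password="pw", database="db")
with conn.cursor() as cur:
    cur.execute("SELECT user_id, user_name FROM users")  # assumed schema
    lookup = dict(cur.fetchall())   # one query instead of one per log line
conn.close()

start = 0
try:
    with open("checkpoint.txt") as f:
        start = int(f.read().strip() or 0)
except FileNotFoundError:
    pass

with open("huge.log") as log, open("result.csv", "a", newline="") as out:
    writer = csv.writer(out)
    for lineno, line in enumerate(log):
        if lineno < start:
            continue
        user_id = line.split()[0]   # assumed log format: user_id first
        writer.writerow([user_id, lookup.get(user_id, "unknown")])
        if lineno % 10000 == 0:
            with open("checkpoint.txt", "w") as f:
                f.write(str(lineno))
```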
I've tried everything recommended & still can't get openpyxl to work in Python 3. I've tried both the pip3 and "sudo apt-get install python3-openpyxl" installation methods & they seem to work fine, but when I open the python3 interpreter & type "import openpyxl", I still get the
ImportError: No module named 'openpyxl'.
It works fine in the python2 interpreter, but I just can't get it installed for python3 & need to write my programs in python3.
I'm using Ubuntu 16.04 LTS Xenial Xerus & Python version 3.5.2. I've tried uninstalling & reinstalling the python3-openpyxl module but still get the error. Any help out there?
Thanks | 2 | 0 | 0 | 0 | false | 41,833,582 | 0 | 798 | 1 | 0 | 0 | 41,077,133 | Since no one answered this, I will share my eventual work around. I backed up my files & installed Ubuntu 16.10 operating system from a bootable USB. Used Synaptic Package Manager to install openpyxl for Python3 & it is now working.
Not sure this is a bona fide solution, but it worked for my purposes. | 1 | 0 | 1 | Openpyxl ImportError in Python 3.5.2 | 2 | python-3.x,python-3.5,ubuntu-16.04,openpyxl | 0 | 2016-12-10T15:29:00.000 |
I get this error
"ProgrammingError at /admin/
relation "django_admin_log" does not exist
LINE 1: ..."."app_label", "django_content_type"."model" FROM "django_ad..."
The django_admin_log table does not exist in the database. Does anyone know how I can create it? I am not worried about deleting the data for my app.
When I try './manage.py sqlmigrate admin 0001' or './manage.py sqlmigrate admin 0001'
I get
"
BEGIN;
-- Create model LogEntry
CREATE TABLE "django_admin_log" ("id" serial NOT NULL PRIMARY KEY, "action_time" timestamp with time zone NOT NULL, "object_id" text NULL, "object_repr" varchar(200) NOT NULL, "action_flag" smallint NOT NULL CHECK ("action_flag" >= 0), "change_message" text NOT NULL, "content_type_id" integer NULL, "user_id" integer NOT NULL);
ALTER TABLE "django_admin_log" ADD CONSTRAINT "django_admin_content_type_id_c4bce8eb_fk_django_content_type_id" FOREIGN KEY ("content_type_id") REFERENCES "django_content_type" ("id") DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE "django_admin_log" ADD CONSTRAINT "django_admin_log_user_id_c564eba6_fk_auth_user_id" FOREIGN KEY ("user_id") REFERENCES "auth_user" ("id") DEFERRABLE INITIALLY DEFERRED;
CREATE INDEX "django_admin_log_417f1b1c" ON "django_admin_log" ("content_type_id");
CREATE INDEX "django_admin_log_e8701ad4" ON "django_admin_log" ("user_id");
COMMIT;"
But I still get the same error? I use PostgreSQL, if anyone cares. | 1 | 0 | 0 | 0 | false | 47,960,936 | 1 | 2,274 | 2 | 0 | 0 | 41,094,926 | After ./manage.py sqlmigrate admin 0001, please run python manage.py migrate. | 1 | 0 | 0 | I accidentally deleted django_admin_log and now i can not use the django admin | 3 | python,django,django-admin | 0 | 2016-12-12T05:56:00.000
I get this error
"ProgrammingError at /admin/
relation "django_admin_log" does not exist
LINE 1: ..."."app_label", "django_content_type"."model" FROM "django_ad..."
The django_admin_log table does not exist in the database. Does anyone know how I can create it? I am not worried about deleting the data for my app.
When I try './manage.py sqlmigrate admin 0001' or './manage.py sqlmigrate admin 0001'
I get
"
BEGIN;
-- Create model LogEntry
CREATE TABLE "django_admin_log" ("id" serial NOT NULL PRIMARY KEY, "action_time" timestamp with time zone NOT NULL, "object_id" text NULL, "object_repr" varchar(200) NOT NULL, "action_flag" smallint NOT NULL CHECK ("action_flag" >= 0), "change_message" text NOT NULL, "content_type_id" integer NULL, "user_id" integer NOT NULL);
ALTER TABLE "django_admin_log" ADD CONSTRAINT "django_admin_content_type_id_c4bce8eb_fk_django_content_type_id" FOREIGN KEY ("content_type_id") REFERENCES "django_content_type" ("id") DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE "django_admin_log" ADD CONSTRAINT "django_admin_log_user_id_c564eba6_fk_auth_user_id" FOREIGN KEY ("user_id") REFERENCES "auth_user" ("id") DEFERRABLE INITIALLY DEFERRED;
CREATE INDEX "django_admin_log_417f1b1c" ON "django_admin_log" ("content_type_id");
CREATE INDEX "django_admin_log_e8701ad4" ON "django_admin_log" ("user_id");
COMMIT;"
But I still get the same error? I use PostgreSQL, if anyone cares. | 1 | 2 | 0.132549 | 0 | false | 60,692,503 | 1 | 2,274 | 2 | 0 | 0 | 41,094,926 | I experienced the same issue. The best way is to copy the CREATE TABLE output, log in to your database with ./manage.py dbshell, and paste the content there without the last line (COMMIT). It will solve the problem by manually creating the table for you. | 1 | 0 | 0 | I accidentally deleted django_admin_log and now i can not use the django admin | 3 | python,django,django-admin | 0 | 2016-12-12T05:56:00.000
I am using the python mysql connector in a little script. The problem I'm facing is: when executing a select statement that returns 0 rows, I'm unable to close the cursor. When closing the cursor, "mysql.connector.errors.InternalError: Unread result found" is triggered. However, calling fetchall() results in an "mysql.connector.errors.InterfaceError: No result set to fetch from." error.
So basically, I'm unable to close the cursor because of some unread data and I'm unable to read any data because there is no data to read. | 1 | 1 | 0.066568 | 0 | false | 47,078,137 | 0 | 2,753 | 1 | 0 | 0 | 41,101,246 | It's been a while since this question was asked, but I'll post the solution I found anyway.
Using the official MySQL connector for Python, the rows are not fetched from the server until requested. As a result, if there are remaining rows, trying to close the connection throws an exception. One option is to use a buffered cursor. A buffered cursor reads all rows from the server, and the cursor can be closed at any point. However, this method comes with a memory cost.
There is a hard limit of one open cursor per connection for the mysql connector. Trying to open another cursor raises an exception. | 1 | 0 | 0 | How to properly handle empty data set with python mysql connector? | 3 | python,mysql | 0 | 2016-12-12T12:44:00.000
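A sketch of the buffered-cursor option with mysql.connector (connection settings are placeholders):

```python
# With buffered=True the rows are fetched eagerly, so an empty result set can
# be closed without the "Unread result found" error described above.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="user",
                               password="pw", database="db")
cur = conn.cursor(buffered=True)
cur.execute("SELECT id FROM table1 WHERE 1 = 0")   # returns zero rows
rows = cur.fetchall()                              # [] — safe to close now
cur.close()
conn.close()
```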
How do I get current_date - 1 day in Spark SQL, the same as cur_date() - 1 in MySQL? | 21 | 1 | 0.039979 | 0 | false | 53,483,109 | 0 | 78,284 | 1 | 0 | 0 | 41,114,875 | Yes, the date_sub() function is the right one for the question; however, there's an error in the selected answer:
Return type: timestamp
The return type should be date instead; the date_sub() function will trim any hh:mm:ss part of the timestamp and return only a date. | 1 | 0 | 0 | How to get today -"1 day" date in sparksql? | 5 | java,python,scala,apache-spark,apache-spark-sql | 0 | 2016-12-13T06:28:00.000
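A hedged PySpark sketch of the same thing:

```python
# date_sub(current_date(), 1) gives "yesterday" as a DATE value.
from pyspark.sql import SparkSession
from pyspark.sql.functions import current_date, date_sub

spark = SparkSession.builder.getOrCreate()
spark.range(1).select(date_sub(current_date(), 1).alias("yesterday")).show()

# Equivalent Spark SQL:
spark.sql("SELECT date_sub(current_date(), 1) AS yesterday").show()
```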
I have an issue which may have two possible approaches to getting a solution, im open to either.
I use a 3rd party application to download data daily into pandas dataframes, which I then write into a local postgres database. The dataframes are large, but since the database is local I simply use df.to_sql and it completes in a matter of seconds.
The problem is that now I have moved the database to a remote linux server (VPS). The same to_sql now takes over an hour. I have tried various values for chunksize but that doesn't help much.
This wouldn't be an issue if I could simply install the 3rd party app on that remote server, but the server OS does not use a GUI. Is there a way to run that 3rd party app on the server even though it requires a GUI? (note: it is a Windows app so I use wine to run it on my local linux machine and would presumably need to do that on the server as well).
If there is no way to run that app which requires a GUI on the VPS, then how should I go about writing these dataframes to the VPS from my local machine in a way that doesn't take over an hour? I'm hoping there's some way to write the dataframes in smaller pieces or using something other than to_sql more suited to this.
A really clunky, inelegant solution would be to write the dataframes to csv files, upload them to the server using ftp, then run a separate python script on the server to save the data to the db. I guess that would work but it's certainly not ideal. | 0 | 0 | 1.2 | 0 | true | 41,165,424 | 0 | 107 | 1 | 1 | 0 | 41,117,150 | After investigating countless possible solutions:
Creating a tunnel to forward a port from my local machine to the server so it can access the 3rd party app.
modifying all my python code to manually insert the data from my local machine to the server using psycopg2 instead of pandas to_sql
Creating a docker container for the 3rd party app that can be run on the server
and several other dead ends or convoluted less than ideal solutions
In the end, the solution was to simply install the 3rd party app on the server using wine but then ssh into it using the -X flag. I can therefore access the gui on my local machine while it is running on the server. | 1 | 0 | 0 | Writing data to remote VPS database | 1 | python,postgresql,pandas,vps | 0 | 2016-12-13T09:05:00.000 |
I used to use Varchar for text strings of dynamic length. Recently I saw people also use String with a length to define it.
What is the difference between them? Which one is better to use? | 8 | 10 | 1 | 0 | false | 41,136,521 | 0 | 29,995 | 1 | 0 | 0 | 41,136,136 | From what I know,
Use varchar if you want to have a length constraint
Use string if you don't want to restrict the length of the text
The length field is usually required when the String type is used within a CREATE TABLE statement, as VARCHAR requires a length on most databases.
Parameters:
length - optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.
SQLAlchemy Docs | 1 | 0 | 1 | What is the difference between Varchar and String in sqlalchemy's data type? | 1 | python,sqlalchemy,sqldatatypes | 0 | 2016-12-14T06:27:00.000 |
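A hedged sketch with SQLAlchemy's declarative API showing the difference in practice (table and column names are illustrative):

```python
# String(50) emits VARCHAR(50); a bare String may fail at CREATE TABLE time on
# backends that require a length (e.g. MySQL), while SQLite accepts it.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))   # explicit length constraint -> VARCHAR(50)
    bio = Column(String)        # no length; fine here, errors on MySQL DDL

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
```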
So the issue is that apparently Django uses the sqlite3 module that is included with Python. I have sqlite3 on my computer and it works fine on its own. I have tried many things to fix this and have not found a solution yet.
Please let me know how I can fix this issue so that I can use Django on my computer.
:~$ python
Python 3.5.2 (default, Nov 6 2016, 14:10:16)
[GCC 6.2.0 20161005] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/sqlite3/__init__.py", line 23, in <module>
from sqlite3.dbapi2 import *
File "/usr/local/lib/python3.5/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ImportError: No module named '_sqlite3'
>>> exit() | 0 | 1 | 0.197375 | 0 | false | 41,196,939 | 1 | 197 | 1 | 0 | 0 | 41,177,692 | I figured out that this error was caused by me changing my python path to 3.5 from the default of 2.7. | 1 | 0 | 0 | How do I configure the sqlite3 module to work with Django 1.10? | 1 | linux,django,python-3.x,ubuntu,sqlite | 0 | 2016-12-16T05:17:00.000 |
I am using openpyxl to write to a workbook, but that workbook needs to be closed in order to edit it. Is there a way to write to an open Excel sheet? I want to have a button that runs a Python script from the command line and fills in the cells.
The current process that I have built is using VBA to close the file and then Python writes it and opens it again. But that is inefficient. That is why I need a way to write to open files. | 9 | 2 | 0.07983 | 0 | false | 41,207,062 | 0 | 18,268 | 2 | 0 | 0 | 41,191,394 | No this is not possible because Excel files do not support concurrent access. | 1 | 0 | 0 | How to write to an open Excel file using Python? | 5 | python,python-3.x,openpyxl | 0 | 2016-12-16T19:39:00.000 |
I am using openpyxl to write to a workbook, but that workbook needs to be closed in order to edit it. Is there a way to write to an open Excel sheet? I want to have a button that runs a Python script from the command line and fills in the cells.
The current process that I have built is using VBA to close the file and then Python writes it and opens it again. But that is inefficient. That is why I need a way to write to open files. | 9 | 2 | 0.07983 | 0 | false | 41,194,215 | 0 | 18,268 | 2 | 0 | 0 | 41,191,394 | Generally, two different processes shouldn't be writing to the same file, because it will cause synchronization issues.
A better way would be to close the existing file in the parent process (i.e. the VBA code) and pass the location of the workbook to the Python script.
The Python script will open it, write the contents into the cells, and exit. | 1 | 0 | 0 | How to write to an open Excel file using Python? | 5 | python,python-3.x,openpyxl | 0 | 2016-12-16T19:39:00.000
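A sketch of the flow the second answer suggests: VBA closes the workbook and launches this script with the path, the script writes its cells and exits (path, sheet and cell values are placeholders):

```python
# Hedged sketch: the workbook path is passed in, e.g. by a VBA Shell(...) call.
import sys
from openpyxl import load_workbook

path = sys.argv[1]
wb = load_workbook(path)
ws = wb.active
ws["A1"] = "filled in from Python"
wb.save(path)
```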
The application should read data from a serial port every 15 minutes (using the Modbus Protocol) and put them into a database. The data can then be viewed and manipulated in a web interface. I'm using Windows (no server) with a RAID system to prevent data loss.
My current setup looks like this:
using pyserial and minimalmodbus for reading the data and putting them into a MySQL database
setting a cron job to run the script every 15 minutes (alternatives?)
using Django in order to have a neat interface where one can view stats and download the data as a *.csv file
My questions are:
Does this setup make sense concerning reliability; do you have any improvements?
How can I detect if the system has experienced a shutdown and I lost some data? | 0 | 0 | 0 | 0 | false | 41,216,333 | 1 | 120 | 1 | 0 | 0 | 41,208,473 | For detecting a system shutdown, compare the timestamp of the last reading taken to the current reading's timestamp. If they differ by more than 15 minutes, then something went wrong during the operation. | 1 | 0 | 0 | Python reliable project setup: write data from a serial port to a database in a certain time interval | 1 | python,django,cron,serial-port | 0 | 2016-12-18T12:48:00.000
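A hedged sketch of that timestamp check against the MySQL table; table and column names are assumptions:

```python
# Compare the newest stored reading's timestamp with "now"; a gap well over the
# 15-minute interval suggests the logger was down and readings were lost.
from datetime import datetime, timedelta
import pymysql

conn = pymysql.connect(host="localhost", user="user", password="pw", database="sensors")
with conn.cursor() as cur:
    cur.execute("SELECT MAX(read_at) FROM readings")
    last = cur.fetchone()[0]
conn.close()

if last is None or datetime.now() - last > timedelta(minutes=20):
    print("Gap detected: readings may have been lost during a shutdown.")
```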
I want to change MySQL's sql_mode to 'NO_UNSIGNED_SUBTRACTION' when using PySpark. Is there any way? | 0 | 0 | 1.2 | 0 | true | 41,235,443 | 0 | 156 | 1 | 0 | 0 | 41,235,051 | I found a solution:
add &sessionVariables=sql_mode='NO_UNSIGNED_SUBTRACTION' to jdbc url. | 1 | 0 | 0 | Is there any way to set mysql's sql_mode in pyspark? | 1 | python,mysql,apache-spark,pyspark | 0 | 2016-12-20T04:46:00.000 |
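A sketch of that fix in PySpark (host, database, table and credentials are placeholders, and the MySQL JDBC driver is assumed to be on the classpath):

```python
# Append sessionVariables to the JDBC URL used by spark.read.jdbc.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

url = ("jdbc:mysql://dbhost:3306/mydb"
       "?sessionVariables=sql_mode='NO_UNSIGNED_SUBTRACTION'")

df = spark.read.jdbc(url=url, table="table1",
                     properties={"user": "user", "password": "pw"})
```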
I want to generate Excel spreadsheets with Python. The first few tabs are exactly the same and all refer to the last sheet, so how can I insert that last sheet with openpyxl? Because the first few tabs are too complex, load_workbook always fails; is there any other way to insert tabs without loading? | 0 | 0 | 0 | 0 | false | 43,180,932 | 0 | 198 | 1 | 0 | 0 | 41,252,907 | As far as I know, openpyxl does not allow you to access only one cell, or a limited number of cells for that matter. In order to access any information in a given worksheet, openpyxl will create the whole workbook in memory. This is the reason why you will be unable to add a sheet without opening the entire document in memory and overwriting it at the end. | 1 | 0 | 0 | How to use openpyxl to insert one sheet to a template? | 1 | python,excel,openpyxl | 0 | 2016-12-20T23:33:00.000
Is closing a cursor needed when the shortcut conn.execute is used in place of an explicitly named cursor in SQLite? If so, how is this done? Also, is closing a cursor only needed for SELECT, when a recordset is returned, or is it also needed for UPDATE, etc.? | 1 | 0 | 0 | 0 | false | 41,485,462 | 0 | 127 | 1 | 0 | 0 | 41,263,383 | The close() method allows you to close a cursor object before it is garbage collected.
The connection's execute() method is exactly the same as conn.cursor().execute(...), so the return value is the only reference to the temporary cursor object. When you just ignore it, CPython will garbage-collect the object immediately (other Python implementations might work differently). | 1 | 0 | 0 | How does closing SQLite cursor apply when conn.execute is used in place of named cursor | 1 | python,sqlite,cursor | 0 | 2016-12-21T12:59:00.000 |
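A small sketch of the two styles the answer compares, using the standard sqlite3 module:

```python
# The conn.execute shortcut returns a temporary cursor whose only reference is
# the return value; an explicit cursor is closed by hand.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")

rows = conn.execute("SELECT x FROM t").fetchall()   # temp cursor, GC'd after this

cur = conn.cursor()
cur.execute("INSERT INTO t (x) VALUES (?)", (1,))
cur.close()
conn.commit()
conn.close()
```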
I am wondering if there is any way to create a new model table in SQLite with Django 1.10 (like writing general Python code) without having to specify it in models.py. The situation is that when a new member registers on my website, I will create a new model table for them to hold their data. Specifically:
step 1: John Doe registers on my site
step 2: The system creates a model table named db_johnDoe (with the same set of fields as the others)
step 3: The system can insert and edit data in db_johnDoe according to John's behavior on the website.
Any idea? Thanks a lot. | 0 | 1 | 0.099668 | 0 | false | 41,338,383 | 1 | 292 | 1 | 0 | 0 | 41,338,208 | I think it's not a good idea to create a table for each user. This may cause bad performance and low security. Why don't you create a table named userInfo and put user.userID as a foreign key? | 1 | 0 | 0 | Django 1.10 Create Model Tables Automatically | 2 | database,sqlite,python-3.x,django-models | 0 | 2016-12-27T02:16:00.000 |
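A hedged sketch of the answer's suggestion as a Django model (field names are illustrative and the class belongs in an app's models.py):

```python
# One shared table keyed by the user, instead of one table per user.
from django.conf import settings
from django.db import models

class UserInfo(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    payload = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)
```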
We're using SQLAlchemy and Alembic (along with Flask-SQLAlchemy and Flask-Migrate). How to check if there are pending migrations?
I tried to check both Alembic's and Flask-Migrate's documentation but failed to find the answer. | 8 | 7 | 1 | 0 | false | 41,357,149 | 1 | 4,164 | 1 | 0 | 0 | 41,343,316 | You can figure out if your project is at the latest migration with the current subcommand:
Example output when you are at the latest migration:
(venv) $ python app.py db current
f4b4aa1dedfd (head)
The key thing is the (head) that appears after the revision number. That tells you that this is the most recent migration.
Here is how things change after I add a new migration, but before I upgrade the database:
(venv) $ python app.py db current
f4b4aa1dedfd
And after I run db upgrade I get:
(venv) $ python app.py db current
f3cd9734f9a3 (head)
Hope this helps! | 1 | 0 | 0 | How to check if there are pending migrations when using SQLAlchemy/Alembic? | 3 | python,sqlalchemy,flask-sqlalchemy,alembic,flask-migrate | 0 | 2016-12-27T10:24:00.000 |
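For completeness, the same check can also be done programmatically with Alembic's API; this is a hedged sketch (not from the answer) with a placeholder migrations directory and database URL:

```python
# Compare the database's current revision with the newest revision on disk.
from alembic.config import Config
from alembic.script import ScriptDirectory
from alembic.migration import MigrationContext
from sqlalchemy import create_engine

config = Config()
config.set_main_option("script_location", "migrations")  # Flask-Migrate default
head = ScriptDirectory.from_config(config).get_current_head()

engine = create_engine("postgresql://user:pw@localhost/mydb")
with engine.connect() as conn:
    current = MigrationContext.configure(conn).get_current_revision()

print("pending migrations" if current != head else "up to date")
```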
I'm trying to save a snapshot of several tables programmatically in Python, instead of the whole DB.
I couldn't find the API (in boto/boto3) to do that.
Is it possible to do? | 0 | 1 | 0.197375 | 0 | false | 41,363,444 | 1 | 540 | 1 | 0 | 0 | 41,361,091 | This is not possible using the AWS RDS snapshot mechanism, and it isn't possible using the AWS SDK. It is possible using the API for the specific database engine you are using. You would need to specify what database you are using for further help. | 1 | 0 | 0 | AWS RDS save snapshot of selected tables | 1 | python,amazon-web-services,snapshot,boto3,rds | 0 | 2016-12-28T11:25:00.000 |
I have been using the mysql.connector module with Python 2.7 and testing locally using XAMPP. Whenever I upload my script to the server, I am getting an import error for the mysql.connector module. I am assuming this is because, unlike my local machine, I have not installed the mysql.connector module on the server.
My question is: can I somehow use the mysql.connector module on the server or is this something only for local development? I have looked into it, and apparently do not have SSH access for my server, only for the database. As well, if I cannot use the mysql.connector module, how do I connect to my MySQL database from my Python script on the server? | 0 | 0 | 0 | 0 | false | 41,454,481 | 0 | 95 | 1 | 0 | 0 | 41,454,355 | You can use mysql.connector on the server. However, you will have to install it first. Do you have root (admin) access? If no, you might need help from the server admin. | 1 | 0 | 0 | Can you use Python mysql.connector on actual Server? | 2 | python,mysql | 0 | 2017-01-04T00:17:00.000 |
I am using MongoDB 3.4 and Python 2.7. I have retrieved a document from the database and I can print it and the structure indicates it is a Python dictionary. I would like to write out the content of this document as a JSON file. When I create a simple dictionary like d = {"one": 1, "two": 2} I can then write it to a file using json.dump(d, open("text.txt", 'w'))
However, if I replace d in the above code with the document I retrieve from MongoDB I get the error
ObjectId is not JSON serializable
Suggestions? | 5 | 1 | 0.099668 | 0 | false | 41,470,959 | 0 | 5,356 | 1 | 0 | 0 | 41,465,836 | The issue is that “_id” is actually an ObjectId and not natively serializable. Replacing the _id with a string, as in mydocument['_id'] = '123', fixed the issue. | 1 | 0 | 0 | Create JSON file from MongoDB document using Python | 2 | json,mongodb,python-2.7 | 0 | 2017-01-04T14:10:00.000
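A hedged sketch of two common fixes: stringify the ObjectId yourself, or use pymongo's bson.json_util encoder (the sample document stands in for a real find_one() result):

```python
import json
from bson import ObjectId, json_util

doc = {"_id": ObjectId(), "name": "example"}   # stands in for a find_one() result

doc_copy = dict(doc, _id=str(doc["_id"]))      # option 1: stringify the ObjectId
with open("doc.json", "w") as f:
    json.dump(doc_copy, f)

with open("doc_bson.json", "w") as f:          # option 2: json_util handles ObjectId
    f.write(json_util.dumps(doc))
```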
Having some odd trouble scheduling a task for a python script. Specifically this script and the problem is intermittent, which made me hesitant to pose the question because I'm very confused. I have other scheduled scripts that run fine. This one is the only one modifying a SQLite database though.
I call the script daily, I've done this several ways with the same result. I finally settled on Action "start a program", Program/script: "python" (it is in my path, but i've also directly called py.exe and pyw.exe, with the same result). Add arguments: "scriptname.py". Start in "location of script and database file" which the account I'm using in the scheduler has full read/write/execute access to. And I've instructed this to work whether or not the user is logged in.
I use this same operation for several other scripts and they are fine, this one just doesn't work sometimes. It always runs, but every few days it exits with code 2147942401 instead of 0. On these days the database is not updated, so I suppose it had trouble writing? I'm not sure. It seems this error code in windows is associated with invalid function, but I can manually run the script and everything is fine. And half the days (not exactly half, seemingly randomly), it doesn't work. This never happened until about 3 weeks ago. Nothing changed that I'm aware of, everything has been running fine for months and then bam, exit code 2147942401. It did it several days in a row, and then no problems for a few days. Never a problem running task (or script) manually. It is set to run with highest privileges.
Anyone seen anything like this? | 2 | 3 | 0.53705 | 0 | false | 41,575,621 | 0 | 7,724 | 1 | 0 | 0 | 41,471,776 | Turns out it was my script breaking. This is the error code (oddly enough there's not much documentation) you get when your python program ends with code -1 (exits without finishing properly or has some unhandled exception). It was intermittent because I was checking a web page and sometimes that web server just didn't respond for any number of reason. Leaving this here for posterity. If you get this error code in task scheduler, write some logging and error handling into your script because it may be a weird problem you didn't think of. | 1 | 0 | 0 | Windows Task Scheduler, python script, code 2147942401 | 1 | python,sqlite,task,scheduler,exit | 0 | 2017-01-04T19:26:00.000 |
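A sketch of the "add logging and error handling" advice from that answer: wrap the scheduled work so failures land in a log file instead of only in a Task Scheduler exit code (the file name and run() body are placeholders):

```python
import logging
import sys

logging.basicConfig(filename="scheduled_task.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def run():
    pass  # hypothetical placeholder: fetch the page, update the SQLite DB, etc.

if __name__ == "__main__":
    try:
        run()
        logging.info("run finished OK")
    except Exception:
        logging.exception("run failed")   # full traceback goes to the log file
        sys.exit(1)
```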
Looking for an Excel library for Django and Python with specific requirements.
There looks to be a number of libraries for Django and Python that enable the user to upload an Excel document into the database.
What I am wondering is whether there is a library that allows you to create an Excel document and export it with conditional formatting, live formulas, multiple tabs, and VLOOKUPs.
The company I work for produces Excel reports for our analysts to review that require these types of things. I am researching this as we are exploring alternatives to Access, from which it is pretty easy to control Excel. | 0 | 1 | 0.099668 | 0 | false | 41,498,821 | 1 | 199 | 1 | 0 | 0 | 41,498,803 | I think a combination of Pandas and openpyxl will do the trick! | 1 | 0 | 0 | Django/Python Library for importing and producing Excel documents? | 2 | python,django,excel | 0 | 2017-01-06T04:11:00.000
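A hedged sketch of what that combination can cover with openpyxl alone: an extra tab, a live VLOOKUP formula and a conditional-formatting rule (sheet names and ranges are illustrative):

```python
from openpyxl import Workbook
from openpyxl.styles import PatternFill
from openpyxl.formatting.rule import CellIsRule

wb = Workbook()
data = wb.active
data.title = "Data"
data.append(["key", "value"])
data.append(["a", 120])

report = wb.create_sheet("Report")                 # extra tab
report["A1"] = "a"
report["B1"] = '=VLOOKUP(A1,Data!A:B,2,FALSE)'     # live formula
red = PatternFill(start_color="FFC7CE", end_color="FFC7CE", fill_type="solid")
report.conditional_formatting.add(
    "B1:B10", CellIsRule(operator="greaterThan", formula=["100"], fill=red))

wb.save("report.xlsx")
```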
I currently have a database setup where there are 5 columns set as the composite primary key which could uniquely identify a row. Should I still have an ID column to identify each row? It seems redundant, although I am not sure of what is standard.
I am using SQLAlchemy. I noticed that when I had the 5 columns as the composite primary key, the table was significantly slower inserting the data from a CSV file, as compared to if I had an ID column. It was half the speed with the column (not sure if this is relevant).
To be clear: My question is, Should I have an ID column alongside the composite primary key, even though the ID column would be redundant? | 1 | -1 | -0.197375 | 0 | false | 41,530,033 | 0 | 199 | 1 | 0 | 0 | 41,529,910 | Yes, you should always have a separate rowid (either int based one-up or UUID). Especially when you get into other aspects of mysql or database DevOps, having that ID field is a lifesaver (e.g., replication or galera clustering). It also makes working with frameworks like django much easier. | 1 | 0 | 0 | Can I composite primary key be used in place of an ID primary key? | 1 | python,mysql,sqlalchemy | 0 | 2017-01-08T05:50:00.000 |
I am new to Python.
I use PuTTY to manage some servers. I want to use Python to create an Excel file on each server; for that, I think I can use commands like ssh ip "python abc.py" to create the file, and it is possible to write a bash script to manage all the servers. This is the trouble I've met:
The servers can't access the internet.
And it is not allowed to use any third-party libraries. On a freshly installed Linux (Red Hat 6.5), is there any library in Python that can be used to create Excel files immediately?
Please help me, thanks. | 4 | 0 | 0 | 0 | false | 60,421,964 | 0 | 15,636 | 1 | 0 | 0 | 41,550,060 | I am not sure if this is what the OP was looking for,but if you have to manipulate data in python without installing any modules (just standard library), you can try the sqlite3 module, which of course allows you to interact with sqlite files (a Relational Database Management System).
These databases are conceptually similar to an Excel file. If an excel file is basically a collection of sheets, with each sheet being a matrix where you can put data, sqlite databases are the same (but each "matrix" is called a table instead).
This format is scripting friendly, as you can read and write data using SQL, but it does not follow the client-server model other DBMS are based on. The whole database is contained in a single file that you can email to a colleague, and you can also install a GUI that gives you a spreadsheet-like interface to make it more user-friendly (DB Browser for SQLite is available for Windows, Linux and Mac).
This allows you to include SQL code in your python scripts, which adds a lot of data processing capabilities, and it is an excellent way to achieve data persistence for simple programs. | 1 | 0 | 0 | how to create a excel file only with python standard library? | 5 | python,excel | 0 | 2017-01-09T14:21:00.000 |
I am developing a Cloud based data analysis tool, and I am using Django(1.10) for that.
I have to add columns to the existing tables, create new tables, change data-type of columns(part of data-cleaning activity) at the run time and can't figure out a way to update/reflect those changes, in run time, in the Django model, because those changes will be required in further analysis process.
I have looked into 'inspectdb' and 'syncdb', but all of these options would require taking the portal offline and then making those changes, which I don't want.
Please can you suggest a solution or a work-around of how to achieve this.
Also, is there a way in which I can select what database I want to work from the list of databases on my MySQL server, after running Django. | 0 | 0 | 0 | 0 | false | 41,592,978 | 1 | 43 | 1 | 0 | 0 | 41,591,079 | Django's ORM might not be the right tool for you if you need to change your schema (or db) online - the schema is defined in python modules and loaded once when Django's web server starts.
You can still use Django's templates, forms and other libraries and write your own custom DB access layer that manipulates a DB dynamically using python. | 1 | 0 | 0 | Changing Database in run time and making the changes reflect in Django in run time | 1 | python,django,django-1.10 | 0 | 2017-01-11T12:32:00.000 |
My main problem is that I would like to check if someone with the same SSN has multiple accounts with us. Currently all personally identifiable info is encrypted and decryption takes a non-trivial amount of time.
My initial idea was to add a ssn column to the user column in the database. Then I could simply do a query where I get all users with the ssn or user A.
I don't want to store the ssn in plaintext in the database. I was thinking of just salting and hashing it somehow.
My main question is, is this secure (or how secure is it)? What is there a simple way to salt and hash or encrypt and ssn using python?
Edit: The SSN's do not need to be displayed.
This is using a MySQL database. | 2 | -1 | -0.066568 | 0 | false | 41,599,436 | 0 | 2,477 | 2 | 0 | 0 | 41,599,285 | Your question doesn't make it clear if you need to display those SSNs. I'm going to assume you do not. Store the SSN in a SHA2 hash. You can then do a SQL query to search against those hashed values. Store only the last 4 digits encrypted for display. | 1 | 0 | 0 | How to securely and efficiently store SSN in a database? | 3 | python,sql,security,hash | 0 | 2017-01-11T19:35:00.000 |
My main problem is that I would like to check if someone with the same SSN has multiple accounts with us. Currently all personally identifiable info is encrypted and decryption takes a non-trivial amount of time.
My initial idea was to add a ssn column to the user column in the database. Then I could simply do a query where I get all users with the ssn or user A.
I don't want to store the ssn in plaintext in the database. I was thinking of just salting and hashing it somehow.
My main question is, is this secure (or how secure is it)? What is there a simple way to salt and hash or encrypt and ssn using python?
Edit: The SSN's do not need to be displayed.
This is using a MySQL database. | 2 | 4 | 1.2 | 0 | true | 41,600,634 | 0 | 2,477 | 2 | 0 | 0 | 41,599,285 | Do not encrypt SSNs, when the attacker gets the DB he will also get the encryption key.
Just using a hash function is not sufficient and just adding a salt does little to improve the security.
Basically handle the SSNs inthe same mannor as passwords.
Instead iIterate over an HMAC with a random salt for about a 100ms duration and save the salt with the hash. Use functions such as PBKDF2 (aka Rfc2898DeriveBytes), password_hash/password_verify, Bcrypt and similar functions. The point is to make the attacker spend a lot of time finding passwords by brute force. Protecting your users is important, please use secure password methods. | 1 | 0 | 0 | How to securely and efficiently store SSN in a database? | 3 | python,sql,security,hash | 0 | 2017-01-11T19:35:00.000 |
I want to write a lot of data to a lmdb data base with several named (sub) data bases. I run into the following problem:
To write to one named data base, I need to open a transaction for this named data base.
This implies: To write to another named data base, I need to open a different transaction.
Two write transaction inside the same main data base cannot exist at the same time.
This implies: I need to commit and close a transaction each time I want to switch from writing to one named data base to writing to another named data base.
Creating and committing write transactions is a really slow operation.
I rather would like to keep one long-running write transaction for all write operations and commit it once --- when all the work is done.
Is this possible with lmdb (if yes, at which point did I err in my analysis)? | 0 | 1 | 0.099668 | 0 | false | 42,486,747 | 0 | 1,421 | 1 | 0 | 0 | 41,616,426 | You can open as many named databases within the same write transaction as you like.
So:
Open write transaction
Open named databases as required and write to them
Commit your transaction
As long as you take into account that you can only ever have one write-transaction at a time (read-only transactions are no problem), and that your other transactions will only see the result of your write-transaction once you commit, you can of course have one long-running write transaction. | 1 | 0 | 0 | lmdb: Can I access different named databases in the same transaction? | 2 | python,python-3.x,lmdb | 0 | 2017-01-12T15:03:00.000 |
I wrote a python script to download some files from an s3 bucket. The script works just fine on one machine, but breaks on another.
Here is the exception I get: botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden.
I am pretty sure it's related to some system configurations, or something related to the registry, but don't know what exactly. Both machines are running Windows 7 and python 3.5.
Any suggestions. | 6 | 8 | 1 | 0 | false | 41,682,857 | 0 | 8,191 | 1 | 0 | 1 | 41,646,514 | The issue was actually being caused by the system time being incorrect. I fixed the system time and the problem is fixed. | 1 | 0 | 0 | Trying to access a s3 bucket using boto3, but getting 403 | 2 | python,windows,amazon-web-services,amazon-s3,boto3 | 0 | 2017-01-14T03:29:00.000 |
I am using Python openpyxl package to read values from Excel cells. Cells with formulas always return formula strings instead of the calculated values. I'd rather avoid using 'data_only=True' when loading the workbook as it wipes out all the formulas and I do need to retain some of them. Seemingly a problem not so difficult but turns out to be quite challenging. Appreciate it very much if anyone can shed some lights on this. Thanks a lot! | 0 | 0 | 0 | 0 | false | 41,767,777 | 0 | 1,063 | 1 | 0 | 0 | 41,708,061 | Sorted it out myself and like to share with you guys. xlwings actually does the job while openpyxl/xlrd seem to have failed around this issue. | 1 | 0 | 0 | Python to read Excel cell with the value calculated by the formulas but not the formula strings themselves | 1 | excel,python-2.7 | 0 | 2017-01-17T22:20:00.000 |
I am trying to install MySql Server on Win 8 and as I go through the process, the installer requires Python 3.4 so I installed it manually with the given link of the installer.
I have installed Python 3.4.6 as it is the latest version but still it is not recognized and the installer returns an error message saying the "The requirement is still failing".
Should I install Python 3.4 instead of 3.4.6? | 0 | 0 | 0 | 0 | false | 41,790,010 | 0 | 131 | 1 | 0 | 0 | 41,789,489 | I have installed Phyton 3.4 and uninstalled the 3.4.6 version and it worked, I don't know why but it worked | 1 | 0 | 0 | MySql Server installation dont recognize python 3.4.6 | 1 | php,python,mysql | 0 | 2017-01-22T09:51:00.000 |
I must load the Oracle "instant client" libraries as part of my AWS lambda python deployment zip file.
Problem is, many of the essential libraries (libclntsh.so.12.1 is 57MB libociei.so is 105MB) and Amazon only allows deployment zip files under 50MB.
I tried: my script cannot connect to Oracle using cx_Oracle without that library in my local ORACLE_HOME and LD_LIBRARY_PATH.
How can I get that library into Lambda considering their zip file size limitation? Linux zip just doesn't compress them enough. | 1 | 3 | 1.2 | 0 | true | 41,837,986 | 0 | 846 | 1 | 1 | 0 | 41,833,790 | If you can limit yourself to English error messages and a restricted set of character sets (which does include Unicode), then you can use the "Basic Lite" version of the instant client. For Linux x64 that is only 31 MB as a zip file. | 1 | 0 | 0 | AWS python Lambda script that can access Oracle: Driver too big for 50MB limit | 1 | python,oracle,amazon-web-services,lambda,cx-oracle | 1 | 2017-01-24T16:48:00.000 |
I have some employee data in which there are 3 different roles. Let's say CEO, Manager and Developer.
CEO can access the whole graph, managers can only access data of some people (their team) and developers can not access employee data.
How should I assign subgraph access to user roles and implement this using Python?
There are good solutions and comprehensive libraries and documentations but only in Java! | 2 | 1 | 1.2 | 0 | true | 42,622,083 | 0 | 304 | 1 | 0 | 1 | 41,850,411 | At the moment it is not possible to write procedures for custom roles to implement subgraph access control using Python. It is only possible in Java.
A workaround might be to indirektly implement it using phyton by adding properties to nodes and relationship storing the security levels for these nodes and relationships. Checking the secutiry level of a user it might be possible to use a phyton visualization that checks the properties to only display nodes and relationships that are in agreement with the user security level. | 1 | 0 | 0 | Authorization (subgraph access control) in Neo4j with python driver | 2 | python,neo4j,authorization,graph-databases,py2neo | 0 | 2017-01-25T11:25:00.000 |
I'm trying to figure out how to download a file from google cloud storage bucket.
My use-case is to run a scheduled script which downloads a .csv file once a day and save it to a SQL DB.
I considered doing it using python and the google SDK but got lost with all the options and which one is the right for me.
Could someone can explain the difference between cloud storage client, boto, gsutil, and google cloud SDK?
Thanks! | 0 | 2 | 0.197375 | 0 | false | 41,862,332 | 0 | 736 | 1 | 0 | 0 | 41,862,312 | Look into gcs-fuse: Makes like a lot easier since you then can use the GCS as just a standard file system. | 1 | 0 | 0 | Programatically download file from google cloud storage bucket | 2 | python,google-cloud-storage,boto,google-developer-tools | 0 | 2017-01-25T21:51:00.000 |
How can I write an excel file(xls/xlsx) using xlrd module alone?
I tried from xlrd import xlsx, but couldn't find anything that will really help me. | 1 | 2 | 1.2 | 0 | true | 41,904,256 | 0 | 4,503 | 1 | 0 | 0 | 41,886,791 | xlrd only reads excel files. To write them, look up xlwt, xlutils, xlsxwriter, or openpyxl - all of these packages can write binary files excel can read. Excel can also read csv files, which the csv package (included with Python) can write (and read). | 1 | 0 | 0 | How to use xlrd for writing an excel file | 1 | python,excel,python-3.x,module,xlrd | 0 | 2017-01-27T04:04:00.000 |
Hi Stack Overflow community, making some architectural decisions & trying to figure out the best strategy to store locations of 50k users who are moving around, in an environment where we care about read & write speed a lot, but don't mind occasionally losing data.
Should one
use an in-memory datastore like Redis or Memcached, or
use Postgres, with an index on the user_id so that it's fast to insert &
remove, or
use the filesystem directly, have a file for each
user_id, and write to it or read from it to store new locations, or
just store the locations in memory, in a Python program which
maintains an ordered list of (user_id, location) tuples
What are the advantages/ disadvantages of each? | 0 | 1 | 1.2 | 0 | true | 41,916,446 | 0 | 180 | 1 | 0 | 0 | 41,916,404 | I've had tremendous luck with MySQL and SQLAlchemy. 50k writes per day is nothing. I write my logs to it, I log my threads (think about that, I write logs to it and log each thread) and I process 2.5 million records per day, each generating about 100 logs each. | 1 | 0 | 0 | Best database to store locations of users as they move, priority given to read & write speed? | 1 | python,postgresql,memory,redis,memcached | 0 | 2017-01-29T00:29:00.000 |
How beneficial will it be to use Python/PHP Nonpersistent array for storing 6GB+ data with 800+ million rows in RAM, rather than using MySQL/MongoDB/Cassandra/BigTable/BigData(Persistence Database) database when it comes to speed/latency in simple query execution?
For example, finding one name in 800+ million rows within 1 second: is it possible? Does anyone have experience of dealing with a dataset of more than 1-2 billion rows and getting the result within 1 second for a simple search query?
Is there a better, proven methodology to deal with billions of rows? | 16 | 0 | 0 | 0 | false | 45,958,112 | 0 | 472 | 2 | 0 | 0 | 41,935,280 | You can still take advantage of RAM-based lookups and still have the extra functionality that specialized databases provide compared to a plain hashmap/array in RAM.
Your objective with RAM-based lookups is faster lookups and avoiding network overhead. However, both can be achieved by hosting the database locally, and the network would not even be an overhead for small data payloads like names.
With the RAM array method, the app's resilience decreases since you have a single point of failure and no easy snapshotting, i.e. you would have to do some data warming every time your app changes or restarts, and you will always be restricted to a single querying pattern and may not be able to evolve in the future.
Equally good alternatives with reasonably comparable throughput would be Redis in a cluster or master-slave configuration, or Aerospike on SSD machines. You get the advantage of constant snapshots, high throughput, distribution and resilience through sharding/clustering, i.e. 1/8 of the data on each of 8 instances so that there is no single point of failure. | 1 | 0 | 0 | Persistence Database(MySQL/MongoDB/Cassandra/BigTable/BigData) Vs Non-Persistence Array (PHP/PYTHON) | 3 | python,mongodb,optimization,query-optimization,bigdata | 0 | 2017-01-30T11:53:00.000
How beneficial will it be to use Python/PHP Nonpersistent array for storing 6GB+ data with 800+ million rows in RAM, rather than using MySQL/MongoDB/Cassandra/BigTable/BigData(Persistence Database) database when it comes to speed/latency in simple query execution?
For example, finding one name in 800+ million rows within 1 second: is it possible? Does anyone have experience of dealing with a dataset of more than 1-2 billion rows and getting the result within 1 second for a simple search query?
Is there a better, proven methodology to deal with billions of rows? | 16 | 4 | 0.26052 | 0 | false | 41,935,572 | 0 | 472 | 2 | 0 | 0 | 41,935,280 | It should make a very big difference, around 4-5 orders of magnitude faster. The database stores records in 4KB blocks (usually), and bringing each such block into memory takes some milliseconds. Divide the size of your table by 4KB and you get the picture. In contrast, the corresponding times for in-memory data are usually nanoseconds. There is no question that memory is faster; the real question is whether you have enough memory and how long you can keep your data there.
However, the above holds for a select * from table query. If you want a select * from table where name=something, you can create an index on the name, so that the database does not have to scan the whole file, and the results should be much, much better, probably very satisfying for practical use. | 1 | 0 | 0 | Persistence Database(MySQL/MongoDB/Cassandra/BigTable/BigData) Vs Non-Persistence Array (PHP/PYTHON) | 3 | python,mongodb,optimization,query-optimization,bigdata | 0 | 2017-01-30T11:53:00.000 |
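To make the indexed-lookup point concrete, a small sketch using Python's built-in sqlite3; the table and column names are made up, and the CREATE INDEX idea carries over to MySQL and the other engines mentioned:

import sqlite3

conn = sqlite3.connect("people.db")   # hypothetical database file
conn.execute("CREATE TABLE IF NOT EXISTS people (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_people_name ON people(name)")

# With the index in place, this lookup no longer scans the whole table.
row = conn.execute("SELECT * FROM people WHERE name = ?", ("Alice",)).fetchone()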
We are looking for a solution which uses the minimum read/write units of a DynamoDB table for performing full backup, incremental backup and restore operations. Backups should be stored in AWS S3 (open to other alternatives). We have thought of a few options, such as:
1) Using Python multiprocessing and the boto module we were able to perform full backup and restore operations; it performs well, but consumes more DynamoDB read/write units.
2) Using the AWS Data Pipeline service, we were able to perform full backup and restore operations.
3) Using DynamoDB Streams with the Kinesis Adapter, or DynamoDB Streams with a Lambda function, we were able to perform incremental backup.
Are there other alternatives for full backup, incremental backup and restore operations? The main limitation/need is to have a scalable solution that uses minimal read/write units of the DynamoDB table. | 3 | 1 | 0.099668 | 0 | false | 42,009,940 | 1 | 1,135 | 1 | 0 | 0 | 41,973,955 | Options #1 and #2 are almost the same: both do a Scan operation on the DynamoDB table, thereby consuming the maximum number of RCUs.
Option #3 will save RCUs, but restoring becomes a challenge. If a record is updated more than once, you'll have multiple copies of it in the S3 backup because the record update will appear twice in the DynamoDB stream. So, while restoring you need to pick the latest record. You also need to handle deleted records correctly.
You should choose option #3 if the frequency of restoring is low, in which case you can run an EMR job over the incremental backups when needed. Otherwise, you should choose #1 or #2. | 1 | 0 | 0 | How to perform AWS DynamoDB backup and restore operations by utilizing minimal read/write units? | 2 | python-2.7,amazon-web-services,amazon-dynamodb,amazon-dynamodb-streams | 0 | 2017-02-01T07:14:00.000
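For reference, a minimal sketch of the Scan-based full backup that options #1/#2 boil down to, using boto3 with hypothetical table and bucket names; this is exactly the pattern that consumes read capacity in proportion to table size:

import json
import boto3

table = boto3.resource("dynamodb").Table("my-table")   # hypothetical table name
s3 = boto3.client("s3")

items, start_key = [], None
while True:
    kwargs = {"ExclusiveStartKey": start_key} if start_key else {}
    resp = table.scan(**kwargs)            # each page consumes read capacity
    items.extend(resp["Items"])
    start_key = resp.get("LastEvaluatedKey")
    if not start_key:
        break

s3.put_object(Bucket="my-backups", Key="full/my-table.json",
              Body=json.dumps(items, default=str))     # default=str handles Decimal values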
I'm using S3 instead of KMS to store essentially a credentials file, and Python to read the file's contents.
I manually set the file to be encrypted by clicking on it in S3 and going to Properties - Details - Server Side Encryption: AES-256.
In my Python script, I read the key with no changes from when I read the file while it was unencrypted. I was also able to download the file and open it without having to do anything like decrypting it. I was expecting to have to decrypt it, so I'm a little confused.
I'm just unable to understand what server-side encryption protects against. Would anyone who already has access to S3, or to the S3 bucket with the key/file, be able to read the file? Who wouldn't be able to open the file? | 1 | 7 | 1.2 | 0 | true | 41,987,427 | 1 | 3,317 | 1 | 0 | 0 | 41,987,133 | The "server-side" encryption you have enabled turns on encryption at rest, which means the file is encrypted while it's sitting in S3. But S3 will decrypt the file before sending you the data when you download it.
So there is no change to how you handle the file when downloading it, whether the file is encrypted or not.
This type of encryption does not protect the file if the file is downloaded via valid means, such as when using the API. It only protects the file from being read if someone were to circumvent the S3 data center or something like that.
If you need to protect the file, such that it must be decrypted when downloaded, then you need to encrypt it client-side, before uploading it to S3.
You can use any client-side encryption scheme you deem worthy: AES256, etc. But S3 won't do it for you. | 1 | 0 | 0 | Do I ever have to decrypt S3-encrypted files? | 1 | python,amazon-web-services,encryption,amazon-s3 | 0 | 2017-02-01T18:34:00.000 |
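A minimal sketch of the client-side encryption the answer suggests, using the cryptography package's Fernet together with boto3; the key handling, bucket and object names are hypothetical and deliberately simplified:

import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store this somewhere safe (e.g. a secrets manager), not in code
fernet = Fernet(key)
s3 = boto3.client("s3")

# Encrypt locally, then upload; S3 only ever sees ciphertext.
with open("credentials.ini", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
s3.put_object(Bucket="my-bucket", Key="credentials.ini.enc", Body=ciphertext)

# Downloading now requires an explicit decrypt step.
obj = s3.get_object(Bucket="my-bucket", Key="credentials.ini.enc")
plaintext = fernet.decrypt(obj["Body"].read())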
How can I insert a datetime string like "2017-10-13T10:53:53.000Z" into MongoDB as an ISODate?
I get a string in MongoDB when I insert:
datetime.strptime("2017-10-13T10:53:53.000Z", "%Y-%m-%dT%H:%M:%S.000Z") | 25 | 2 | 0.197375 | 0 | false | 41,999,635 | 0 | 43,049 | 1 | 0 | 0 | 41,999,094 | Use dateutil:
dateutil.parser.parse("2017-10-13T10:53:53.000Z")
will return datetime.datetime(2017, 10, 13, 10, 53, 53, tzinfo=tzutc()) | 1 | 0 | 1 | How to insert datetime string into Mongodb as ISODate using pymongo | 2 | python,pymongo | 0 | 2017-02-02T09:58:00.000 |
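Continuing the answer's suggestion, a minimal sketch showing that inserting the parsed datetime with pymongo stores it as an ISODate; the database and collection names are hypothetical:

from dateutil import parser
from pymongo import MongoClient

dt = parser.parse("2017-10-13T10:53:53.000Z")   # timezone-aware datetime

client = MongoClient()                          # hypothetical local MongoDB
client.mydb.events.insert_one({"created_at": dt})
# In the mongo shell the field now shows up as an ISODate value, not a string.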
I was wondering if there is a way to allow a user to export a SQLite database as a .csv file, make some changes to it in a program like Excel, then upload that .csv file back to the table it came from using a record UPDATE method.
Currently I have a client that needed an inventory and pricing management system for their e-commerce store. I designed a database system and logic in Python 3 and SQLite. The system from a programming standpoint works flawlessly.
The problem I have is that there are some less-than-technical office staff who need to edit things like product markup within the database. Currently, I have them set up with SQLite DB Browser; from there they can edit products one at a time and write the changes to the database. They can also export tables to a .csv file for data manipulation in Excel.
The main issue is getting that .csv file back into the table it was exported from using an UPDATE method. When importing a .csv file into a table in SQLite DB Browser there is no way to perform an update import. It can only insert new rows by default, and due to my table constraints that is a problem.
I like SQLite DB Browser because it is clean and simple and does exactly what I need. However, as soon as you have to edit more than one thing at a time and filter information in more complicated ways, it starts to lack the needed functionality.
Is there a solution out there for SQLite DB Browser to tackle this problem? Is there a better software option altogether to interact with a SQLite database that would give me that last bit of functionality? | 0 | 0 | 1.2 | 0 | true | 42,173,826 | 0 | 572 | 1 | 0 | 0 | 42,008,720 | So after researching some off-the-shelf options I found that the Devart Excel Add-ins did exactly what I needed. They are paid add-ins; however, they seem to support almost all modern databases, including SQLite. Once the add-in is installed you can connect to a database and manipulate the returned data just like normal in Excel, including bulk edits and advanced filtering; all changes are highlighted and can easily be written back to the database with one click.
Overall I thought it was a pretty solid solution and everyone seems to be very happy with it, as it made interacting with a database intuitive and non-threatening to the more technically challenged. | 1 | 0 | 0 | User friendly SQLite database csv file import update solution | 2 | python,database,sqlite,csv | 0 | 2017-02-02T17:34:00.000
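Since the question is tagged python, a minimal sketch of the update-import itself scripted with the standard library; the table and column names are made up, and this is not the Devart route the accepted answer took:

import csv
import sqlite3

conn = sqlite3.connect("store.db")                 # hypothetical database file
with open("products_edited.csv", newline="") as f:
    rows = list(csv.DictReader(f))                 # expects "id" and "markup" columns in the CSV

# Update existing records in place instead of inserting new rows.
# Note: csv values arrive as strings; cast them first if the column type matters.
conn.executemany(
    "UPDATE products SET markup = :markup WHERE id = :id",
    rows,
)
conn.commit()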
I'm using a query like this:
user = User.query.options(load_only("email", "name")).filter(and_(User.id == id, User.status == 1)).first()
I want to get only the email and name columns as a User object, but it returns all columns.
I can't find any solutions. Can anybody help? Thanks | 3 | 1 | 0.197375 | 0 | false | 42,028,145 | 1 | 1,278 | 1 | 0 | 0 | 42,019,810 | If you're using a database session, you can simply specify the columns directly.
session.query(User.email, User.name).filter(and_(User.id == id, User.status == 1)).first() | 1 | 0 | 0 | SQLAlchemy ORM Load Cols Only not working | 1 | python,sqlalchemy | 0 | 2017-02-03T08:27:00.000 |
I want to insert a document into the collection from a JSON file, but it says bson.errors.InvalidDocument: key '$oid' must not start with '$'.
How can I solve it?
example of document:
[{"name": "Company", "_id": {"$oid": "1234as123541gsdg"}, "info": {"email": "[email protected]"}}] | 5 | -1 | -0.099668 | 0 | false | 63,628,554 | 0 | 9,580 | 1 | 0 | 0 | 42,089,045 | Try removing all white space in the files (\n, spaces outside string quotes). It may work like miracle | 1 | 0 | 1 | bson.errors.InvalidDocument: key '$oid' must not start with '$' trying to insert document with pymongo | 2 | python,pymongo | 0 | 2017-02-07T11:44:00.000 |
I'm building a platform with a PostgreSQL database (first time), but I've had experience with Oracle and MySQL databases for a few years now.
My question is about the UUID data type in Postgres.
I am using a UUIDv4 value to identify a record in multiple tables, so a request to /users/2df2ab0c-bf4c-4eb5-9119-c37aa6c6b172 will respond with the user that has that UUID. I also have an auto-increment ID field for indexing.
My query is just a select with a where clause on the UUID. But when the user enters an invalid UUID like 2df2ab0c-bf4c-4eb5-9119-c37aa6c6b17 (without the last 2), the database responds with this error: Invalid input syntax for UUID.
I was wondering why it returns this, because when you select on an integer-type column with a string-type value it does work.
Now I need to set up a middleware/check on each route that has a UUID-type parameter in it, because otherwise the server would crash.
Btw I'm using Flask 0.12 (Python) and PostgreSQL 9.6 | 0 | 0 | 0 | 0 | false | 51,623,094 | 0 | 1,782 | 1 | 0 | 0 | 42,096,970 | The database is throwing an error because you're trying to match in a UUID-type column with a query that doesn't contain a valid UUID. This doesn't happen with integer or string queries because leaving off the last character of those does result in a valid integer or string, just not the one you probably intended.
You can either prevent passing invalid UUIDs to the database by validating your input (which you should be doing anyway for other reasons) or somehow trap on this error. Either way, you'll need to present a human-readable error message back to the user.
Also consider whether users should be typing in URLs with UUIDs in the first place, which isn't very user-friendly; if they're just clicking links rather than typing them, as usually happens, then how did that error even happen? There's a good chance that it's an attack of some sort, and you should respond accordingly. | 1 | 0 | 0 | PostgreSQL UUID date type | 2 | python,postgresql,uuid | 0 | 2017-02-07T18:11:00.000 |
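A minimal sketch of the input validation the answer recommends, using the standard library's uuid module; the Flask route shown is hypothetical:

import uuid
from flask import Flask, abort

app = Flask(__name__)

def parse_uuid(value):
    # Return a uuid.UUID, or None instead of letting PostgreSQL raise the syntax error.
    try:
        return uuid.UUID(value)
    except ValueError:
        return None

@app.route("/users/<user_id>")
def get_user(user_id):
    uid = parse_uuid(user_id)
    if uid is None:
        abort(404)        # or 400, depending on how you want to answer bad UUIDs
    return str(uid)       # placeholder: query the uuid column with uid here

Depending on your Werkzeug version, the built-in <uuid:user_id> route converter may give you the same guarantee without the helper.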
I am making a database with data in it. That database has two customers: 1) a .NET web server that makes the data visible to users in some way, and 2) a Python data miner that creates the data and populates the tables.
I have several options. I can use the .NET Entity Framework to create the database, then reverse-engineer it on the Python side. I can do it the other way around. I can just write raw SQL statements in one or the other system, or both. What are the possible pitfalls of doing this one way or the other? I'm worried, for example, that if I use the Python ORM to create the tables, then I'm going to have a hard time in the .NET space... | 0 | 0 | 0 | 0 | false | 42,144,831 | 1 | 79 | 1 | 0 | 0 | 42,144,698 | I love questions like that.
Here is what you have to consider: your web site has to be fast, and the bottleneck of most web sites is the database. The answer to your question would be to make it easy for .NET to work with SQL. That will require a little more work on the Python side, like specifying table names, maybe row names. I think Django and SQLAlchemy are both good for that.
Another solution could be to have a bridge between the database with the gathered data and the database used to display data. In the background you can have a task/job to migrate the collected data to your main database. That is also an option and will make your job easier; at least all database-specific and strange code will go into the third component.
I worked with .NET for quite a long time before I switched to Python, and what you should know is that whatever strategy you choose, it will be possible to work with the data in both languages and ORMs. Do the hardest part of the job in the language you know better. If you are a Python developer, pick Python to deal with getting the names of tables and rows right. | 1 | 0 | 1 | Sharing an ORM between languages | 1 | python,.net,database,orm | 0 | 2017-02-09T18:53:00.000
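To illustrate the advice about spelling out names on the Python side, a small SQLAlchemy sketch with explicit table and column names that a .NET/Entity Framework model could be mapped onto; every name here is hypothetical:

from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class MinedRecord(Base):
    __tablename__ = "MinedRecords"                 # exact table name the .NET side will map to
    id = Column("Id", Integer, primary_key=True)   # explicit column names, .NET-style casing
    source = Column("Source", String(100))
    collected_at = Column("CollectedAt", DateTime)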
I am trying to insert about 1 million records into PostgreSQL. Since I create the table dynamically, I don't have any models associated with it, so I can't use Django's bulk insert.
Is there any method of inserting the data in an efficient manner?
I am trying to use single insert statements, but this is very time consuming and too slow. | 1 | -1 | -0.197375 | 0 | false | 42,154,977 | 1 | 311 | 1 | 0 | 0 | 42,153,732 | Your problem is not really about Django. You'd better carry the data (not necessary, but it could be good) to the server you want to insert it on, and create a simple Python program or something else to insert the data.
Avoid inserting data of this size through an HTTP server. | 1 | 0 | 0 | Insert bulk data django using raw query | 1 | python,django,postgresql | 0 | 2017-02-10T07:24:00.000
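In the spirit of the simple Python program the answer suggests, a minimal sketch using psycopg2's COPY support, which is usually the fastest way to load a million rows into PostgreSQL; the connection string, table and column names are hypothetical:

import psycopg2

conn = psycopg2.connect("dbname=app user=app password=secret host=localhost")
cur = conn.cursor()

# Stream a CSV-like file straight into the dynamically created table.
with open("records.csv") as f:
    cur.copy_from(f, "my_dynamic_table", sep=",", columns=("col_a", "col_b", "col_c"))

conn.commit()
cur.close()
conn.close()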
I have created a table from the MySQL command line and I'm able to interact with it using Python really well. However, I wanted to be able to change values in the table more easily, so I installed MySQL Workbench to do so.
I have been able to connect to my server, but when I try to change any values after selecting a table, it doesn't let me edit them. I tried making a new table within MySQL Workbench and I could edit that one.
So, I started to use that table. However, when editing that table, Python stopped working, so I made another table from the command line again and it works!
Does anyone know how to fix either of these problems? It seems MySQL Workbench can only edit tables that have been created with Workbench, and not from the command line. There must be a configuration option somewhere that is limiting this.
Thanks in advance! | 0 | 0 | 0 | 0 | false | 42,220,939 | 0 | 94 | 1 | 0 | 0 | 42,211,463 | Editing a table means being able to write back data in a way that reliably addresses the records that have changed. In MySQL Workbench, certain conditions must be met to make this possible. A result set:
must have a primary key
must not have any aggregates or unions
must not contain subselects
When you do updates in a script, you usually have more freedom, since you can write a WHERE clause that limits changes to a concrete record. | 1 | 0 | 0 | MySQL Workbench can't edit a table that was created using Command Line | 1 | python,mysql,mysql-workbench | 0 | 2017-02-13T18:55:00.000