Dataset schema (column: type, value range):
Title: string (length 15 to 150)
A_Id: int64 (2.98k to 72.4M)
Users Score: int64 (-17 to 470)
Q_Score: int64 (0 to 5.69k)
ViewCount: int64 (18 to 4.06M)
Database and SQL: int64 (0 to 1)
Tags: string (length 6 to 105)
Answer: string (length 11 to 6.38k)
GUI and Desktop Applications: int64 (0 to 1)
System Administration and DevOps: int64 (1 to 1)
Networking and APIs: int64 (0 to 1)
Other: int64 (0 to 1)
CreationDate: string (length 23 to 23)
AnswerCount: int64 (1 to 64)
Score: float64 (-1 to 1.2)
is_accepted: bool (2 classes)
Q_Id: int64 (1.85k to 44.1M)
Python Basics and Environment: int64 (0 to 1)
Data Science and Machine Learning: int64 (0 to 1)
Web Development: int64 (0 to 1)
Available Count: int64 (1 to 17)
Question: string (length 41 to 29k)
App Engine serving old version intermittently
38,788,820
1
0
190
0
python,google-app-engine
You have multiple layers of caching beyond memcache: Google's edge cache will definitely cache static content, especially if your app is referenced by your own domain and not appspot.com. You will probably need to use some cache-busting techniques. You can test this by requesting the URL that is presenting old content with something like ?x=1 appended to it. If you then get current content, the edge cache is your problem, and hence the need for cache-busting techniques.
0
1
0
0
2016-08-03T10:43:00.000
1
1.2
true
38,741,327
0
0
1
1
I've deployed a new version which contains just one image replacement. After migrating traffic (100%) to the new version I can see that only this version now has active instances. However, 2 days later, App Engine is still intermittently serving the old image, so I assume the previous version. When I ping the domain I can see that the latest version has one IP address and the old version has another. My question is: how do I force App Engine to only serve the new version? I'm not using traffic splitting either. Any help would be much appreciated. Regards, Danny
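To run the cache-busting test from the answer above programmatically, a minimal Python sketch with the requests library (the asset URL is a placeholder):

import time
import requests

# Placeholder URL of the asset that is intermittently stale.
ASSET_URL = "https://your-app.example.com/static/image.png"

plain = requests.get(ASSET_URL)
# Append a throwaway query parameter to bypass the edge cache.
busted = requests.get(ASSET_URL, params={"x": int(time.time())})

if plain.content != busted.content:
    print("Edge cache is serving a stale copy")
else:
    print("Responses match; the edge cache is probably not the culprit")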
How to use a Seafile generated upload-link w/o authentication token from command line
38,743,242
9
5
2,271
0
python,curl,urllib2,http-upload,seafile-server
It took me two hours to find a solution with curl; it needs two steps: First, make a GET request to the public upload-link URL with the repo id as a query parameter, as follows: curl 'https://cloud.seafile.com/ajax/u/d/98233edf89/upload/?r=f3e30b25-aad7-4e92-b6fd-4665760dd6f5' -H 'Accept: application/json' -H 'X-Requested-With: XMLHttpRequest' The answer is JSON containing the upload URL to use in the next POST, e.g.: {"url": "https://cloud.seafile.com/seafhttp/upload-aj/c2b6d367-22e4-4819-a5fb-6a8f9d783680"} Then use this link to initiate the upload POST: curl 'https://cloud.seafile.com/seafhttp/upload-aj/c2b6d367-22e4-4819-a5fb-6a8f9d783680' -F file=@./tmp/index.html -F filename=index.html -F parent_dir="/my-repo-dir/" The answer is JSON again, e.g.: [{"name": "index.html", "id": "0a0742facf24226a2901d258a1c95e369210bcf3", "size": 10521}] Done ;)
0
1
1
0
2016-08-03T11:54:00.000
2
1
false
38,742,893
0
0
1
1
With Seafile one is able to create a public upload link (e.g. https://cloud.seafile.com/u/d/98233edf89/) to upload files via a browser without authentication. The Seafile web API does not support any upload without an authentication token. How can I use such a link from the command line with curl, or from a Python script?
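The question also asks about doing this from a Python script; a minimal sketch of the same two-step flow with the requests library (URLs and paths are the ones from the curl answer above, so treat them as placeholders):

import requests

UPLOAD_LINK = "https://cloud.seafile.com/ajax/u/d/98233edf89/upload/"
REPO_ID = "f3e30b25-aad7-4e92-b6fd-4665760dd6f5"

# Step 1: ask the public upload link for the actual upload endpoint.
resp = requests.get(
    UPLOAD_LINK,
    params={"r": REPO_ID},
    headers={"Accept": "application/json",
             "X-Requested-With": "XMLHttpRequest"},
)
upload_url = resp.json()["url"]

# Step 2: POST the file to the endpoint returned in step 1.
with open("./tmp/index.html", "rb") as f:
    result = requests.post(
        upload_url,
        files={"file": ("index.html", f)},
        data={"filename": "index.html", "parent_dir": "/my-repo-dir/"},
    )
print(result.json())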
Python 2.7 on OS X: TypeError: 'frozenset' object is not callable on each command
38,767,427
4
2
8,871
0
python,python-2.7,hashlib,frozenset
Removing this package helped me: sudo rm -rf /Library/Python/2.7/site-packages/hashlib-20081119-py2.7-macosx-10.11-intel.egg
0
1
0
0
2016-08-03T19:08:00.000
2
1.2
true
38,751,800
0
0
0
1
I have this error on each my command with Python: ➜ /tmp sudo easy_install pip Traceback (most recent call last): File "/usr/bin/easy_install-2.7", line 11, in load_entry_point('setuptools==1.1.6', 'console_scripts', 'easy_install')() File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 357, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2394, in load_entry_point return ep.load() File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2108, in load entry = __import__(self.module_name, globals(),globals(), ['__name__']) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/setuptools/__init__.py", line 11, in from setuptools.extension import Extension File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/setuptools/extension.py", line 5, in from setuptools.dist import _get_unpatched File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/setuptools/dist.py", line 15, in from setuptools.compat import numeric_types, basestring File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/setuptools/compat.py", line 17, in import httplib File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 80, in import mimetools File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/mimetools.py", line 6, in import tempfile File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tempfile.py", line 35, in from random import Random as _Random File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/random.py", line 49, in import hashlib as _hashlib File "build/bdist.macosx-10.11-intel/egg/hashlib.py", line 115, in """ TypeError: 'frozenset' object is not callable What can I do with this?
Find out result size of unpacked archive without unpacking it. Or stop unpacking when certain size is exceed
38,768,586
0
0
1,601
0
python,linux,archive,7zip,rar
I can't give you a native python answer, but, if you need to fall back on os.system, the command-line utilities for handling all four formats have switches which can be used to list the contents of the archive including the size of each file and possibly a total size: rar: unrar l FILENAME.rar lists information on each file and the total size. zip: unzip -l FILENAME.zip lists size, timestamp, and name of each file, along with the total size. 7z: 7z l FILENAME.7z lists the details of each file and the total size. tar: tar -tvf FILENAME.tar or tar -tvzf FILENAME.tgz (or .tar.gz) lists details of each file including file size. No total size is provided, so you'll need to add them up yourself. If you're looking at native python libraries, you can also check for whether they have a "list" or "test" function. Those are the terms used by the command-line tools to describe the switches I mentioned above, so the same names are likely to have been used by the library authors.
0
1
0
0
2016-08-04T12:39:00.000
2
0
false
38,767,584
0
0
0
1
I need to validate the resulting size of an unpacked archive without unpacking it, to prevent huge archives from being stored on my server. Or start unpacking and stop when a certain size is exceeded. I have already tried the pyunpack lib, but it only allows unpacking archives. I need to validate these archive extensions: rar, zip, 7z, tar. Maybe I can do it using some Linux features by calling them via os.system.
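For the zip and tar cases, the Python standard library can report per-member uncompressed sizes without extracting anything; a minimal sketch (the size limit is made up, and rar/7z have no stdlib support, so those still need the external tools from the answer above):

import tarfile
import zipfile

MAX_BYTES = 100 * 1024 * 1024  # hypothetical 100 MB limit

def unpacked_size(path):
    # Sum the uncompressed sizes of all members without extracting.
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            return sum(info.file_size for info in zf.infolist())
    if tarfile.is_tarfile(path):
        with tarfile.open(path) as tf:
            return sum(member.size for member in tf.getmembers())
    raise ValueError("unsupported archive format: %s" % path)

if unpacked_size("upload.zip") > MAX_BYTES:
    print("archive too large, rejecting")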
Dask worker persistent variables
38,772,791
1
2
340
0
python,dask
The workers themselves are just Python processes, so you could do tricks with globals(). However, it is probably cleaner to emit values and pass these between tasks. Dask retains the right to rerun functions and run them on different machines, so depending on global state or worker-specific state can easily get you into trouble.
0
1
0
0
2016-08-04T16:10:00.000
1
1.2
true
38,772,455
1
0
0
1
Is there a way with dask to have a variable that can be retrieved from one task to another? I mean a variable that I could lock in the worker and then retrieve in the same worker when I execute another task.
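A minimal sketch of the globals() trick the answer mentions, with its caveat: tasks may be rerun or scheduled onto a different worker, so treat this as a per-process cache rather than reliable state (client setup shown in comments):

def expensive_setup():
    return {"model": "loaded"}  # placeholder for real initialization

def task(x):
    # Worker processes are plain Python processes, so module-level
    # globals persist between tasks that run in the same worker.
    cache = globals().setdefault("_worker_cache", {})
    if "resource" not in cache:
        cache["resource"] = expensive_setup()
    return (cache["resource"], x)

# Submitted via a distributed client, e.g.:
# from dask.distributed import Client
# client = Client()
# futures = client.map(task, range(10))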
How to calculate the width of the Windows CMD shell in pixels?
38,778,425
0
2
206
0
python,linux,shell,cmd
You can't, but the default command prompt dimensions are 677 by 343 pixels: 677 being the width and 343 the height. Hope this helps.
0
1
0
0
2016-08-04T21:57:00.000
1
0
false
38,778,079
0
0
0
1
I'm working on a Python shell script that is supposed to fill a percentage of the user's screen. The shell's width, however, is calculated in characters instead of pixels, and I find it difficult to compare them to the screen resolution (which is obviously in pixels). How can I effectively calculate the width in characters with only the screen pixels while still being able to support both Windows and Linux? For the sake of the question, let's assume none of the users have changed their shell settings from the default ones.
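A hedged sketch of gathering both quantities from Python: the console size in characters comes from the standard library (Python 3.3+), and the screen size in pixels from the Windows API on Windows; together these let you relate characters to pixels:

import os
import shutil

# Console size in characters; works on Windows and Linux (Python 3.3+).
cols, rows = shutil.get_terminal_size()
print("console: %d columns x %d rows" % (cols, rows))

if os.name == "nt":
    import ctypes
    # Screen resolution in pixels via the Windows API.
    user32 = ctypes.windll.user32
    screen_w = user32.GetSystemMetrics(0)  # SM_CXSCREEN
    screen_h = user32.GetSystemMetrics(1)  # SM_CYSCREEN
    print("screen: %d x %d pixels" % (screen_w, screen_h))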
Can a LP created on a Windows platform be run on a Linux platform?
38,780,526
0
1
51
0
python,linux,windows,matlab,gurobi
Yes, you can write Gurobi Python code on one system, then copy it and run it on another. You can go from Windows to Linux, Mac to Windows, etc. Alternately, if you have Gurobi Compute Server, your Windows computer can be a client of your Linux server.
0
1
0
0
2016-08-05T02:33:00.000
2
0
false
38,780,223
1
0
0
1
I have a huge MILP in Matlab, which I want to re-program in Gurobi using python language, on a Windows desktop. But after that I want to run it on a super computer which has a Linux os. I know python is cross-platform. Does this mean anything I create in Gurobi on Windows will run on Linux too? If this question is dumb I'm sorry, I just want to know for sure.
Pointing bash to a python installed on windows
38,797,912
0
11
7,716
0
python,windows,bash,windows-subsystem-for-linux
You have at least four options: Specify the complete absolute path to the python executable you want to use. Define an alias in your .bashrc file Modify the PATH variable in your .bashrc file to include the location of the python version you wish to use. Create a symlink in a directory which is already in your PATH.
0
1
0
0
2016-08-05T17:35:00.000
3
0
false
38,794,937
1
0
0
2
I am using Windows 10 and have Python installed. The new update brought bash to windows, but when I call python from inside bash, it refers to the Python installation which came with the bash, not to my Python installed on Windows. So, for example, I can't use the modules which I have already installed on Windows and would have to install them separately on the bash installation. How can I (and can I?) make bash point to my original Windows Python installation? I see that in /usr/bin I have a lot of links with "python" inside their name, but I am unsure which ones to change, and if changing them to Windows directories would even work because of different executable formats.
Pointing bash to a python installed on windows
40,900,477
5
11
7,716
0
python,windows,bash,windows-subsystem-for-linux
As of Windows 10 Insider build #14951, you can now invoke Windows executables from within Bash. You can do this by explicitly calling the absolute path to an executable (e.g. c:\Windows\System32\notepad.exe), or by adding the executable's path to the bash path (if it isn't already), and just calling, for example, notepad.exe. Note: Be sure to append the .exe to the name of the executable - this is how Linux knows that you're invoking something foreign and routes the invocation request to the registered handler - WSL in this case. So, in your case, if you've installed Python 2.7 on Windows at C:\, you might invoke it using a command like this from within bash: $ /mnt/c/Python2.7/bin/python.exe (or similar - check you have specified each folder/filename case correctly, etc.) HTH.
0
1
0
0
2016-08-05T17:35:00.000
3
1.2
true
38,794,937
1
0
0
2
I am using Windows 10 and have Python installed. The new update brought bash to windows, but when I call python from inside bash, it refers to the Python installation which came with the bash, not to my Python installed on Windows. So, for example, I can't use the modules which I have already installed on Windows and would have to install them separately on the bash installation. How can I (and can I?) make bash point to my original Windows Python installation? I see that in /usr/bin I have a lot of links with "python" inside their name, but I am unsure which ones to change, and if changing them to Windows directories would even work because of different executable formats.
how to install hypothesis Python package?
38,796,484
1
0
1,175
0
python,wing-ide,python-packaging
pip install hypothesis, assuming you have pip. If you want to install it from the downloaded package, open a command prompt, cd to the directory where you downloaded it, and run python setup.py install
0
1
0
0
2016-08-05T19:20:00.000
2
0.099668
false
38,796,441
1
0
0
1
I'm using Wing IDE; how do I install the hypothesis Python package on my computer? I have already downloaded the zip file. Do I use the command prompt to install it, or is there an option in Wing IDE to do it?
Where can I get pycharm-debug.egg for Idea?
55,854,571
4
9
6,168
0
python,debugging,intellij-idea,pycharm,remote-debugging
I just contacted JetBrains and was informed that their documentation is out of date and that it's now located in /Users/<user>/Library/Application Support/<product_version>/python.
0
1
0
0
2016-08-06T21:03:00.000
4
0.197375
false
38,808,690
1
0
0
1
I can't find pycharm-debug.egg in the IntelliJ IDEA (2016.2) installation directory; where can I get it?
Why does the Google App Engine NDB datastore have both "—" and "null" for unkown data?
38,815,611
4
1
184
1
python,google-app-engine,null,google-cloud-datastore,app-engine-ndb
You have to specifically set the value to NULL, otherwise it will not be stored in the Datastore and you see it as missing in the Datastore viewer. This is an important distinction. NULL values can be indexed, so you can retrieve a list of entities where date of birth, for example, is null. On the other hand, if you do not set a date of birth when it is unknown, there is no way to retrieve a list of entities with date of birth property missing - you'll have to iterate over all entities to find them. Another distinction is that NULL values take space in the Datastore, while missing values do not.
0
1
0
0
2016-08-07T13:35:00.000
1
1.2
true
38,814,666
0
0
1
1
I recently updated an entity model to include some extra properties, and noticed something odd. For properties that have never been written, the Datastore query page shows a "—", but for ones that I've explicitly set to None in Python, it shows "null". In SQL, both of those cases would be null. When I query an entity that has both types of unknown properties, they both read as None, which fits with that idea. So why does the NDB datastore viewer differentiate between "never written" and "set to None", if I can't differentiate between them programmatically?
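A small sketch of the distinction the answer above draws, using a hypothetical ndb model (entity and property names are made up):

from google.appengine.ext import ndb

class Person(ndb.Model):
    name = ndb.StringProperty()
    # Property added in a later model version; entities written before
    # it existed show "—" (missing) in the Datastore viewer.
    birth_date = ndb.DateProperty()

# Explicitly storing None writes an indexed null ("null" in the viewer):
Person(name="Bob", birth_date=None).put()

# Indexed nulls can be queried; missing properties cannot:
unknown_dob = Person.query(Person.birth_date == None).fetch()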
Getting the targetdir variable must be provided when invoking this installer while installing python 3.5
38,842,351
89
25
28,893
0
python-3.x,window
Just right-click on the exe file and run as administrator. It worked for me :)
0
1
0
0
2016-08-08T11:19:00.000
3
1.2
true
38,827,889
1
0
0
2
I have Python 2.7 on my Windows 7 machine. The problem is with the Python 3.5 and 3.6 versions only.
Getting the targetdir variable must be provided when invoking this installer while installing python 3.5
52,044,332
2
25
28,893
0
python-3.x,window
There are a few ways to solve the issue: 1. As suggested above, right-click on the exe file and run as administrator. 2. Open a command prompt in administrator mode and take note of where your setup file is located, e.g. cd C:\Users\ABC\Downloads, then type python-3.7.0.exe TargetDir=C:\Python37 (note: my setup file was python-3.7.0.exe). 3. Try the custom installation and choose a clean folder location. In the custom installation you can tick or un-tick some options; choose only the one or two options which are required and leave the rest. Sometimes this troubleshooting step also helps the install succeed. 4. Go to the properties of the Python setup file, go to advanced settings and change the owner to administrator; also go to compatibility and tick "Run as administrator".
0
1
0
0
2016-08-08T11:19:00.000
3
0.132549
false
38,827,889
1
0
0
2
I have Python 2.7 on my Windows 7 machine. The problem is with the Python 3.5 and 3.6 versions only.
Access Jupyter notebook running on Docker container
38,936,551
85
96
99,109
0
python,docker,jupyter-notebook
You need to run your notebook on 0.0.0.0: jupyter notebook --ip 0.0.0.0. Running on localhost makes it available only from inside the container.
0
1
0
0
2016-08-08T13:33:00.000
11
1
false
38,830,610
0
0
0
7
I created a docker image with python libraries and Jupyter. I start the container with the option -p 8888:8888, to link ports between host and container. When I launch a Jupyter kernel inside the container, it is running on localhost:8888 (and does not find a browser). I used the command jupyter notebook But from my host, what is the IP address I have to use to work with Jupyter in host's browser ? With the command ifconfig, I find eth0, docker, wlan0, lo ... Thanks !
Access Jupyter notebook running on Docker container
48,486,958
0
96
99,109
0
python,docker,jupyter-notebook
In the container you can run the following to make it available on your local machine (using your docker machine's ip address). jupyter notebook --ip 0.0.0.0 --allow-root You may not need to provide the --allow-root flag depending on your container's setup.
0
1
0
0
2016-08-08T13:33:00.000
11
0
false
38,830,610
0
0
0
7
I created a docker image with python libraries and Jupyter. I start the container with the option -p 8888:8888, to link ports between host and container. When I launch a Jupyter kernel inside the container, it is running on localhost:8888 (and does not find a browser). I used the command jupyter notebook But from my host, what is the IP address I have to use to work with Jupyter in host's browser ? With the command ifconfig, I find eth0, docker, wlan0, lo ... Thanks !
Access Jupyter notebook running on Docker container
51,117,257
2
96
99,109
0
python,docker,jupyter-notebook
You can use the command jupyter notebook --allow-root --ip [IP of your container], or give access to all IPs using the option --ip 0.0.0.0.
0
1
0
0
2016-08-08T13:33:00.000
11
0.036348
false
38,830,610
0
0
0
7
I created a docker image with python libraries and Jupyter. I start the container with the option -p 8888:8888, to link ports between host and container. When I launch a Jupyter kernel inside the container, it is running on localhost:8888 (and does not find a browser). I used the command jupyter notebook But from my host, what is the IP address I have to use to work with Jupyter in host's browser ? With the command ifconfig, I find eth0, docker, wlan0, lo ... Thanks !
Access Jupyter notebook running on Docker container
62,352,225
0
96
99,109
0
python,docker,jupyter-notebook
docker run -i -t -p 8888:8888 continuumio/anaconda3 /bin/bash -c "/opt/conda/bin/conda install jupyter -y --quiet && mkdir /opt/notebooks && /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser --allow-root" I had to add --allow-root to the command and now it's running.
0
1
0
0
2016-08-08T13:33:00.000
11
0
false
38,830,610
0
0
0
7
I created a docker image with python libraries and Jupyter. I start the container with the option -p 8888:8888, to link ports between host and container. When I launch a Jupyter kernel inside the container, it is running on localhost:8888 (and does not find a browser). I used the command jupyter notebook But from my host, what is the IP address I have to use to work with Jupyter in host's browser ? With the command ifconfig, I find eth0, docker, wlan0, lo ... Thanks !
Access Jupyter notebook running on Docker container
71,815,877
0
96
99,109
0
python,docker,jupyter-notebook
Go into the container and check cat /etc/jupyter/jupyter_notebook_config.py: you should see / add this line: c.NotebookApp.allow_origin = 'https://colab.research.google.com'
0
1
0
0
2016-08-08T13:33:00.000
11
0
false
38,830,610
0
0
0
7
I created a docker image with python libraries and Jupyter. I start the container with the option -p 8888:8888, to link ports between host and container. When I launch a Jupyter kernel inside the container, it is running on localhost:8888 (and does not find a browser). I used the command jupyter notebook But from my host, what is the IP address I have to use to work with Jupyter in host's browser ? With the command ifconfig, I find eth0, docker, wlan0, lo ... Thanks !
Access Jupyter notebook running on Docker container
48,986,548
65
96
99,109
0
python,docker,jupyter-notebook
Host machine: docker run -it -p 8888:8888 image:version Inside the container: jupyter notebook --ip 0.0.0.0 --no-browser --allow-root Host machine: access this URL: localhost:8888/tree When you are logging in for the first time, there will be a link displayed on the terminal to log on with a token.
0
1
0
0
2016-08-08T13:33:00.000
11
1
false
38,830,610
0
0
0
7
I created a docker image with python libraries and Jupyter. I start the container with the option -p 8888:8888, to link ports between host and container. When I launch a Jupyter kernel inside the container, it is running on localhost:8888 (and does not find a browser). I used the command jupyter notebook But from my host, what is the IP address I have to use to work with Jupyter in host's browser ? With the command ifconfig, I find eth0, docker, wlan0, lo ... Thanks !
Access Jupyter notebook running on Docker container
46,086,088
12
96
99,109
0
python,docker,jupyter-notebook
To get the link to your Jupyter notebook server: After your docker run command, a hyperlink should be automatically generated. It looks something like this: http://localhost:8888/?token=f3a8354eb82c92f5a12399fe1835bf8f31275f917928c8d2 :: /home/jovyan/work If you want to get the link again later down the line, you can type docker exec -it <docker_container_name> jupyter notebook list.
0
1
0
0
2016-08-08T13:33:00.000
11
1
false
38,830,610
0
0
0
7
I created a docker image with python libraries and Jupyter. I start the container with the option -p 8888:8888, to link ports between host and container. When I launch a Jupyter kernel inside the container, it is running on localhost:8888 (and does not find a browser). I used the command jupyter notebook But from my host, what is the IP address I have to use to work with Jupyter in host's browser ? With the command ifconfig, I find eth0, docker, wlan0, lo ... Thanks !
pcapy.PcapError: eth1: You don't have permission to capture on that device
38,832,386
0
0
1,739
0
python,scapy,pcap
If you're running on Linux or OS X, try running as root or with sudo; if you're on Windows, try running as administrator.
0
1
1
0
2016-08-08T14:50:00.000
1
1.2
true
38,832,347
0
0
0
1
I am trying to run pcapy_sniffer.py but I get this: pcapy.PcapError: eth1: You don't have permission to capture on that device (socket: Operation not permitted)
What is the advantage of running python script using command line?
38,909,330
0
0
1,721
0
python,command-line,emacs
People use different tools for different purposes. An important question about the interface into any program is: who is the user? You, as a programmer, will use the interpreter to test a program and check for errors. Often, though, the user doesn't interact with the application/script through an interpreter at all. For example, with Python web applications, there is usually a main.py script to redirect client HTTP requests to appropriate handlers. These handlers execute a Python script automatically when a client requests it, and the output is then displayed to the user. In Python web applications, unless you are the developer trying to eliminate a bug in the program, you usually don't care about accessing variables within a file like main.py (in fact, giving the client access to those variables would pose a security issue in some cases). Since you only need the output of a script, you execute that script from the command line and display the result to the client. About best practices: again, it depends on what you are doing. Using the Python interpreter for computation is fine for smaller testing of isolated functions, but it doesn't work for larger projects where there are more moving parts in a Python script. If you have a Python script reaching a few hundred lines, you won't really remember or need to remember variable names. In that case, it's better to execute the script from the command line, since you don't need access to the internal components. You want to create a new script file if you are fashioning that script for a single set of tasks. With the handlers example above, the functions in main.py are all geared towards handling HTTP requests. For something like defining x, defining y, and then adding them, you don't really need your own file, since you aren't creating a function that you might need in the future, and adding two numbers is built in. However, if you have a bunch of functions you've created that aren't available as built-ins (a more involved example: a softmax function that maps a K-dimensional vector to another K-dimensional vector whose elements are each between 0 and 1 and sum to 1), you want to capture them in a script file and import that script in other Python scripts later.
0
1
0
1
2016-08-08T22:04:00.000
1
1.2
true
38,839,215
0
0
0
1
I am a beginner to Python and programming in general. As I am learning Python, I am trying to develop a good habit or follow a good practice. So let me first explain what I am currently doing. I use Emacs (prelude) to execute Python scripts. The keybinding C-c C-c evaluates the buffer which contains the Python script. Then I get a new buffer with a Python interpreter with a >>> prompt. In this environment all the variables used in the script are accessible. For example, if x and y were defined in the script, I can do >>> x + y to evaluate it. I see many people (if not most) around me using the command line to execute Python scripts (i.e., $ python scriptname.py). If I do this, then I return to the shell prompt, and I am not able to access the variables x and y to perform x + y. So I am not sure what the advantage of running Python scripts from the command line is. Should I just use Emacs as an editor and use the Terminal (I am using a Mac) to execute the script? What is the better practice? Thank you!
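As a concrete instance of the softmax example mentioned in the answer above, the kind of helper worth keeping in its own module and importing elsewhere (a minimal sketch; the module name mymath is made up, and subtracting the max is the usual numerical-stability trick):

import math

def softmax(values):
    # Map a K-dimensional vector to one whose elements lie in (0, 1)
    # and sum to 1. Subtracting the max first avoids overflow.
    m = max(values)
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# From another script you would simply: from mymath import softmax
print(softmax([1.0, 2.0, 3.0]))  # three probabilities summing to 1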
Running an R script from command line (to execute from python)
38,856,331
1
4
3,887
0
python,r,shell,command-line
You probably already have R, since you can already run your script. All you have to do is find its binaries (the Rscript.exe file). Then open the Windows command line ([Win] + [R] > type in "cmd" > [Enter]) and enter the full path to Rscript.exe, followed by the full path to your script.
0
1
0
0
2016-08-09T16:42:00.000
3
1.2
true
38,856,271
0
0
0
2
I'm currently trying to run an R script from the command line (my end goal is to execute it as the last line of a Python script). I'm not sure what a batch file is, or how to make my R script 'executable'. Currently it is saved as a .R file. It works when I run it from R. How do I execute this from the Windows command prompt? Do I need to download something called Rscript.exe? Do I just save my R script as an .exe file? Please advise on the easiest way to achieve this. R: version 3.3; Python: version 3.x; OS: Windows
Running an R script from command line (to execute from python)
38,856,393
3
4
3,887
0
python,r,shell,command-line
You already have Rscript; it came with your version of R. If R.exe, Rgui.exe, etc. are in your path, then so is Rscript.exe. Your call from Python could just be Rscript myFile.R. Rscript is much better than R CMD BATCH and other very old and outdated usage patterns.
0
1
0
0
2016-08-09T16:42:00.000
3
0.197375
false
38,856,271
0
0
0
2
I'm currently trying to run an R script from the command line (my end goal is to execute it as the last line of a Python script). I'm not sure what a batch file is, or how to make my R script 'executable'. Currently it is saved as a .R file. It works when I run it from R. How do I execute this from the Windows command prompt? Do I need to download something called Rscript.exe? Do I just save my R script as an .exe file? Please advise on the easiest way to achieve this. R: version 3.3; Python: version 3.x; OS: Windows
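Since the asker's end goal is to run the R script as the last line of a Python script, here is a minimal sketch using the standard library (the script path is a placeholder; on Windows, use the full path to Rscript.exe if it is not on PATH):

import subprocess

# Runs the R script; raises CalledProcessError on a non-zero exit.
subprocess.check_call(["Rscript", "C:/path/to/myFile.R"])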
How can I send HTTP broadcast message with tornado?
38,866,860
1
1
544
0
python-2.7,http,tornado,broadcast
Short answer: you might be interested in WebSockets. Tornado seems to have support for this. Longer answer: I assume you're referring to broadcast from the server to all the clients. Unfortunately that's not doable conceptually in HTTP/1.1 because of the way it's thought out. The client asks something of the server, and the server responds, independently of all the others. Furthermore, while there is no request going on between a client and a server, that relationship can be said to not exist at all. So if you were to broadcast, you'd be missing out on clients not currently communicating with the server. Granted, things are not as simple. Many clients keep a long-lived TCP connection when talking to the server, and pipeline HTTP requests for it on that. Also, a single request is not atomic, and the response is sent in packets. People implemented server-push/long-polling before WebSockets or HTTP/2 with this approach, but there are better ways to go about this now.
0
1
0
0
2016-08-10T07:18:00.000
2
1.2
true
38,866,649
0
0
0
1
I have a Tornado HTTP server. How can I implement broadcast messages with the Tornado server? Is there a function for that, or do I just have to send normal HTTP messages to all clients in a loop? I think if I send a normal HTTP message, the server has to wait for the response, which is not really the concept of broadcast. Or do I need a third-party option for broadcast? Please give me any suggestion for implementing broadcast messages.
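For the WebSocket route the accepted answer recommends, a minimal broadcast sketch using Tornado's own websocket module (handler name, route, and port are made up for illustration):

import tornado.ioloop
import tornado.web
import tornado.websocket

clients = set()  # all currently connected WebSocket clients

class BroadcastHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        clients.add(self)

    def on_close(self):
        clients.discard(self)

def broadcast(message):
    # Push the same message to every connected client.
    for client in clients:
        client.write_message(message)

app = tornado.web.Application([(r"/ws", BroadcastHandler)])
app.listen(8888)
tornado.ioloop.IOLoop.current().start()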
Execution of a Python Script from PHP
38,870,240
1
1
173
0
php,android,python,linux,exec
First, check your Python path using the which python command and check that the result is /usr/bin/python. Check your TestCode.py: if you have written #!/usr/bin/sh, replace it with #!/usr/bin/bash. Then run these commands: exec('/usr/bin/python /var/www/html/Source/TestCode.py', $result); echo $result
0
1
0
1
2016-08-10T09:34:00.000
2
0.099668
false
38,869,507
0
0
0
1
I am making an android application in which I am first uploading the image to the server and on the server side, I want to execute a Python script from PHP. But I am not getting any output. When I access the Python script from the command prompt and run python TestCode.py it runs successfully and gives the desired output. I'm running Python script from PHP using the following command: $result = exec('/usr/bin/python /var/www/html/Source/TestCode.py'); echo $result However, if I run a simple Python program from PHP it works. PHP has the permissions to access and execute the file. Is there something which I am missing here?
Is there a way to schedule sending an e-mail through Google App Engine Mail API (Python)?
38,884,139
2
0
197
0
python,email,google-app-engine,cron
You can easily accomplish what you need with Task API. When you create a task, you can set an ETA parameter (when to execute). ETA time can be up to 30 days into the future. If 30 days is not enough, you can store a "send_email" entity in the Datastore, and set one of the properties to the date/time when this email should be sent. Then you create a cron job that runs once a month (week). This cron job will retrieve all "send_email" entities that need to be send the next month (week), and create tasks for them, setting ETA to the exact date/time when they should be executed.
0
1
0
1
2016-08-10T18:04:00.000
2
1.2
true
38,880,555
0
0
1
1
I want to be able to schedule an e-mail or more of them to be sent on a specific date, preferably using GAE Mail API if possible (so far I haven't found the solution). Would using Cron be an acceptable workaround and if so, would I even be able to create a Cron task with Python? The dates are various with no specific pattern so I can't use the same task over and over again. Any suggestions how to solve this problem? All help appreciated
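A minimal sketch of the accepted answer's Task API approach on GAE Python (the /send_email handler and its parameters are hypothetical):

import datetime
from google.appengine.api import taskqueue

# Schedule a task that hits /send_email 10 days from now; the ETA may
# be up to 30 days in the future.
taskqueue.add(
    url="/send_email",  # hypothetical handler that actually sends the mail
    params={"to": "user@example.com"},
    eta=datetime.datetime.utcnow() + datetime.timedelta(days=10),
)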
How to get Python 3.5 and Anaconda 3.5 running on ubuntu 16.04?
46,602,056
0
1
3,184
0
python,python-2.7,python-3.x,ubuntu,anaconda
Use Anaconda version Anaconda3-4.2.0-Linux-x86_64.sh from the Anaconda installer archive. This comes with Python 3.5. This worked for me.
0
1
0
0
2016-08-10T20:22:00.000
2
0
false
38,882,845
1
0
0
1
Anaconda for python 3.5 and python 2.7 seems to install just as a drop in folder inside my home folder on Ubuntu. Is there an installed version of Anaconda for Ubuntu 16? I'm not sure how to ask this but do I need python 3.5 that comes by default if I am also using Anaconda 3.5? It seems like the best solution is docker these days. I mean I understand virtualenv and virtualenvwrapper. However, sometimes I try to indicate in my .bashrc that I want to use python 3.5 and yet I'll use the command mkvirtualenv and it will start installing the python 2.7 versions of python. Should I choose either Anaconda or the version of python installed with my OS from python.org or is there an easy way to manage many different versions of Python? Thanks, Bruce
Data analysis of log files – How to find a pattern?
38,886,144
0
0
2,344
0
python,logging,windows-ce,data-analysis
There is no input data at all in this problem, so this answer will be basically pure theory, a little collection of ideas you could consider. To analyze patterns out of a bunch of logs you could definitely create some graphs displaying relevant data, which could help narrow the problem; Python is very good for these kinds of tasks. You could also transform/insert the logs into databases; that way you'd be able to query the relevant suspicious events much faster and even compare all your logs at scale. A simpler approach could be just focusing on a single log showing the crash: instead of wasting a lot of effort or resources trying to find some kind of generic pattern, start by reading through one log in order to catch suspicious "events" which could produce the crash. My favourite approach for these types of tricky problems is different from the previous ones: instead of focusing on analyzing or even parsing the logs, I'd just try to reproduce the bug(s) in a deterministic way locally (you don't even need to have the source code). Sometimes it's really difficult to replicate the production environment in your dev environment, but it is definitely time well invested. All the effort you put into this process will help you to solve not only these bugs but to improve your software much faster. Remember, the more times you're able to iterate, the better. Another approach could be coding a little script which would allow you to replay the logs which crashed; I'm not sure that will be easy in your environment, though. Usually this strategy works quite well with production software using web services, where there will be a lot of tuples of data requests and retrievals. In any case, without seeing the type of data from your logs I can't be more specific nor give many more concrete details.
0
1
0
0
2016-08-11T01:30:00.000
2
0
false
38,885,944
0
0
0
1
My company has slightly more than 300 vehicle based windows CE 5.0 mobile devices that all share the same software and usage model of Direct Store Delivery during the day then doing a Tcom at the home base every night. There is an unknown event(s) that results in the device freaking out and rebooting itself in the middle of the day. Frequency of this issue is ~10 times per week across the fleet of computers that all reboot daily, 6 days a week. The math is 300*6=1800 boots per week (at least) 10/1800= 0.5%. I realize that number is very low, but it is more than my boss wants to have. My challenge, is to find a way to scan through several thousand logfille.txt files and try to find some sort of pattern. I KNOW there is a pattern here somewhere. I’ve got a couple ideas of where to start, but I wanted to throw this out to the community and see what suggestions you all might have. A bit of background on this issue. The application starts a new log file at each boot. In an orderly (control) log file, you see the app startup, do its thing all day, and then start a shutdown process in a somewhat orderly fashion 8-10 hours later. In a problem log file, you see the device startup and then the log ends without any shutdown sequence at all in a time less than 8 hours. It then starts a new log file which shares the same date as the logfile1.old that it made in the rename process. The application that we have was home grown by windows developers that are no longer with the company. Even better, they don’t currently know who has the source at the moment. I’m aware of the various CE tools that can be used to detect memory leaks (DevHealth, retail messages, etc..) and we are investigating that route as well, however I’m convinced that there is a pattern to be found, that I’m just not smart enough to find. There has to be a way to do this using Perl or Python that I’m just not seeing. Here are two ideas I have. Idea 1 – Look for trends in word usage. Create an array of every unique word used in the entire log file and output a count of each word. Once I had a count of the words that were being used, I could run some stats on them and look for the non-normal events. Perhaps the word “purple” is being used 500 times in a 1000 line log file ( there might be some math there?) on a control and only 4 times on a 500 line problem log? Perhaps there is a unique word that is only seen in the problem files. Maybe I could get a reverse “word cloud”? Idea 2 – categorize lines into entry-type and then look for trends in the sequence of type of entry type? The logfiles already have a predictable schema that looks like this = Level|date|time|system|source|message I’m 99% sure there is a visible pattern here that I just can’t find. All of the logs got turned up to “super duper verbose” so there is a boatload of fluff (25 logs p/sec , 40k lines per file) that makes this even more challenging. If there isn’t a unique word, then this has almost got to be true. How do I do this? Item 3 – Hire a windows CE platform developer Yes, we are going down that path as well, but I KNOW there is a pattern I’m missing. They will use the tools that I don’t have) or make the tools that we need to figure out what’s up. I suspect that there might be a memory leak, radio event or other event that platform tools I’m sure will show. Item 4 – Something I’m not even thinking of that you have used. 
There have got to be tools out there that do this that aren't as prestigious as a well-executed python script, and I'm willing to go down that path, I just don't know what those tools are. Oh yeah, I can't post log files to the web, so don't ask. The users are promising to report trends when they see them, but I'm not exactly hopeful on that front. All I need to find is either a pattern in the logs, or steps to duplicate it. So there you have it. What tools or techniques can I use to even start on this?
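Idea 1 from the question (word-usage counts) takes only a few lines with the standard library; a minimal sketch, assuming one log entry per line in the Level|date|time|system|source|message schema described above (file names are placeholders):

import re
from collections import Counter

def word_counts(path):
    # Count every unique word in a log file.
    counts = Counter()
    with open(path) as f:
        for line in f:
            counts.update(re.findall(r"\w+", line.lower()))
    return counts

control = word_counts("control_logfile.txt")
problem = word_counts("problem_logfile.txt")

# Words that appear only in the problem log are prime suspects.
only_in_problem = set(problem) - set(control)
print(sorted(only_in_problem))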
Run a python script from bamboo
40,619,044
0
2
5,082
0
python,python-2.7,bamboo
I run a lot of python tasks from bamboo, so it is possible. Using the Script task is generally painless. You should be able to use your script task to run the commands directly and have stdout written to the logs. Since this is true, you can run: 'which python' -- outputs the path of the python that is being run. 'pip list' -- outputs a list of which modules are installed with pip. You should verify that the output from the above commands matches the output when run from the server. I'm guessing they won't match up, and once that is addressed, everything will work fine. If not, comment back and we can look at a few other things. For the future, there are a handful of different ways you can package things with python which could assist with this problem (e.g. automatically installing missing modules, etc).
0
1
0
1
2016-08-11T22:06:00.000
2
0
false
38,906,844
1
0
0
1
I'm trying to run a python script from bamboo. I created a script task and wrote inline "python myFile.py". Should I be listing the full path for python? I changed the working directory to the location of myFile.py so that is not a problem. Is there anything else I need to do within the configuration plan to properly run this script? It isn't running but I know it should be running because the script works fine from terminal on my local machine. Thanks
Implications of flushing stdout after each print
38,926,942
0
1
373
0
python,python-2.7
Basically, the only drawback is that it's potentially slower. The buffering on stdout allows your program to run ahead of the physical I/O, which is slow. However, if you're sending it to less, you're operating at human speeds anyway, so it's not going to make a difference.
0
1
0
0
2016-08-12T21:48:00.000
1
0
false
38,926,917
1
0
0
1
I have a script whose output is piped to less, and I would like the script to print its statements into less as they come, rather than all at once. I found that if I flush stdout (via sys.stdout.flush()) after each print, the line is displayed in less when flushed (obviously). My question is: are there any drawbacks to doing this? My script has hundreds of thousands of lines being printed; would flushing after each line cause problems? My impression is yes, because you take up extra resources for displaying each time you flush, as well as completely circumventing the idea of buffered output.
How to make a port forward rule in Python 3 in windows?
38,932,875
0
3
707
0
python,sockets,batch-file,portforwarding
I'm not sure that's possible. As far as I know, ports aren't actually a physical thing; they're an abstraction, a convention adopted by today's protocols and supported by your operating system, that allows you to have multiple connections per machine. A socket is an object provided to you by the operating system that implements some protocol stack and allows you to communicate with other systems; the socket API lets your program use that functionality to talk to other computers. Port forwarding is not an actual socket operation: it means that the router's operating system, when receiving incoming packets destined for some port, will drop them if the port is not open. Think of your router as a bouncer or doorman standing in the entrance of a building: the building is your LAN, your apartment is your machine, and rooms within your apartment are ports. A package or mail arrives for the doorman addressed to port X; a port rule means "traffic to IP Y and port X of the router is forwarded to IP Z and port A of some computer within the LAN" (this is what provides and implements NAT/PAT). So, going back to my analogy: the doorman receives mail destined for some port and checks whether that port is open; if not, he drops the mail, and if it is, he allows it to go to some room within some apartment (sounds complex, I know, apologies). My point is: every router implements port rules or port blocking a little differently, and there is no standard protocol for doing it. Sockets let your program communicate with others; you could create some server-client setup with sockets, but that doesn't let you reconfigure the router itself, and I'm not sure that's possible in general. What you COULD do is this: every router provides some HTTP (web) client that is used to create and forward ports; if you read up on your router, you may be able to get access to that client and write some Python HTTP script that forwards ports automatically. Another point I forgot: you need to make sure your own firewall isn't blocking the ports, but there's no need for sockets/Python to do that, just configure it manually.
0
1
1
0
2016-08-13T09:00:00.000
2
0
false
38,931,064
0
0
0
2
Purpose: I'm making a program that will set up a dedicated server (software made by game devs) for a game with minimal effort. One common step in making the server functional is port forwarding by making a port forward rule on a router. Me and my friends have been port forwarding through conventional means for many years with mixed results. As such I am hoping to build a function that will forward a port on a router when given the internal ip of the router, the internal ip of the current computer,the port and the protocol. I have looked for solutions for similar problems, but I found the solutions difficult to understand since i'm not really familiar with the socket module. I would prefer not to use any programs that are not generally installed on windows since I plan to have this function work on systems other than my own. Approaches I have explored: Creating a bat file that issues commands by means of netsh, then running the bat. Making additions to the settings in a router found under Network -> Network Infrastructure (I do not know how to access these settings programmaticly). (I'm aware programs such as GameRanger do this) Using the Socket Module. If anyone can shed some light how I can accomplish any of the above approaches or give me some insight on how I can approach this problem another way I would greatly appreciate it. Thank you. Edit: Purpose
How to make a port forward rule in Python 3 in windows?
38,932,807
0
3
707
0
python,sockets,batch-file,portforwarding
You should first read some information about UPnP (router port-forwarding) and note that it's normally disabled. Depending on your needs, you could also have a look at ssh reverse tunnels, and at ssh in general, as it can solve many problems. But you will see that doing advanced network things on Windows is a bad idea; at the least you should use Cygwin. And if you are really interested in network traffic at all, Wireshark should be installed.
0
1
1
0
2016-08-13T09:00:00.000
2
0
false
38,931,064
0
0
0
2
Purpose: I'm making a program that will set up a dedicated server (software made by game devs) for a game with minimal effort. One common step in making the server functional is port forwarding by making a port forward rule on a router. Me and my friends have been port forwarding through conventional means for many years with mixed results. As such I am hoping to build a function that will forward a port on a router when given the internal ip of the router, the internal ip of the current computer,the port and the protocol. I have looked for solutions for similar problems, but I found the solutions difficult to understand since i'm not really familiar with the socket module. I would prefer not to use any programs that are not generally installed on windows since I plan to have this function work on systems other than my own. Approaches I have explored: Creating a bat file that issues commands by means of netsh, then running the bat. Making additions to the settings in a router found under Network -> Network Infrastructure (I do not know how to access these settings programmaticly). (I'm aware programs such as GameRanger do this) Using the Socket Module. If anyone can shed some light how I can accomplish any of the above approaches or give me some insight on how I can approach this problem another way I would greatly appreciate it. Thank you. Edit: Purpose
Spotify - access token from command line
39,049,945
0
0
422
0
python,command-line,spotify,spotipy
Copy and paste the entire redirect URI from your browser to the terminal (when prompted) after successful authentication. Your access token will be cached in the working directory (look for .cache-<username>).
0
1
1
0
2016-08-14T04:19:00.000
1
0
false
38,939,085
0
0
0
1
I am testing my app using the terminal, which is quite handy in a pre-development phase. So far, I have used spotipy.Spotify(client_credentials_manager=client_credentials_manager) within my python scripts in order to access data. SpotifyClientCredentials() requires client_id and client_secret as parameters. Now I need to access analysis_url data, which requires an access token. Is there a way to include this access token requirement via my python script run at the command line, or do I have to build an app in the browser just to do a simple test? Many thanks in advance.
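A minimal sketch of fetching a user token from a command-line run with spotipy's util helper (username, credentials, and redirect URI are placeholders; on first run it opens the authorization page and prompts you to paste the redirect URL back into the terminal, then caches the token as the answer describes):

import spotipy
import spotipy.util as util

token = util.prompt_for_user_token(
    "your_username",                     # placeholder
    scope="user-library-read",
    client_id="your_client_id",          # placeholder
    client_secret="your_client_secret",  # placeholder
    redirect_uri="http://localhost:8888/callback",
)

sp = spotipy.Spotify(auth=token)
# Token-protected endpoints are now available, e.g.:
results = sp.current_user_saved_tracks(limit=5)
print(results)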
monitoring jboss process with icinga/nagios
39,367,798
0
0
1,024
0
python,shell,jboss,nagios,icinga
I did this by monitoring the jboss process using ps aux | grep "\-D\[Standalone\]" for standalone mode and ps aux | grep "\-D\[Server" for domain mode.
0
1
0
1
2016-08-14T18:26:00.000
3
1.2
true
38,945,299
0
0
1
1
I want to monitor whether jboss is running or not through Icinga. I don't want to check /etc/init.d/jboss status, as sometimes the service is up but some of the jboss processes are killed or hung and jboss doesn't work properly. I would like to create a script to monitor all of its processes from ps output. But a few servers are running in standalone mode, some in domain (master, slave) mode, and the processes are different in each case. I'm not sure where to start. Anyone here who did the same earlier? Just looking for ideas on how to do this.
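A minimal sketch of wrapping that check as a Nagios/Icinga plugin in Python (exit code 0 means OK and 2 means CRITICAL, per the plugin convention; the marker string matches the standalone-mode grep from the answer):

import subprocess
import sys

# Look for the JBoss standalone process marker in the process list.
ps = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
output = ps.communicate()[0].decode("utf-8", "replace")

if "-D[Standalone]" in output:
    print("OK - jboss standalone process running")
    sys.exit(0)  # Nagios/Icinga OK
else:
    print("CRITICAL - jboss standalone process not found")
    sys.exit(2)  # Nagios/Icinga CRITICAL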
bash on Ubuntu on windows Linux, folder recognition, and running Python scripts
39,723,038
0
0
970
0
python,linux,bash,ubuntu,windows-subsystem-for-linux
Looks like you are having permissions issues. To see everything in your home folder, try ls -al; to change permissions, check out the chmod command.
0
1
0
0
2016-08-15T00:57:00.000
3
0
false
38,948,021
0
0
0
2
I'm new to Linux. I recently downloaded Bash on Ubuntu on Windows 10 (after the Anniversary edition update to Windows 10). Since this update is relatively new, there is not much online regarding troubleshooting. There are two things I need help on: (1) When I go to the home folder, which seems to be "C:\Users\user\AppData\Local\lxss\home\user" and I add a new folder through Windows, this folder does not show up in Linux with the "ls" command. But when I add a directory using "mkdir" in Linux, the "ls" command shows this folder. Why is it behaving like this? Am I limited to creating folders through "mkdir" when working in this folder? (2) I have a Python script sitting in that same folder that I'm trying to run and again it is not being found by Linux or the Python interpreter started in Bash on Ubuntu on Windows. I have Python 3 installed (Anaconda) and I'm able to type commands directly in the Python interpreter and it's working. However, I would like to run scripts in files. Please let me know if more information is needed. Thanks.
bash on Ubuntu on windows Linux, folder recognition, and running Python scripts
51,154,550
1
0
970
0
python,linux,bash,ubuntu,windows-subsystem-for-linux
The reason why ls is not showing anything is that it shows the Linux directory structure. Try changing to the Windows directory, in this example the C drive: cd /mnt/c Does ls show a folder structure now?
0
1
0
0
2016-08-15T00:57:00.000
3
0.066568
false
38,948,021
0
0
0
2
I'm new to Linux. I recently downloaded Bash on Ubuntu on Windows 10 (after the Anniversary edition update to Windows 10). Since this update is relatively new, there is not much online regarding troubleshooting. There are two things I need help on: (1) When I go to the home folder, which seems to be "C:\Users\user\AppData\Local\lxss\home\user" and I add a new folder through Windows, this folder does not show up in Linux with the "ls" command. But when I add a directory using "mkdir" in Linux, the "ls" command shows this folder. Why is it behaving like this? Am I limited to creating folders through "mkdir" when working in this folder? (2) I have a Python script sitting in that same folder that I'm trying to run and again it is not being found by Linux or the Python interpreter started in Bash on Ubuntu on Windows. I have Python 3 installed (Anaconda) and I'm able to type commands directly in the Python interpreter and it's working. However, I would like to run scripts in files. Please let me know if more information is needed. Thanks.
Connect to a Bluetooth LE device using bluez python dbus interface
38,997,649
0
3
3,227
0
python,dbus,bluez,gatt
See 'test/example-gatt-client' from bluez package
0
1
0
0
2016-08-15T10:11:00.000
2
0
false
38,953,175
0
0
0
1
I would like to connect to a Bluetooth LE device and receive notifications from it in Python. I would like to use the BlueZ dbus API, but can't find an example I can understand. :-) With gatttool, I can use the following command: gatttool -b C4:8D:EE:C8:D2:D8 --char-write-req -a 0x001d -n 0100 --listen How can I do the same in Python, using the dbus API of BlueZ?
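Beyond the example-gatt-client pointer in the answer, here is a heavily hedged sketch of the BlueZ 5 D-Bus API for the gatttool command in the question; the characteristic's object path is hypothetical and must be discovered at runtime via the ObjectManager, and StartNotify replaces the manual 0100 descriptor write:

import dbus
import dbus.mainloop.glib
from gi.repository import GLib

dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()

# Hypothetical object path; discover the real one by walking
# org.freedesktop.DBus.ObjectManager on the org.bluez service.
CHAR_PATH = "/org/bluez/hci0/dev_C4_8D_EE_C8_D2_D8/service001c/char001d"

char = dbus.Interface(bus.get_object("org.bluez", CHAR_PATH),
                      "org.bluez.GattCharacteristic1")

def on_properties_changed(interface, changed, invalidated):
    # Notifications arrive as changes to the characteristic's Value.
    if "Value" in changed:
        print("notification: %r" % bytes(bytearray(changed["Value"])))

bus.add_signal_receiver(
    on_properties_changed,
    dbus_interface="org.freedesktop.DBus.Properties",
    signal_name="PropertiesChanged",
    path=CHAR_PATH,
)

# StartNotify performs the client-characteristic-configuration write
# (the 0100 that gatttool sends to handle 0x001d) and enables notifications.
char.StartNotify()

GLib.MainLoop().run()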
Pycharm edu terminal plugin missing
53,610,811
0
1
628
0
python,django,intellij-idea,ide,pycharm
Go to File > Settings > Plugins > Browse repositories, then search for and install Native Terminal. This installs a terminal which uses the native Windows terminal. A small black button will appear on the toolbar. If you have not enabled the toolbar, here is the trick: under View | Toolbar, check the toolbar option and the cmd button will be shown on the bar.
0
1
0
0
2016-08-16T03:16:00.000
2
0
false
38,966,114
1
0
0
2
First time posting, let me know how I can improve my questions. I have installed PyCharm Edu 3.0 and Anaconda 3 on an older laptop. I am attempting to access the embedded terminal in the IDE and I am unable to launch it. I have searched through similar questions here and the JetBrains docs, and the common knowledge seems to be installing the "Terminal" Plugin. My version of PyCharm does not have this plugin, and I am unable to find it in the JetBrains plugin list or community repositories. If anyone has experienced this before or knows where I am going wrong attempting to launch the terminal I would appreciate the feedback.
Pycharm edu terminal plugin missing
43,782,480
-1
1
628
0
python,django,intellij-idea,ide,pycharm
Click Preferences and choose Plugins. Next click Install JetBrains plugin and choose Command Line Tool Support. I hope this helps.
0
1
0
0
2016-08-16T03:16:00.000
2
-0.099668
false
38,966,114
1
0
0
2
First time posting, let me know how I can improve my questions. I have installed PyCharm Edu 3.0 and Anaconda 3 on an older laptop. I am attempting to access the embedded terminal in the IDE and I am unable to launch it. I have searched through similar questions here and the JetBrains docs, and the common knowledge seems to be installing the "Terminal" Plugin. My version of PyCharm does not have this plugin, and I am unable to find it in the JetBrains plugin list or community repositories. If anyone has experienced this before or knows where I am going wrong attempting to launch the terminal I would appreciate the feedback.
Should I create a volume for project files when using Docker with git?
38,978,850
0
0
86
0
python,git,docker,docker-compose,devops
The best solution is B, except you will not use a volume in production. Docker Compose will also let you easily mount your code as a volume, but you only need this for development. In production you will COPY your files into the container.
0
1
0
0
2016-08-16T14:57:00.000
1
0
false
38,978,228
1
0
0
1
I want to Dockerize a project. I have my project files in a git repo. The project is written in python, and requires a virtual environment to be activated and a pip installation of requirements after the git clone from the repo. The container is going to be a development container, so I would need a basic set of software. Of course I also need to modify the project files, push and pull to git as I prefer. Solution A This solution build everything on runtime, and nothing is kept if the container is restarted. It would require the machine to install all the requirements, clone the project every time the container is started. Use the Dockerfile for installing python, virtualenv, etc. Use the Dockerfile for cloning the project from git, installing pip requirements. Use docker compose for setting up the environment, memory limits, cpu shares, etc. Solution B This solution clones the project from git once manually, then the project files are kept in a volume, and you can freely modify them, regardless of container state. Use the Dockerfile for installing python, virtualenv, etc. Use docker compose for setting up the environment, memory limits, cpu shares, etc. Create a volume that is mounted on the container Clone the project files into the volume, set up everything once. Solution C There might be a much better solution that I have not thought of, if there is, be sure to tell.
IDLE's subprocess didn't make connection. Either IDLE can't start a subprocess or personal firewall software is blocking the connection
42,425,154
0
0
5,220
0
python,macos,pygame,osx-elcapitan
Okie dokie, I figured it out. You have to download a version of IDLE that is 2.7.(any). IMPORTANT: the version has to be 32-bit, not 64-bit. Just search the IDLE website for IDLE 2.7.12 32-bit, then download that. Finally, download Pygame for 2.7. Thanks everyone that helped out! P.S. Some IDLE versions didn't work for me; however, 2.7.8 and 2.7.13 did.
0
1
0
0
2016-08-17T02:41:00.000
2
0
false
38,987,289
0
0
0
2
I started programming a game on a Mac. Then, I brought the same EXACT code to another Mac. I got many, many different errors with Pygame saying it wasn't installed, EVEN THOUGH IT WAS! Anyway, I fixed those errors, then I went to run the module and the window appeared, then it crashed and gave me this message: IDLE's subprocess didn't make connection. Either IDLE can't start a subprocess or personal firewall software is blocking the connection. I never got this message before. However, it continues to crash. I have killed IDLE using the Activity Monitor. There weren't any files in the directory. I have deleted all of the Python files that I have created. Trashed every .pyc file. The Mac I am using is on El Capitan; Python is at 2.7.12. Like I said, the code has not changed AT ALL from the first computer. However, games that are pre-installed with IDLE work perfectly. I have moved the program to the same folders as the games. I copied the content from my program to another file, still nothing. All help is appreciated, thank you :)
IDLE's subprocess didn't make connection. Either IDLE can't start a subprocess or personal firewall software is blocking the connection
39,058,369
0
0
5,220
0
python,macos,pygame,osx-elcapitan
The most likely reason that you're getting this error is that you're not the administrator of the computer and you're trying to run a script from your local disk. There are a few things that you could do to solve this. 1. Move the .py file: before jumping to the method below, simply try moving your python file to a different location on your drive, then try running the script with the python IDLE. If your script still won't run, or you must have the script on your local drive, see the second method below. 2. Run the script from the command prompt\terminal: to run the script from your command prompt\terminal, first find the path to your python executable. In my case, mine is: C:\Users\[insert user name here]\AppData\Local\Programs\Python\Python35\python.exe. Copy and paste that entire path into your command prompt\terminal window. Next, find the path to your python file. For example, the path to my script is: C:\test.py. It is important to note that the path to your python executable cannot contain unquoted spaces. Next, copy and paste the path to your python file into your command prompt\terminal window. When finished, the command you made should look something like this: C:\Users\[insert user name here]\AppData\Local\Programs\Python\Python35\python.exe C:\test.py. Next, press enter and watch your python script run.
0
1
0
0
2016-08-17T02:41:00.000
2
0
false
38,987,289
0
0
0
2
I started programming a game on a Mac. Then, I brought the same EXACT code to another Mac. I got many, many different errors with Pygame saying it wasn't installed, EVEN THOUGH IT WAS! Anyway, I fixed those errors, then I went to run the module and the window appeared, then it crashed and gave me this message: IDLE's subprocess didn't make connection. Either IDLE can't start a subprocess or personal firewall software is blocking the connection. I never got this message before. However, it continues to crash. I have killed IDLE using the Activity Monitor. There weren't any files in the directory. I have deleted all of the Python files that I have created. Trashed every .pyc file. The Mac I am using is on El Capitan; Python is at 2.7.12. Like I said, the code has not changed AT ALL from the first computer. However, games that are pre-installed with IDLE work perfectly. I have moved the program to the same folders as the games. I copied the content from my program to another file, still nothing. All help is appreciated, thank you :)
SSIS Execute Process Task Hanging when Standard Error Variable Provided
38,997,349
1
0
589
0
python,ssis
This is actually now solved - or rather, never actually broken; I was writing to a parent package variable (i.e. by creating the variable in the child package, configuring the task, setting delay validation to true and then deleting the variable) - it appears when I do this, it takes SSIS a long time to write to it! If i use a child package variable, it completes straight away but it takes 1-2 minutes for the parent package variable to be written to. At least it's completing.
0
1
0
0
2016-08-17T10:58:00.000
1
0.197375
false
38,994,725
1
0
0
1
I am using SSIS's Execute Process Task to execute a compiled python script. The script executes as expected and completes as expected with either success or failure. However, when I configure a variable to catch Standard Error or Standard Output, the application hangs. The command prompt flashes up and down indicating that the execution has completed but then the SSIS task itself never completes. To reiterate, when I don't configure the variable, there is no issue and the task finishes as expected. I have also debugged the execution of the script independently and I can verify that: Status code is 0 when success. Standard error contains text. Any ideas what is causing the task to hang?
How to open python2.7 in spyder, jupyter, qtconsole from Anaconda Navigator installed with python3? (OS X)
39,654,422
1
0
761
0
python,anaconda
Type the following commands in the terminal: source activate python2, then spyder. Spyder will be launched with the python2 environment. With this method you do not use the Anaconda Navigator, but at least you can use Spyder with your python2 environment.
0
1
0
0
2016-08-17T19:31:00.000
1
1.2
true
39,004,849
1
0
0
1
Under OS X (10.11.6) I installed the current Python 3.5 version of Anaconda. Anaconda Navigator then works just fine to launch sypder, jupyter,or qtconsole with python 3.5.2 running. At the command line I also created a python 2.7 environment (conda create --name python2 python=2.7 anaconda). But now when I open Anaconda Navigator, go to Environments in the left pane, and select my python2 environment, still if I go back to Home and launch sypder, jupyter, qtconsole, the python version shown is still 3.5.2. I tried closing Anaconda Navigator, executing "source activate python2" at the command line, and reopening Anaconda Navigator, and again selecting python2 from Environments there. But still sypder, jupyter, qtconsole open with python 3.5.2. How do I launch with python 2.7?
When using Docker Containers, where are shared Python libraries stored?
39,005,477
4
2
684
0
python,docker,virtualenv
Just like everything else in a Docker container, your libraries are inside the container - unless you mount a host volume, or a volume from another container, of course. On the plus side, though, they're copy-on-write, so if you're not making changes to the libraries in your container (why would you do that anyway?) then you can have 100 running containers from the same image and they don't require any extra disk space. Some people advocate for using a virtualenv within the container - there are pros and cons to the approach, and I don't think there's a one-size-fits-all answer, though I would lean toward not having a virtualenv.
0
1
0
0
2016-08-17T20:06:00.000
1
1.2
true
39,005,380
1
0
0
1
In an environment where Docker Containers are used for each application, where are Python's shared libraries stored? Are they stored separately within each Docker Container, or shared by the host O/S? Additionally I'm wondering if it would be best practice to use a virtual environment regardless?
Running multiple Python scripts in Aptana 3
42,376,440
1
0
81
0
python,aptana
A very belated response, but it sounds like your issue is that you have the 'Show Console When Standard Out Changes' option selected. Hope that helps, or that you found the solution on your own. Cheers!
0
1
0
0
2016-08-18T07:15:00.000
1
0.197375
false
39,012,046
1
0
0
1
If I run two different Python scripts simultaneously, I see a console window which shows output from each of the scripts alternately, switching back and forth every second or so. If I open a second console window before running the second script, the same thing happens - both console windows switch between the 2 scripts. How can I get each script to output to its own console window?
Where to place PDCurses for use with UniCurses
50,429,379
0
1
283
0
python,pdcurses,unicurses
This is impossible, because it was a build for Python 3.4!
0
1
0
0
2016-08-18T07:34:00.000
1
1.2
true
39,012,383
1
0
0
1
I want to use UniCurses on Windows. For this, I downloaded various ZIP archives: pdc34dll.zip, pdc34dlls.zip, pdc34dllu.zip, pdc34dllw.zip and pdcurses34.zip. The last one was just the source. I tried placing the files from the pdc34dll folder, extracted from pdc34dll.zip, into the main directory of the Python 3.5.2 installation folder, into the directory where UniCurses is installed (C:\programming\python\352.lib.site-packages\unicurses) and into the System32 directory (C:\windows\system32). But I still get the message that pdcurses.dll cannot be found. What am I doing wrong, and what should I do to solve this problem properly? Thanks for the help.
(centos6.6) before updating to python2.7.3, it was python 2.6.6. When running pybot --version, errors came out
39,104,313
0
0
127
0
python,linux,robotframework
I installed zlib-devel and python-devel with the help of yum, recompiled Python, and finally completed the installation test. Thank you for your answer.
0
1
0
1
2016-08-18T09:33:00.000
2
0
false
39,014,670
0
0
0
2
(CentOS 6.6) Before updating it was Python 2.6.6; I then updated to Python 2.7.3. When running pybot --version, errors came out as follows. I want to set up a test environment of Python 2.7.3 with Robot Framework 2.7.6, paramiko-1.7.4 and pycrypto-2.6. [root@localhost robotframework-2.7.6]# pybot --version Traceback (most recent call last): File "/usr/bin/pybot", line 4, in <module> from robot import run_cli File "/usr/lib/python2.7/site-packages/robot/__init__.py", line 22, in <module> from robot.rebot import rebot, rebot_cli File "/usr/lib/python2.7/site-packages/robot/rebot.py", line 268, in <module> from robot.conf import RebotSettings File "/usr/lib/python2.7/site-packages/robot/conf/__init__.py", line 17, in <module> from .settings import RobotSettings, RebotSettings File "/usr/lib/python2.7/site-packages/robot/conf/settings.py", line 17, in <module> from robot import utils File "/usr/lib/python2.7/site-packages/robot/utils/__init__.py", line 23, in <module> from .compress import compress_text File "/usr/lib/python2.7/site-packages/robot/utils/compress.py", line 25, in <module> import zlib ImportError: No module named zlib
(centos6.6) before updating to python2.7.3, it was python 2.6.6. When running pybot --version, errors came out
39,037,542
0
0
127
0
python,linux,robotframework
Reasons could be any of the following: either the python files (at least one) have lost their formatting (Python is prone to formatting errors); at least one installation (python, Robot) doesn't have administrative privileges; or the environment variables (PATH, CLASSPATH, PYTHONPATH) are not set correctly. What does python --version print? If this throws errors, the installation has issues.
0
1
0
1
2016-08-18T09:33:00.000
2
0
false
39,014,670
0
0
0
2
(CentOS 6.6) Before updating it was Python 2.6.6; I then updated to Python 2.7.3. When running pybot --version, errors came out as follows. I want to set up a test environment of Python 2.7.3 with Robot Framework 2.7.6, paramiko-1.7.4 and pycrypto-2.6. [root@localhost robotframework-2.7.6]# pybot --version Traceback (most recent call last): File "/usr/bin/pybot", line 4, in <module> from robot import run_cli File "/usr/lib/python2.7/site-packages/robot/__init__.py", line 22, in <module> from robot.rebot import rebot, rebot_cli File "/usr/lib/python2.7/site-packages/robot/rebot.py", line 268, in <module> from robot.conf import RebotSettings File "/usr/lib/python2.7/site-packages/robot/conf/__init__.py", line 17, in <module> from .settings import RobotSettings, RebotSettings File "/usr/lib/python2.7/site-packages/robot/conf/settings.py", line 17, in <module> from robot import utils File "/usr/lib/python2.7/site-packages/robot/utils/__init__.py", line 23, in <module> from .compress import compress_text File "/usr/lib/python2.7/site-packages/robot/utils/compress.py", line 25, in <module> import zlib ImportError: No module named zlib
Windows Python 64 & 32 bit versions and pip
44,056,779
0
1
6,739
0
python,windows,pip,pyinstaller
I had a similar problem with both 32- and 64-bit versions of Python installed. I found that if I ran the pip install in the command prompt from the location of pip.exe, it worked fine. In my case, the file path was the following: C:\Program Files\Python\3.5\Scripts
0
1
0
0
2016-08-18T10:07:00.000
1
0
false
39,015,410
1
0
0
1
I have made a simple python script and built a 64-bit Windows executable from it via pyinstaller. However, most computers at my office run 32-bit Windows operating systems, thus my program does not work. From what I have read, it is possible to make an executable for 32-bit systems as long as I use the 32-bit version of python. So I went ahead and installed the 32-bit version of python 3.5, but I can't find a way to link pip to the 32-bit version of python so I can install all the necessary modules. Every time I call pip it displays all the modules that are installed in the 64-bit version, even though by default I am running the 32-bit version of python.
Django and celery on different servers and celery being able to send a callback to django once a task gets completed
39,065,804
0
5
1,298
0
python,django,asynchronous,rabbitmq,celery
I've used the following setup on my application: (1) the task is initiated from Django; information is extracted from the model instance and passed to the task as a dictionary (NB: this will be more future-proof, as Celery 4 will default to JSON encoding); (2) the remote server runs the task and creates a dictionary of results; (3) the remote server then calls an update task that is only listened for by a worker on the Django server; (4) the Django worker reads the results dictionary and updates the model. The Django worker listens on a separate queue, though this isn't strictly necessary. The results backend isn't used; the data needed is just passed to the task.
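A hedged sketch of that setup (the broker URL, task names, queue name and payload fields are all hypothetical, not from the answer):

from celery import Celery

app = Celery('tasks', broker='amqp://broker-host//')

@app.task(name='remote.process')              # runs on the remote server
def process(payload):                         # payload: a JSON-safe dict
    result = {'pk': payload['pk'], 'status': 'done'}
    # hand the results back to a task only the Django-side worker consumes
    app.send_task('django.update', args=[result], queue='django')

@app.task(name='django.update')               # runs on a worker next to Django
def update(result):
    pass                                      # look the model up by result['pk'] and save

Django would kick the whole thing off with app.send_task('remote.process', args=[{'pk': obj.pk}]).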
0
1
0
0
2016-08-18T11:59:00.000
2
0
false
39,017,678
0
0
1
1
I have a django project where I am using celery with rabbitmq to perform a set of async tasks. The setup I have planned goes like this: the Django app running on one server; Celery workers and rabbitmq running on another server. My initial issue is: how do I access django models from the celery tasks sitting on another server? And assuming I am not able to access the Django models, is there a way, once a task gets completed, to send a callback to the Django application passing values, so that I can update Django's database based on the values passed?
How do Luigi parameters work?
39,028,831
1
0
425
0
python,luigi
In general, you would not need to pass the parameters for Task A to Task B, but Task B would then need to generate the values of those parameters for Task A. If Task B can not generate those parameters, you would have to setup Task B to take those parameters in from the command line, and then pass them through to the Task A constructor in the requires method.
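A minimal sketch of both options, assuming TaskA's two parameters are a date and an hour (class and parameter names are hypothetical):

import datetime
import luigi

class TaskA(luigi.Task):
    day = luigi.DateParameter()
    hour = luigi.IntParameter()

class TaskB(luigi.Task):
    def requires(self):
        # option 1: TaskB generates the values itself, so it needs no arguments
        now = datetime.datetime.now()
        return TaskA(day=now.date(), hour=now.hour)

Option 2 would declare the same two parameters on TaskB and forward them with return TaskA(day=self.day, hour=self.hour), in which case TaskB is run with --day and --hour arguments on the command line.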
0
1
0
0
2016-08-18T14:18:00.000
2
0.099668
false
39,020,591
0
0
0
1
So I have two tasks (let's say TaskA and TaskB). I want both tasks to run hourly, but TaskB requires TaskA. TaskB does not have any parameters, but TaskA has two parameters for the day and the hour. If I run TaskB on the command line, would I need to pass it arguments?
How can I find an installed driver's version in Python under Windows?
39,023,289
0
0
1,524
0
python-2.7,driver,wmi
This is going to sound different, but I know the PowerShell command will get you the driver version: strCommand = r"powershell.exe ""Get-WmiObject Win32_PnPSignedDriver | select devicename, driverversion | ConvertTo-CSV""". Then you can parse each line of your output. Each line is CSV delimited, so you have the driver name and the driver version. I wrote a quick demo, but since I am still a bit new here my code did not look right. But that is my suggestion.
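A quick Python 2.7 sketch of that demo (untested assumption; -NoTypeInformation is added so the first output line is the CSV header):

import csv
import subprocess

cmd = ['powershell.exe',
       'Get-WmiObject Win32_PnPSignedDriver | '
       'select devicename, driverversion | ConvertTo-CSV -NoTypeInformation']
out = subprocess.check_output(cmd)
for row in csv.reader(out.splitlines()[1:]):   # [1:] skips the header row
    if len(row) >= 2:
        print(row[0] + ' ' + row[1])           # device name, driver version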
0
1
0
0
2016-08-18T15:37:00.000
1
0
false
39,022,296
0
0
0
1
I am trying to call a python [module] method to find the version of a newly installed driver on a Windows computer. I tried WMI_SystemDriver, but it does not provide the version, only other fields not needed by me at this time. Is there a way to see something like: version x.y.z.t? (Question also posted on a Google group - not answered.) Thank you
pycaffe windows - cannot open python27.lib
42,746,031
1
1
1,017
0
python,caffe,pycaffe
I got the same error while building the matcaffe interface with Python 3.5, so I downgraded Anaconda and Python to version 2.7 and it succeeded.
0
1
0
0
2016-08-19T09:16:00.000
1
0.197375
false
39,035,360
1
0
0
1
I am trying to compile pycaffe in Windows 7 using Anaconda 3 and Visual studio 2013. I have set the anaconda path and lib path correctly. When I try to build I am getting the following error: "Error 1 error LNK1104: cannot open file 'python27.lib' D:\caffe-master\windows\caffe\LINK caffe" I am using Python 3.6 but not sure why the build is looking for 2.7 lib. How do I make build pick the correct python lib? Thanks
Giving input to terminal in python
39,064,972
0
0
338
0
python,python-2.7,terminal
You would need to have python integrated into the software. Also, I believe this is a task for GCSE Computing this year, as I was privileged enough to choose what test we are doing and there was a question about serial numbers.
0
1
0
0
2016-08-21T13:36:00.000
2
0
false
39,064,796
1
0
0
1
I'm writing a code to read serial input. Once the serial input has been read, I have to add a time stamp below it and then the output from a certain software. To get the output from the software, I want python to write a certain command to the terminal, and then read the output that comes on the terminal. Could you suggest how do I go about doing the last step: namely, writing to the terminal then reading the output? I'm a beginner in python, so please excuse me if this sounds trivial.
Can OpenWhisk trigger Docker actions in my Bluemix registry?
39,074,811
1
1
128
0
python,ibm-cloud,openwhisk
This is not currently possible. OpenWhisk can only create Actions from Docker images stored in the external Docker Hub registry.
0
1
0
0
2016-08-22T08:28:00.000
1
0.197375
false
39,074,638
0
0
0
1
I pushed my Docker image to my Bluemix registry; I ran the container on Bluemix just fine; I have also set up a skeleton OpenWhisk rule which triggers a sample Python action, but I wish to trigger the image in my Bluemix registry as the action. But, as far as I can see from the OpenWhisk documents, it is only possible to trigger Docker actions hosted on Docker Hub (per the wsk sdk install docker skeleton). Can OpenWhisk trigger Docker actions in my Bluemix registry?
python: read beyond end of file
39,086,415
5
0
1,809
0
python
You can't read more bytes than are in the file. "End of file" literally means exactly that.
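A small demonstration of the point (the file name is a placeholder):

with open('some_file.bin', 'rb') as f:
    f.seek(0, 2)        # move to the end of the file
    print(f.read(100))  # b'' - nothing exists past EOF
    f.seek(10 ** 9)     # seeking far past EOF is legal...
    print(f.read())     # ...but still returns b''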
0
1
0
0
2016-08-22T18:29:00.000
2
1.2
true
39,086,368
1
0
0
1
I'm trying to read beyond the EOF in Python, but so far I'm failing (I also tried working with seek to a position and reading a fixed size). I've found a workaround by working with debugfs and subprocess, but it only works on Linux, is quite slow, and does not work on Windows. My question: is it possible to read a file beyond EOF in Python in a way that works on all platforms?
Using requests package to make request
39,086,692
1
0
32
0
python,python-requests
requests is an HTTP request library, while Spark's wordcount example reads lines from a raw TCP socket, so no, requests is not the right package to communicate with your Spark app.
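Assuming the app reads from localhost:9000 the same way it reads from nc, a minimal sketch that replaces nc -lk localhost 9000 with a script feeding the text would be a small socket server, not requests:

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('localhost', 9000))
srv.listen(1)
conn, _ = srv.accept()                 # the Spark app connects here
conn.sendall(b'hello world hello\n')   # each line is treated as input text
conn.close()
srv.close()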
0
1
1
0
2016-08-22T18:32:00.000
1
1.2
true
39,086,420
0
0
0
1
I have an application (a Spark-based service) which, when it starts, works like the following: at localhost:9000, if I do nc -lk localhost 9000 and then start entering text, it takes the text entered in the terminal as input and does a simple wordcount computation on it. How do I use the requests library to programmatically send the text, instead of manually writing it in the terminal? Not sure if my question is making sense.
Import Pydev Project into Eclipse on a new machine
39,088,428
0
0
35
0
eclipse,python-3.x,pydev
I figured out that I was opening Eclipse in the wrong workspace. When I found the correct workspace for that project (by looking for the .metadata file on my C drive) everything was all set (and I didn't have to import the project at all). I was going to delete the question, but figured instead I'd answer in case this helps someone else.
0
1
0
1
2016-08-22T19:09:00.000
1
0
false
39,087,037
1
0
0
1
I need to install an existing pydev project into Eclipse on a new machine. (Actually it is the same machine, but re-imaged.) The new machine has Eclipse Neon. I was using an older version previously. My data has all been copied over. I have the folder where the project lived on my old machine, which includes the .project and .pydevproject files. I used the Import wizard to import it, but I don't see my run configurations, pythonpath, etc. Where might those be stored on my old machine, and can I recover them easily without setting them up again by hand?
How do I add the path to a directory to the environment variables using python
39,096,747
1
0
53
0
python,environment-variables
While using bash, add this to ~/.bashrc: export PYTHONPATH="${PYTHONPATH}:/Home/dev/path". Make sure the directory you point to has an __init__.py file at the topmost level of your directory structure.
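Since the question asks how to do it from Python itself, a hedged sketch that appends that same export line to ~/.bashrc (bash assumed; the path is the example from the answer above):

import os

line = '\nexport PYTHONPATH="${PYTHONPATH}:/Home/dev/path"\n'
with open(os.path.expanduser('~/.bashrc'), 'a') as rc:
    rc.write(line)   # new shells (or 'source ~/.bashrc') will pick it up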
0
1
0
0
2016-08-23T08:43:00.000
1
0.197375
false
39,096,384
1
0
0
1
I want to know if it is possible to add the path to a directory to the environment variables permanently using python. I have seen other questions that relate to mine, but the answers there only add the path temporarily; I want to know if there's a way to add it permanently.
Run pip through jenkins-plugin?
39,174,711
0
0
639
0
python,linux,jenkins,pip
Not a specific plug-in like you might want, but, as was said, you can create a virtual environment in one of a few ways to get the functionality you're after. Docker can handle this: you can create a small script to build a Docker image that has access to pip, and there are Jenkins plug-ins for Docker.
0
1
0
0
2016-08-23T21:29:00.000
2
0
false
39,110,980
0
0
0
1
If pip is not installed on the jenkins linux-box, is there any jenkins-plugin that lets me run pip, without installing it at the os-level?
Virtualenv gives different versions for different os
39,124,070
0
0
89
0
python,django,virtualenv
Thanks to @Oliver's and @Daniel's comments, which led me to the answer for why it did not work. I started the virtual environment on my Debian with python 3. virtualenv made the virtual environment, but it was specifically for Debian. When I used it on the Mac, since it could not run the python executable in the virtual environment (it is only compatible with Debian), it used my Mac's system python, which is Python 2.7.10. In summary, as virtualenv uses the python executable of the system it was created on, the virtual environment will not work when run on another system.
0
1
0
0
2016-08-24T12:43:00.000
2
0
false
39,123,699
0
0
1
1
I am working on a django project on two separate systems, Debian Jessie and Mac El Capitan. The project is hosted on github where both systems will pull from or push to. However, I noticed that on my Debian, when I run python --version, it gives me Python 3.4.2 but on my Mac, it gives me Python 2.7.10 despite being in the same virtual environment. Moreover, when I run django-admin --version on my Debian, it gives me 1.10 while on my Mac, 1.8.3. This happens even when I freshly clone the projects from github and run the commands. Why is it that the virtual environment does not keep the same version of python and django?
Python write to ram file when using command line, ghostscript
39,147,540
1
1
574
0
python,file,cmd,ghostscript,ram
You can't use RAM for the input and output file using the Ghostscript demo code; it doesn't support it. You can pipe input from stdin and output to stdout, but that's it for the standard code. You can use the Ghostscript API to feed data from any source, and you can write your own device (or co-opt the display device) to have the page buffer (which is what the input is rendered to) made available elsewhere, provided you have enough memory to hold the entire page, of course. Doing that will require you to write code to interface with the Ghostscript shared object or DLL. Possibly the Python library does this; I wouldn't know, not being a Python developer. I suspect that the pointer from John Coleman is sufficient for your needs, though.
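As a hedged illustration of the stdout half of that (a single-page PDF is assumed, since multiple JPEG pages would be concatenated into one stream, and '-o -'/'-q' behaviour may vary by Ghostscript build):

import subprocess

cmd = ['gs.exe', '-q', '-sDEVICE=jpeg', '-dTextAlphaBits=4', '-r300',
       '-o', '-', 'a.pdf']                 # '-o -' writes the image to stdout
jpeg_bytes = subprocess.check_output(cmd)  # image data stays in RAM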
0
1
0
0
2016-08-25T11:39:00.000
1
0.197375
false
39,144,281
1
0
0
1
I want to run this command from python: gs.exe -sDEVICE=jpeg -dTextAlphaBits=4 -r300 -o a.jpg a.pdf Using ghostscript, to convert pdf to series of images. How do I use the RAM for the input and output files? Is there something like StringIO that gives you a file path? I noticed there's a python ghostscript library, but it does not seem to give much more over the command line
brew python versus non-brew ipython
39,149,676
0
0
2,385
0
python,macos,ipython,homebrew
To transfer all your packages, you can use pip to freeze all of the packages installed in your ipython environment and then install them all easily from the file you put them in. Freeze: pip freeze > requirements.txt. Then install them from the file: pip install -r requirements.txt. I'm not entirely sure I understood what you're asking, so if this isn't what you want to do, please tell me.
0
1
0
0
2016-08-25T15:44:00.000
3
0
false
39,149,554
1
0
0
2
I installed python via brew, and made it my default python. If I run which python, I obtain /usr/local/bin/python. pip is also installed via brew; which pip returns /usr/local/bin/pip. I do not remember how I installed ipython, but I didn't do it via brew, since when I type which ipython, I obtain /opt/local/bin/ipython. Is it the OS X version of ipython? I installed all libraries on this version of ipython; for example, I have matplotlib on ipython but not on python. I do not want to re-install everything again on the brew python, but rather continue to install libraries on this version of ipython. How can I install new libraries there? For example, the Python Image Library, or libjpeg? If possible, I would like an exhaustive answer so as to understand my problem, and not just a quick-fix tip.
brew python versus non-brew ipython
39,151,146
0
0
2,385
0
python,macos,ipython,homebrew
OK, so I solved it by uninstalling MacPorts (and with it the ipython I was using, which was under /opt/local/bin) and installing ipython via pip. Then I re-installed what I needed (e.g. jupyter) via pip.
0
1
0
0
2016-08-25T15:44:00.000
3
0
false
39,149,554
1
0
0
2
I installed python via brew, and made it my default python. If I run which python, I obtain /usr/local/bin/python. pip is also installed via brew; which pip returns /usr/local/bin/pip. I do not remember how I installed ipython, but I didn't do it via brew, since when I type which ipython, I obtain /opt/local/bin/ipython. Is it the OS X version of ipython? I installed all libraries on this version of ipython; for example, I have matplotlib on ipython but not on python. I do not want to re-install everything again on the brew python, but rather continue to install libraries on this version of ipython. How can I install new libraries there? For example, the Python Image Library, or libjpeg? If possible, I would like an exhaustive answer so as to understand my problem, and not just a quick-fix tip.
How to install and use another python version (python 2.7) on linux when the default python version is python 2.6
39,174,922
0
0
115
0
python,linux,python-2.7
To build on Tryph's answer, you can install the new version into your home directory, then create a symlink to it in a directory that is on your PATH (added in .bash_profile, for example). For instance, if you have a bin folder in your home directory that is on the PATH: ln -s /path/to/new/python2.7 ~/bin/python
0
1
0
0
2016-08-26T11:11:00.000
3
0
false
39,164,943
1
0
0
2
There is a default python version, namely python 2.6, on the GPU server with Linux OS. Now I want to install a new python version on the server from source, namely python 2.7. I should not change the default python version, since I am not the administrator, among other reasons. So what should I do?
How to install and use another python version (python 2.7) on linux when the default python version is python 2.6
39,165,141
0
0
115
0
python,linux,python-2.7
You can install your new version of Python. It should be accessible with the python27 command (which may be a symbolic link). Then you will just have to check that the python symbolic link still points to python26. Doing this, python will keep executing Python 2.6, while python27 will execute Python 2.7.
0
1
0
0
2016-08-26T11:11:00.000
3
0
false
39,164,943
1
0
0
2
There is a default python version, namely python 2.6, on the GPU server with Linux OS. Now I want to install a new python version on the server from source, namely python 2.7. I should not change the default python version, since I am not the administrator, among other reasons. So what should I do?
Switch from linux distro package manager to Anaconda
39,167,113
1
1
280
0
python,anaconda,opensuse
I read the anaconda documentation, and there is no evidence of anaconda packages replacing your openSUSE packages. There isn't a reason for it to do so. If I got it right, then conda is very similar to Ruby's gem and similar tools, which definitely don't replace the installed packages. I think you can feel free to install it next to your current packages. Also, you can specify the Python version and package versions in Anaconda environments, which is another thing it allows you to do, so you can decide what you will use there. Note, I'm not a conda user; this is how I understood the docs. Hope this helps.
0
1
0
0
2016-08-26T12:47:00.000
1
1.2
true
39,166,725
1
0
0
1
I am using openSUSE Leap 42.1 and do some data analysis work in python. Most of the python packages I use are available in the standard openSUSE repositories (e.g. obs://build.opensuse.org/devel:languages:python); however sometimes they aren't, whereas they are available in Anaconda. I would like to replace all of the python packages installed on my computer with those available through Anaconda. Is it possible to just install Anaconda in parallel with the normal openSUSE packages or should I manually delete the packages I've installed? I know python is used heavily throughout the operating system so I probably don't want to deep clean the system of python before going the Anaconda route. Has anyone done this before? I was unable to find any info on this on the Anaconda site, and I'm curious if there is a clean way to do this.
Absolute path in Python requiring an extra leading forward slash?
39,173,233
0
0
107
0
python,python-2.7,file-io,path
The best way to deal with this is to avoid constructing the path yourself altogether. Let os.path.join() do it for you.
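For example, with the path pieces from the question:

import os

path = os.path.join('/Users', 'myname', 'Dev', 'project', 'resource')
print(path)   # /Users/myname/Dev/project/resource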
0
1
0
0
2016-08-26T18:49:00.000
1
0
false
39,172,944
0
0
0
1
I am trying to open a resource via an absolute path on my Macbook with open(file[,mode]). The resource I am trying to access is not in the same folder as the script that is running. If I use something like /Users/myname/Dev/project/resource I get an IOError: No such file or directory. What's confusing me is that if I add an extra forward slash to the beginning so it starts with //Users/... it finds the resource without a problem. What is going on here?
Pexpect: Read from the last send
39,256,145
0
1
716
0
python,python-2.7,pexpect
There are three ways in which this problem can be handled, but none of them flushes the buffer. (1) In pexpect, every send call should be matched with a call to expect; this ensures that the file pointer has moved past the previous send. (2) If there is a series of sends before a single expect, we need a way to move the file pointer to the location of the last send. This can be done with an extra send whose expected output is unique; the uniqueness should be such that none of the sends in the series gives that output. (3) Set logfile_read to a file; all the output will be logged to this file. Before the send for which the expect is used, get the position of the file pointer, and get it again after the send; then search for the expected pattern in the file between the first and second positions. The first method is the ideal way it should be done.
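A hedged sketch of the second method (bash and the command names are placeholders); the sentinel is computed so that the echoed command line itself cannot match it:

import pexpect

child = pexpect.spawn('bash')
child.sendline('command1')               # several sends with no expects...
child.sendline('command2')
child.sendline('echo SYNC_$((41+1))')    # prints SYNC_42; the echoed line
child.expect('SYNC_42')                  # itself does not contain SYNC_42
child.sendline('command_of_interest')
child.expect('pattern_you_care_about')   # matches only fresh output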
0
1
0
0
2016-08-26T18:58:00.000
1
0
false
39,173,069
0
0
0
1
I am trying to read the output of pexpect.send(cmd), but here's the problem I am facing. I am sending many commands in a sequence and I want to read/expect after a certain set of commands. The condition is that only the output of the last command is to be considered, but expect matches from the point it last read. I have tried different methods, such as matching for an EOF before sending the command whose output I need, but EOF means that the child has terminated. I have tried reading till timeout and then sending the command, but the timeout itself causes the child to terminate. I have looked for ways in which I could read from the end or the last line of output. I am considering reading a fixed number of bytes to a file or string and then manipulating the output to get the info I want; here as well, the number of bytes is not fixed. There does not seem to be a reliable way to do this. Could anyone help me sort this out?
Remove third-party installed Python on Mac?
39,173,554
1
0
251
0
python,macos,python-2.7
This doesn't answer the question in the post's title, but leave Python 2 as the default python. If you want to run Python 3, you run python3 or maybe python3.4 or python3.5, depending on your installation. The system and other third-party software depend on python being Python 2. If you change it, you may encounter puzzles down the road. I'm not sure if having a third-party Python 2 is good (OS X ships with Python 2 already), but it should be fine. Edit: Sorry, didn't see there was already an answer. It was posted as I was typing.
0
1
0
0
2016-08-26T19:26:00.000
1
0.197375
false
39,173,459
1
0
0
1
So I installed python 2.7.11 a few months ago; now the class I'm about to take uses 3. So I installed 3 and it works fine. I also uninstalled 2.7.11 by going to Applications and removing it, but when I go to the terminal and type which python, the directory is Library/Frameworks/Python.framework/Versions/2.7/bin/python, which means it's still not removed. What should I do...leave it alone? I only need Python 3, but this is bothering me a bit. Thanks.
How to force application version on AWS Elastic Beanstalk
42,735,371
11
18
10,464
0
python,django,amazon-web-services,amazon-ec2,amazon-elastic-beanstalk
I've realised that the problem was that Elastic Beanstalk, for some reasons, kept the unsuccessfully deployed versions under .elasticbeanstalk. The solution, at least in my case, was to remove those temporal (or whatever you call them) versions of the application.
0
1
0
0
2016-08-27T20:42:00.000
2
1.2
true
39,185,570
0
0
1
1
I'm trying to deploy a new version of my Python/Django application using eb deploy. It unfortunately fails due to an unexpected version of the application. The problem is that somehow eb deploy screwed up the version and I don't know how to override it. The application I upload is working fine; only the version number is not correct, hence Elastic Beanstalk marks it as Degraded. When executing eb deploy, I get this error: "Incorrect application version "app-cca6-160820_155843" (deployment 161). Expected version "app-598b-160820_152351" (deployment 159)." The same appears in the health status in the AWS Console. So, my question is the following: how can I force Elastic Beanstalk to make the uploaded application version the current one, so it doesn't complain?
Celery with Redis vs Redis Alone
39,188,804
4
0
1,102
0
java,python,rabbitmq,celery,messaging
The advantage of using Celery is that we mainly need to write the task processing code, and delivery of tasks to the task processors is taken care of by the Celery framework. Scaling out task processing is also easy: just run more Celery workers with higher concurrency (more processing threads/processes). We don't even need to write code for submitting tasks to queues and consuming tasks from the queues. Also, it has a built-in facility for adding/removing consumers for any of the task queues. The framework supports retrying of tasks, failure handling, result accumulation etc. It has many, many features which help us to concentrate on implementing the task processing logic only. Just for an analogy, implementing a map-reduce program to run on Hadoop is not a very complex task. If the data is small, we can write a simple Python script to implement the map-reduce logic which will outperform a Hadoop map-reduce job processing the same data. But when the data is very large, we have to divide it across machines, run multiple processes across machines and coordinate their executions. The complexity lies in running multiple instances of mapper and then reducer tasks across multiple machines, collecting inputs and distributing them to the mappers, transferring the outputs of mappers to the appropriate reducers, monitoring progress, relaunching failed tasks, detecting job completion etc. But because we have Hadoop, we don't need to care much about the underlying complexity of executing a distributed job. In the same way, Celery helps us to concentrate mainly on task execution logic.
0
1
1
0
2016-08-28T06:41:00.000
1
1.2
true
39,188,662
0
0
0
1
I am having trouble understanding what the advantage of using Celery is. I realize you can use Celery with Redis, RabbitMQ etc, but why wouldn't I just get the client for those message queue services directly rather than sitting Celery in front of it?
How to remove/disable gdb-peda in ubuntu
52,784,903
11
4
7,074
0
python,python-3.x,unix,gdb
Actually, gdb-peda doesn't really install any executable on your computer. All gdb-peda does is modify the config file of gdb. This file is by default located at ~/.gdbinit; use cat ~/.gdbinit to peek at what peda does. Therefore, to go back to vanilla gdb, there are 2 solutions. (1) gdb --nx: this is the better way, since you may need peda again someday. (2) rm -rf ~/.gdbinit: this will remove the config file of gdb, so what peda did will have no effect on your gdb now.
0
1
0
0
2016-08-29T10:44:00.000
2
1
false
39,204,331
1
0
0
2
While learning debugging, I somehow ended up installing gdb and then gdb-peda. But now I would like to uninstall gdb-peda. Can anyone please guide me?
How to remove/disable gdb-peda in ubuntu
42,280,176
2
4
7,074
0
python,python-3.x,unix,gdb
You can remove the peda folder; it should be somewhere in your home directory. After that you should have your old gdb back.
0
1
0
0
2016-08-29T10:44:00.000
2
0.197375
false
39,204,331
1
0
0
2
While learning debugging, I somehow ended up installing gdb and then gdb-peda. But now I would like to uninstall gdb-peda. Can anyone please guide me?
-bash: cd: Resources: No such file or directory
39,286,424
0
0
729
0
python,terminal,directory
Try autocompletion on TAB key press: maybe the names contain some whitespace (less probable). Check the ls -l output: maybe these directories are just broken symbolic links.
0
1
0
0
2016-08-30T01:39:00.000
1
0
false
39,217,582
0
0
0
1
The directory /Library/Frameworks/Python.framework/ contains the following four elements: Headers, Python, Resources, Versions. When I try to cd into Headers, Python or Resources (e.g. cd Resources), I get an error message telling me that the element does not exist (e.g.: "-bash: cd: Resources: No such file or directory"). What's going on here?
Twisted setup in Fedora or CentOS
39,235,093
0
0
450
0
linux,python-3.x,twisted,fedora
Getting the latest version of Twisted requires Python 2.7+, because 2.6 support has finally reached EOL. So if you're running an old Python, I'd suggest you build your own Python 2.7+ and install it with make altinstall. It's very important that you don't override CentOS's default Python, as this could lead to a disastrous situation. Once Python is updated, you can do pip install twisted. Alternatively, you could find a yum repo with updated versions of Python and Twisted.
0
1
0
0
2016-08-30T03:20:00.000
3
0
false
39,218,263
0
0
0
2
Is there a source tarball of Twisted available for download which could be used to build it in Fedora or CentOS? I see the download for Ubuntu/Debian on the site, of course.
Twisted setup in Fedora or CentOS
39,620,252
0
0
450
0
linux,python-3.x,twisted,fedora
You can use pip to install twisted on centos or fedora. Make sure you have python-pip installed, then just do sudo pip install twisted in a terminal.
0
1
0
0
2016-08-30T03:20:00.000
3
0
false
39,218,263
0
0
0
2
Is there a source tarball of Twisted available for download which could be used to build it in Fedora or CentOS? I see the download for Ubuntu/Debian on the site, of course.
Retrieve internal attributes (entryUUID) from openldap server
39,302,136
0
0
2,194
0
python-2.7,ldap,openldap
It's an operational attribute, so you have to request it explicitly, or include "+" in the attributes to be returned. However you should not be using this for your own purposes. It's none of your business. It can change across backup/restore, for example.
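A hedged python-ldap sketch of requesting it explicitly (the server URI, bind DN and base DN are placeholders):

import ldap

conn = ldap.initialize('ldap://localhost')
conn.simple_bind_s('cn=admin,dc=example,dc=com', 'secret')
# name the operational attribute explicitly; ['*', '+'] would return them all
results = conn.search_s('dc=example,dc=com', ldap.SCOPE_SUBTREE,
                        '(objectClass=*)', ['entryUUID'])
for dn, attrs in results:
    print('%s %s' % (dn, attrs.get('entryUUID')))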
0
1
0
1
2016-08-30T07:47:00.000
2
0
false
39,221,697
0
0
0
1
I am trying to retrieve internal attributes from an OpenLDAP server; more specifically, I need to retrieve the entryUUID attribute of an object. objectGUID is fetched from the server fine elsewhere, but I couldn't retrieve the similar field from OpenLDAP. SCOPE_SUBTREE is being used to retrieve attributes. Does anyone know a way out? Thanks in advance.
how to install libhdf5-dev? (without yum, rpm nor apt-get)
67,224,754
0
6
20,706
0
python,linux,installation,hdf5
For CentOS 8, I got the warning message below: Warning: Couldn't find any HDF5 C++ libraries. Disabling HDF5 support. I solved it using the command: sudo yum -y install hdf5-devel
0
1
0
1
2016-08-30T19:53:00.000
4
0
false
39,236,025
0
0
1
1
I want to use h5py, which needs libhdf5-dev to be installed. I installed hdf5 from source, and thought that one of the compile options would give me the developer headers, but it doesn't look like it. Does anyone know how I can do this? Is there some other source I need to download? (I can't find any, though.) I am on Amazon Linux; yum search libhdf5-dev doesn't give me any result, and I can't use rpm or apt-get there, hence I wanted to compile it myself.
Why is there a delay between writing to and reading from Kafka queue?
39,254,261
2
0
276
0
apache-kafka,message-queue,messaging,kafka-python
A consumer group takes some time to contact the group coordinator and get partitions assigned automatically; that is where the delay comes from. If you use manual assignment, you will see less delay.
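A hedged kafka-python sketch of manual assignment (the topic name and broker address are hypothetical):

from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers='localhost:9092')
consumer.assign([TopicPartition('my-topic', 0)])   # no group coordination step
for msg in consumer:
    print(msg.value)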
0
1
0
0
2016-08-31T15:15:00.000
1
1.2
true
39,253,346
0
0
0
1
I have written a worker service to consume messages from a Kafka queue, and I have also written a test script to add messages to the queue every few seconds. What I have noticed is that often the consumer will sit idle for minutes at a time, while messages are being added to the queue. Then suddenly the consumer will pick up the first message, process it, then rapidly move on to the rest. So it eventually catches up, but I'm wondering why there is such a delay in the first place?
idle-python for RHEL 7
39,261,467
0
0
68
0
mariadb,python-idle
IDLE is in the python-tools package.
0
1
0
0
2016-09-01T01:28:00.000
1
0
false
39,261,395
0
0
0
1
Python community, I am looking for a Red Hat Enterprise Linux 7 version of IDLE - Python GUI. The only versions I have found are for Windows and Mac. I will be using it to test and build an API to tie in with HTTP.
Python and Appium
41,982,234
1
0
571
0
ubuntu,python-appium
Try using nosetests. Install: pip install nose. Run: nosetests (name of the file containing the test).
0
1
0
1
2016-09-03T05:48:00.000
1
0.197375
false
39,303,681
1
0
0
1
I got the following error while executing a python script with Appium: ImportError: No module named appium. I am running Appium in one terminal and tried executing the test in another terminal. Does anyone know the reason for this error and how to resolve it?
Problems installing python module pybfd
39,318,468
1
1
259
0
python,pip,easy-install
After some trial and error I discovered that binutils-dev and python-dev packages were missing and causing the header path errors. After installing those the setup script worked.
0
1
0
1
2016-09-04T14:36:00.000
1
1.2
true
39,318,053
1
0
0
1
I've been trying to install the pybfd module but nothing works so far. I tried the following: pip install pybfd returns the error: option --single-version-externally-managed not recognized. After a quick search I found the --egg option for pip, which seems to work and says successfully installed, but when I try to run my code: ImportError: No module named pybfd.bfd. easy_install pybfd returns an error as well: Writing /tmp/easy_install-oZUgBf/pybfd-0.1.1/setup.cfg Running pybfd-0.1.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-oZUgBf/pybfd-0.1.1/egg-dist-tmp-gWwhoT [-] Error : unable to determine correct include path for bfd.h / dis-asm.h No eggs found in /tmp/easy_install-oZUgBf/pybfd-0.1.1/egg-dist-tmp-gWwhoT (setup script problem?) For the last attempt, I downloaded the pybfd repo from GitHub and ran the setup script: [-] Error : unable to determine correct include path for bfd.h / dis-asm.h Does anyone have any idea what could be causing all this and how to actually install the module?
How do I build a cx_oracle app using pyinstaller to use multiple Oracle client versions?
39,349,805
1
0
738
0
python,oracle,pyinstaller,cx-oracle
The error "Unable to acquire Oracle environment handle" means there is something wrong with your Oracle configuration. Check to see what libclntsh.so file you are using. The simplest way to do that is by using the ldd command on the cx_Oracle module that PyInstaller has bundled with the executable. Then check to see if there is a conflict due to setting the environment variable ORACLE_HOME to a different client! If PyInstaller picked up the libclntsh.so file during its packaging you will need to tell it to stop doing that. There must be an Oracle client (either full client or the much simpler instant client) on the target machine, not just the one file (libclntsh.so). You can also verify that your configuration is ok by using the cx_Oracle.so module on the target machine to establish a connection -- independently of your application. If that doesn't work or you don't have a Python installation there for some reason, you can also use SQL*Plus to verify that your configuration is ok as well.
0
1
0
0
2016-09-05T05:04:00.000
1
1.2
true
39,324,217
1
0
0
1
I am building an application in Python using cx_Oracle (v5) and Pyinstaller to package up and distribute the application. When I built and packaged the application, I had the Oracle 12c client installed. However, when I deployed it to a machine with the 11g client installed, it seems not to work. I get the message "Unable to acquire Oracle environment handle". I assume this is as the result of the application being packaged with Pyinstaller while my ORACLE_HOME was pointed to a 12c client. I know that the cx_Oracle I have was built against both 11g and 12 libraries. So, I'm wondering how I deploy an application using Pyinstaller so it can run with either 11 or 12c client libraries installed? By the way, I am building this on Linux (debian/Mint 17.2), and deploying to Linux (CentOS 7).
Rabbitmq one queue multiple consumers
39,340,382
2
1
1,439
0
python,python-2.7,rabbitmq,pika
1. Can at least two consumers get the same message at the same time? No: a single message will only be delivered to a single consumer. Because of that, your scenario #2 doesn't come into play at all. You'll never have 2 consumers working on the same message, unless you nack the message back to the queue but continue processing it anyway.
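For illustration, a hedged pika sketch of that get/ack cycle (the queue name is taken from the question):

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = conn.channel()
method, header, body = channel.basic_get('test_queue')
if method is not None:
    # process body here; no other consumer can receive this message now
    channel.basic_ack(method.delivery_tag)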
0
1
0
1
2016-09-05T21:20:00.000
1
1.2
true
39,337,821
0
0
0
1
I have multiple consumers polling on the same queue, checking the queue every X seconds; basically, after X seconds it could be that at least two consumers launch basic.get at the very same time. My questions are: 1. Can at least two consumers get the same message at the same time? 2. From what I understood, only basic_ack will delete a message from the queue, so suppose we have the following scenario: Consumer1 takes a msg with basic.get and, before it reaches the basic_ack line, Consumer2 also gets this message (basic.get); now Consumer1 reaches its basic_ack, and only then does Consumer2 reach its own basic_ack. What will happen when Consumer2 reaches its basic_ack? Will the message be processed by Consumer2 as well, because the actions are not atomic? My consumer code logic using python pika is as follows: while True: m_frame = None; while m_frame is None: self.connection.sleep(10); m_frame, h_frame, body = self.channel.basic_get('test_queue'); self.channel.basic_ack(m_frame.delivery_tag) [Doing some long logic - a couple of minutes] Please note that I don't use basic.consume, so I don't know if round-robin fetching is included for such usage.
Celery Worker - Consume from Queue matching a regex
39,340,088
-2
5
409
0
python,celery,django-celery
Something along these lines would work: (\b(dev\.)(\w+)). Then refer to the (\w+) group for the stuff after "dev.". You'll need to set it up to capture repeated instances if you want to get multiple.
0
1
0
0
2016-09-06T02:32:00.000
1
-0.379949
false
39,339,804
0
0
1
1
Background: a Celery worker can be started against a set of queues using the -Q flag, e.g. -Q dev.Q1,dev.Q2,dev.Q3. So far I have seen examples where all the queue names are explicitly listed as comma-separated values. That is troublesome if I have a very long list. Question: is there a way I can specify queue names as a regex so the Celery worker will start consuming from all queues satisfying that regex? E.g. -Q dev.* should consume from all queues starting with dev, i.e. dev.Q1, dev.Q2, dev.Q3. But what I have seen is that it creates a queue named dev..* instead. Also, how can I tune the regex so that it doesn't pick up ERROR queues, e.g. dev.Q1.ERROR, dev.Q2.ERROR?
How to add a custom CA Root certificate to the CA Store used by pip in Windows?
55,395,471
-3
128
213,196
0
python,windows,ssl,pip
Open Anaconda Navigator and go to File > Preferences. There, under Enable SSL verification, either Disable (not recommended), or Enable and indicate the SSL certificate path (optional). To update a package to a specific version: select Install on the top right, select the package, click the tick, choose Mark for update or Mark for specific version installation, and click Apply.
0
1
0
0
2016-09-06T19:24:00.000
7
-0.085505
false
39,356,413
0
0
0
2
I just installed Python3 from python.org and am having trouble installing packages with pip. By design, there is a man-in-the-middle packet inspection appliance on the network here that inspects all packets (ssl included) by resigning all ssl connections with its own certificate. Part of the GPO pushes the custom root certificate into the Windows Keystore. When using Java, if I need to access any external https sites, I need to manually update the cacerts in the JVM to trust the Self-Signed CA certificate. How do I accomplish that for python? Right now, when I try to install packages using pip, understandably, I get wonderful [SSL: CERTIFICATE_VERIFY_FAILED] errors. I realize I can ignore them using the --trusted-host parameter, but I don't want to do that for every package I'm trying to install. Is there a way to update the CA Certificate store that python uses?
How to add a custom CA Root certificate to the CA Store used by pip in Windows?
39,358,282
50
128
213,196
0
python,windows,ssl,pip
Run: python -c "import ssl; print(ssl.get_default_verify_paths())" to check the current paths which are used to verify the certificate. Add your company's root certificate to one of those. The path openssl_capath_env points to the environment variable: SSL_CERT_DIR. If SSL_CERT_DIR doesn't exist, you will need to create it and point it to a valid folder within your filesystem. You can then add your certificate to this folder to use it.
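A hedged sketch of checking and extending the lookup path from Python (the folder is a placeholder, and note that OpenSSL expects hash-named certificate files in a CA directory):

import os
import ssl

print(ssl.get_default_verify_paths())      # shows openssl_capath_env etc.
os.environ['SSL_CERT_DIR'] = r'C:\certs'   # folder holding your root CA
# contexts created from now on (e.g. ssl.create_default_context())
# will also search C:\certs for trusted certificates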
0
1
0
0
2016-09-06T19:24:00.000
7
1
false
39,356,413
0
0
0
2
I just installed Python3 from python.org and am having trouble installing packages with pip. By design, there is a man-in-the-middle packet inspection appliance on the network here that inspects all packets (ssl included) by resigning all ssl connections with its own certificate. Part of the GPO pushes the custom root certificate into the Windows Keystore. When using Java, if I need to access any external https sites, I need to manually update the cacerts in the JVM to trust the Self-Signed CA certificate. How do I accomplish that for python? Right now, when I try to install packages using pip, understandably, I get wonderful [SSL: CERTIFICATE_VERIFY_FAILED] errors. I realize I can ignore them using the --trusted-host parameter, but I don't want to do that for every package I'm trying to install. Is there a way to update the CA Certificate store that python uses?
Changing Jupyter Notebook start location [Win 7 Enterprise]
39,370,822
0
0
433
0
ipython,anaconda,jupyter,jupyter-notebook
Found the solution - go to your Anaconda install directory (for me this was C:\Anaconda3) and open the file cwp.py in a text editor. Change the line os.chdir(documents_folder) to os.chdir("C:\\my\\path\\here").
0
1
0
0
2016-09-07T11:58:00.000
1
0
false
39,369,335
1
0
0
1
I am trying to change the default Jupyter Notebook start directory on my Windows 7 Enterprise machine. Other answers have suggested changing the "Start In" field found through Right-click>Properties>Shortcut on the Jupyter program in my Start menu, however this doesn't have any effect. When I change this field to my desired directory and try running the program it still opens in the default directory, when I recheck the "Start In" field it is the same as whatever I had changed it to so it looks like it isn't being changed back by Windows, rather it's being disregarded entirely. For reference the default directory is at P:\ which is not a local directory and is hosted on my company servers, and I am trying to change the Jupyter startup directory to C:. I'm sure the path is correct - I've tried a few different ones and they are working with autocomplete. I should mention this is a locked down corporate machine and I have to run Jupyter as administrator or else it exits immediately. I do have elevated rights and have checked the user permissions on Jupyter. This is using the Jupyter that comes as default with the current Python 3.5 distribution of Anaconda - I have also tried reinstalling the whole Anaconda package and I'm currently working with a fresh default install. I am wondering if there is perhaps a way through changing the startup script that is run when you execute the program?
Comparing the contents of very large files efficiently
39,395,013
1
0
1,140
0
python,performance,file,io
If you can find a way to take advantage of hash tables your task will change from O(N^2) to O(N). The implementation will depend on exactly how large your files are and whether or not you have duplicate job IDs in file 2. I'll assume you don't have any duplicates. If you can fit file 2 in memory, just load the thing into pandas with job as the index. If you can't fit file 2 in memory, you can at least build a dictionary of {Job #: row # in file 2}. Either way, finding a match should be substantially faster.
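A hedged sketch of that dictionary approach, using the layouts from the question (header rows Job,Time and Job,Start,End; file names are placeholders):

jobs = {}
with open('file2.csv') as f2:
    next(f2)                                # skip the header row
    for line in f2:
        job, rest = line.rstrip('\n').split(',', 1)
        jobs[job] = rest                    # {job id: 'start,end'}

with open('file1.csv') as f1:
    next(f1)
    for line in f1:
        job, time = line.rstrip('\n').split(',', 1)
        if job in jobs:                     # O(1) lookup, no rescanning
            print(job, time, jobs[job])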
0
1
0
1
2016-09-08T15:02:00.000
5
0.039979
false
39,394,328
0
0
0
2
I need to compare two files of differing formats quickly and I'm not sure how to do it. I would very much appreciate it if someone could point me in the right direction. I am working on CentOS 6 and I am most comfortable with Python (both Python 2 and Python 3 are available). The problem: I am looking to compare the contents of two large files (quickly). The files, unfortunately, differ in content; I will need to modify the contents of one before I can compare them. They are not particularly well-organized, so I can't move linearly down each and compare line-by-line. Here's an example of the files: File 1 File 2 Job,Time Job,Start,End 0123,3-00:00:00 0123,2016-01-01T00:00:00,2016-01-04T00:00:00 1111,05:30:00 1111,2016-01-01T00:00:00,2016-01-01T05:30:00 0000,00:00:05 9090.abc,2016-01-01T12:00:00,2016-01-01T22:00:00 9090.abc,10:00:00 0000,2015-06-01T00:00:00,2015-06-01T00:00:05 ... ... I would like to compare the contents of lines with the same "Job" field, like so: Job File 1 Content File 2 Content 0123 3-00:00:00 2016-01-01T00:00:00,2016-01-04T00:00:00 1111 05:30:00 2016-01-01T00:00:00,2016-01-01T05:30:00 0000 00:00:05 2015-06-01T00:00:00,2015-06-01T00:00:05 9090.abc 10:00:00 2016-01-01T12:00:00,2016-01-01T22:00:00 ... ... ... I will be performing calculations on the File 1 Content and File 2 Content and comparing the two (for each line). What is the most efficient way of doing this (matching lines)? The system currently in place loops through one file in its entirety for each line in the other (until a match is found). This process may take hours to complete, and the files are always growing. I am looking to make the process of comparing them as efficient as possible, but even marginal improvements in performance can have a drastic effect. I appreciate any and all help. Thank you!
Comparing the contents of very large files efficiently
39,396,201
1
0
1,140
0
python,performance,file,io
I was trying to develop something where you'd split one of the files into smaller files (say 100,000 records each) and keep a pickled dictionary for each subfile that maps every Job_id to its line number. In a sense, an index for each subfile: you could use a hash lookup on each one to determine whether you wanted to read its contents. However, you say that the files grow continually and each Job_id is unique. So, I would bite the bullet and run your current analysis once. Keep a line counter that records how many lines you analysed in each file and write it to a file somewhere. Then in future, you can use linecache to know what line to start at for your next analysis of both file1 and file2; all previous lines have already been processed, so there's absolutely no point in scanning the whole content of the file again - just start where you ended in the previous analysis. If you run the analysis at sufficiently frequent intervals, who cares if it's O(n^2), since you're processing, say, 10 records at a time and appending them to your combined database. In other words, the first analysis takes a long time, but each subsequent analysis gets quicker, and eventually n should converge on 1, at which point it becomes irrelevant.
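A rough sketch of the resume idea, assuming the line count is persisted in a small bookkeeping file between runs (the helper and file names are made up):

```python
import linecache

def new_lines(path, offset_path):
    """Yield only the lines appended since the last analysis run."""
    try:
        with open(offset_path) as f:
            done = int(f.read())
    except (IOError, ValueError):
        done = 0                      # first run: start from the top
    linecache.checkcache(path)        # the file keeps growing, so refresh
    lineno = done
    while True:
        line = linecache.getline(path, lineno + 1)  # linecache is 1-indexed
        if not line:                  # '' means we ran past the last line
            break
        lineno += 1
        yield line
    with open(offset_path, 'w') as f:
        f.write(str(lineno))          # remember where this run stopped
```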
0
1
0
1
2016-09-08T15:02:00.000
5
0.039979
false
39,394,328
0
0
0
2
I need to compare two files of differing formats quickly and I'm not sure how to do it. I would very much appreciate it if someone could point me in the right direction. I am working on CentOS 6 and I am most comfortable with Python (both Python 2 and Python 3 are available). The problem: I am looking to compare the contents of two large files (quickly). The files, unfortunately, differ in content; I will need to modify the contents of one before I can compare them. They are not particularly well-organized, so I can't move linearly down each and compare line-by-line. Here's an example of the files: File 1 File 2 Job,Time Job,Start,End 0123,3-00:00:00 0123,2016-01-01T00:00:00,2016-01-04T00:00:00 1111,05:30:00 1111,2016-01-01T00:00:00,2016-01-01T05:30:00 0000,00:00:05 9090.abc,2016-01-01T12:00:00,2016-01-01T22:00:00 9090.abc,10:00:00 0000,2015-06-01T00:00:00,2015-06-01T00:00:05 ... ... I would like to compare the contents of lines with the same "Job" field, like so: Job File 1 Content File 2 Content 0123 3-00:00:00 2016-01-01T00:00:00,2016-01-04T00:00:00 1111 05:30:00 2016-01-01T00:00:00,2016-01-01T05:30:00 0000 00:00:05 2015-06-01T00:00:00,2015-06-01T00:00:05 9090.abc 10:00:00 2016-01-01T12:00:00,2016-01-01T22:00:00 ... ... ... I will be performing calculations on the File 1 Content and File 2 Content and comparing the two (for each line). What is the most efficient way of doing this (matching lines)? The system currently in place loops through one file in its entirety for each line in the other (until a match is found). This process may take hours to complete, and the files are always growing. I am looking to make the process of comparing them as efficient as possible, but even marginal improvements in performance can have a drastic effect. I appreciate any and all help. Thank you!
Install pybrain in ubuntu 16.04 "ImportError: No module named pybrain.structure"
39,431,699
0
1
253
0
python,pybrain
The problem was resolved by replacing pybrain.pybrain with pybrain.
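Applied to the import from the question, that means:

```python
# Import from the installed package directly, not pybrain.pybrain:
from pybrain.structure import FeedForwardNetwork
```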
0
1
0
0
2016-09-11T00:05:00.000
1
0
false
39,431,684
1
0
0
1
I'm getting this error: ImportError: No module named pybrain.structure when executing: from pybrain.pybrain.structure import FeedForwardNetwork from the pybrain tutorial. I installed pybrain by running: sudo python setup.py install
what's the difference between google.appengine.ext.ndb and gcloud.datastore?
39,453,571
2
14
1,307
0
google-app-engine,google-cloud-datastore,app-engine-ndb,google-app-engine-python
The reason for the two implementations is that originally the Datastore (called App Engine Datastore) was only available from inside App Engine (through a private RPC API). In Python, the only way to access this API was through an ORM-like library (NDB). As you can see from the import, it is part of the App Engine API. Google has since made the Datastore available outside of App Engine through a RESTful API called the Cloud Datastore API. The gcloud library is a client library that provides access to various REST APIs from Google Cloud, including the Cloud Datastore API.
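A rough sketch of the same write through each library (the kind and property names are invented, and the two halves run in different environments):

```python
# Inside App Engine's standard environment, via the ORM-like NDB library:
from google.appengine.ext import ndb

class Greeting(ndb.Model):
    message = ndb.StringProperty()

Greeting(message='hello').put()      # goes through the private RPC API

# From anywhere with network access, via the Cloud Datastore REST API:
from gcloud import datastore

client = datastore.Client()
entity = datastore.Entity(key=client.key('Greeting'))
entity['message'] = 'hello'
client.put(entity)
```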
0
1
0
0
2016-09-11T23:53:00.000
2
0.197375
false
39,441,764
0
0
1
1
ndb: (from google.appengine.ext import ndb) datastore: (from gcloud import datastore) What's the difference? I've seen both of them used, and hints they both save data to google datastore. Why are there two different implementations?
Is it possible to catch data streams other than stdin, stdout and stderr in a Popen call?
39,449,018
2
0
71
0
python,stream,subprocess,stdout,stderr
There is no other console output than stdout and stderr (assuming that samtools does not write to the terminal directly via a tty device). So, if the output is not captured with the subprocesses stdout, it must have been written to stderr, which can be captured as well using Popen() with stderr=subprocess.PIPE and inspecting the stderr attribute of the resulting process object.
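A minimal sketch (the samtools arguments are illustrative):

```python
import subprocess

# Capture both streams; samtools' console messages end up in `err`.
proc = subprocess.Popen(
    ['samtools', 'view', '-b', 'input.sam'],  # illustrative command line
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = proc.communicate()  # out: the piped data, err: the console chatter
print('console messages were:', err.decode())
```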
0
1
0
0
2016-09-12T11:03:00.000
1
1.2
true
39,448,884
0
0
0
1
I am working on incorporating a program (samtools) into a pipeline. FYI, samtools is a program used to manipulate DNA sequence alignments that are in the SAM format. It takes input and generates an output file via stdin and stdout, so it is quite easily controlled via Python's subprocess.Popen(). When it runs, it also outputs short messages to the console - not using stdout, obviously - and I wonder if it would be possible to catch these as well - potentially by getting an OS-generated handler list? I guess my question in general is whether it is possible to catch a program's console output if it is not coming from stdout? Thank you.
Celery worker not reconnecting on network change/IP Change
56,372,222
0
3
948
0
python,python-2.7,rabbitmq,celery,celery-task
The issue stemmed from my not understanding the nature of the AMQP protocol or RabbitMQ. When a celery worker starts, it opens a channel to RabbitMQ. Upon any network change this channel tries to reconnect, but the port/socket previously opened for the channel is registered under the client's old public IP address. As such, negotiations between the celery worker (client) and RabbitMQ (server) cannot resume because the client's address has changed; hence a new channel needs to be established whenever the client's public IP address changes. The answer by @qreOct above exists either because I was unable to express the question properly or because of a difference in our perceptions. Still, thanks a lot for taking the time!
0
1
0
0
2016-09-13T05:38:00.000
2
1.2
true
39,462,847
0
0
0
1
I deployed celery for some tasks that need to be performed at my workplace. These tasks are huge, and I bought a few high-spec machines for performing them. Before I detail my issue, let me briefly describe what I've deployed: a RabbitMQ broker on a remote server; a producer that pushes tasks from another remote server; workers on 3 machines deployed at my workplace. Now, when I started, the whole process was as smooth as in my tests and everything processed just great! The problem: unfortunately, I forgot to consult my network guy about a fixed IP address, and given our location, we do not have a fixed IP address from our ISP. So upon a network disconnect my celery workers freeze and do nothing - even once the network is back up, because the IP address has changed, the connection to the broker is not being recreated and the worker is not retrying the connection. I have tried configuration like BROKER_CONNECTION_MAX_RETRIES = 0 and BROKER_HEARTBEAT = 10, but I had no option left except to post here and look for experts on this matter! PS: I cannot manually restart the workers with kill -9 every time the network changes the IP address.
pelican make serve error with broken pipe?
61,891,341
0
0
151
0
python,python-2.7,ubuntu,makefile,server
I can report that I encountered the same problem with a python3 / pip3 installation (which is recommended now). The problem was apparently with the permissions on python. I simply had to run pelican --listen with superuser rights to make the local server work. Also, be careful to reinstall with sudo any packages you might have installed without superuser rights, in order to have a fully working installation under sudo.
0
1
0
0
2016-09-13T05:48:00.000
2
0
false
39,462,958
0
0
1
2
I was trying to make a blog with pelican, and at the make serve step I got the errors below. From searching online it looks like a web issue (I'm not familiar with these at all) and I didn't see a clear solution. Could anyone shed some light on this? I was running on Ubuntu with Python 2.7. Thanks! Python info: Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Error info: 127.0.0.1 - - [13/Sep/2016 13:23:35] "GET / HTTP/1.1" 200 - WARNING:root:Unable to find / file. WARNING:root:Unable to find /.html file. 127.0.0.1 - - [13/Sep/2016 13:24:31] "GET / HTTP/1.1" 200 - ---------------------------------------- Exception happened during processing of request from ('127.0.0.1', 51036) Traceback (most recent call last): File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock self.process_request(request, client_address) File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request self.finish_request(request, client_address) File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/lib/python2.7/SocketServer.py", line 651, in init self.finish() File "/usr/lib/python2.7/SocketServer.py", line 710, in finish self.wfile.close() File "/usr/lib/python2.7/socket.py", line 279, in close self.flush() File "/usr/lib/python2.7/socket.py", line 303, in flush self._sock.sendall(view[write_offset:write_offset+buffer_size]) error: [Errno 32] Broken pipe
pelican make serve error with broken pipe?
39,462,999
0
0
151
0
python,python-2.7,ubuntu,makefile,server
Well, I installed pip on Ubuntu and then it all worked. Not sure if it is a version thing.
0
1
0
0
2016-09-13T05:48:00.000
2
0
false
39,462,958
0
0
1
2
I was trying to make a blog with pelican, and at the make serve step I got the errors below. From searching online it looks like a web issue (I'm not familiar with these at all) and I didn't see a clear solution. Could anyone shed some light on this? I was running on Ubuntu with Python 2.7. Thanks! Python info: Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Error info: 127.0.0.1 - - [13/Sep/2016 13:23:35] "GET / HTTP/1.1" 200 - WARNING:root:Unable to find / file. WARNING:root:Unable to find /.html file. 127.0.0.1 - - [13/Sep/2016 13:24:31] "GET / HTTP/1.1" 200 - ---------------------------------------- Exception happened during processing of request from ('127.0.0.1', 51036) Traceback (most recent call last): File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock self.process_request(request, client_address) File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request self.finish_request(request, client_address) File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request self.RequestHandlerClass(request, client_address, self) File "/usr/lib/python2.7/SocketServer.py", line 651, in init self.finish() File "/usr/lib/python2.7/SocketServer.py", line 710, in finish self.wfile.close() File "/usr/lib/python2.7/socket.py", line 279, in close self.flush() File "/usr/lib/python2.7/socket.py", line 303, in flush self._sock.sendall(view[write_offset:write_offset+buffer_size]) error: [Errno 32] Broken pipe