| qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string) |
|---|---|---|---|---|---|
6,966,205
|
I want to convert some base64 encoded png images to jpg using python. I know how to decode from base64 back to raw:
```
import base64
pngraw = base64.decodestring(png_b64text)
```
but how can I convert this now to jpg? Just writing pngraw to a file obviously only gives me a png file. **I know I can use PIL, but HOW exactly would I do it?** Thanks!
|
2011/08/06
|
[
"https://Stackoverflow.com/questions/6966205",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/561766/"
] |
You can use [PIL](http://www.pythonware.com/products/pil/):
```
data = b'''iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAAAAXNSR0IArs4c6QAAAIBJRE
FUOMvN08ENgCAMheG/TGniEo7iEiZuqTeiUkoLHORK++Ul8ODPZ92XS2ZiADITmwI+sWHwi
w2BGtYN1jCAZF1GMYDkGfJix3ZK8g57sJywteTFClBbjmAq+ESiGIBEX9nCqgl7sfyxIykt
7NUUD9rCiupZqAdTu6yhXgzgBtNFSXQ1+FPTAAAAAElFTkSuQmCC'''
import base64
from PIL import Image
from io import BytesIO
im = Image.open(BytesIO(base64.b64decode(data)))
im.save('accept.jpg', 'JPEG')
```
In very old Python versions (2.5 and older), replace `b'''` with `'''` and `from io import BytesIO` with `from StringIO import StringIO`.
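One caveat worth adding (a Pillow behaviour, not part of the original answer): newer Pillow versions refuse to save an image with an alpha channel as JPEG, so if your PNG is RGBA you may need to convert it first. Continuing the snippet above:
```
im = Image.open(BytesIO(base64.b64decode(data)))
if im.mode in ('RGBA', 'P'):  # JPEG cannot store an alpha channel
    im = im.convert('RGB')
im.save('accept.jpg', 'JPEG')
```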
|
Right from the PIL tutorial:
>
> To save a file, use the save method of the Image class. When saving files, the name becomes important. Unless you specify the format, the library uses the filename extension to discover which file storage format to use.
Convert files to JPEG
---------------------
```
import os, sys
import Image

for infile in sys.argv[1:]:
    f, e = os.path.splitext(infile)
    outfile = f + ".jpg"
    if infile != outfile:
        try:
            Image.open(infile).save(outfile)
        except IOError:
            print "cannot convert", infile
```
So all you have to do is set the file extension to `.jpeg` or `.jpg` and it will convert the image automatically.
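For reference, on Python 3 with modern Pillow the same script might look like this (a sketch; the only substantive changes are the import and the print function):
```
import os, sys
from PIL import Image  # modern Pillow import, replacing "import Image"

for infile in sys.argv[1:]:
    f, e = os.path.splitext(infile)
    outfile = f + ".jpg"
    if infile != outfile:
        try:
            Image.open(infile).save(outfile)
        except IOError:
            print("cannot convert", infile)
```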
|
48,504,746
|
I am trying to add a **GUI** input box and I found out that the way you do that is by using a module called `tkinter`. While I was trying to install it on my arch linux machine through the `ActivePython` package I got the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/tkinter/__init__.py", line 36, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
ImportError: libtk8.6.so: cannot open shared object file: No such file or directory
shell returned 1\
```
|
2018/01/29
|
[
"https://Stackoverflow.com/questions/48504746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9219560/"
] |
All you need to do is install the Tk package. Universal precompiled packages such as ActivePython will not work; at least, they didn't work for me. I don't know whether this problem occurs on other OSes, but on Linux the solution is to install Tk from the terminal.
On Arch, Tk is available in the official repository. You don't need the AUR for this; just type in the terminal:
```
sudo pacman -S tk
```
On Debian and Debian-based distros, the package is available in the standard repositories as well; just type in the terminal:
```
sudo apt-get install tk
```
The same approach, installing Tk through the system package manager, applies to all distros.
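Whichever distro you are on, you can then verify the installation by opening Python's built-in Tk test window:
```
python3 -m tkinter
```
If a small demo window appears, Tk is configured correctly.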
|
I'm on Manjaro, using GNOME 3 on Wayland. After installing `tk` I got an error about Xorg. After some searching, I found I needed to install `python-pygubu`, per [Visual editor for creating GUI in Python 3 tkinter](https://bbs.archlinux.org/viewtopic.php?id=221077).
Then came another error, [Gtk-WARNING \*\*: Unable to locate theme engine in module\_path: "murrine"](https://ubuntuforums.org/showthread.php?t=2061142). The solution, found at that link, was to install `gtk-engine-murrine`.
|
48,504,746
|
I am trying to add a **GUI** input box and I found out that the way you do that is by using a module called `tkinter`. While I was trying to install it on my arch linux machine through the `ActivePython` package I got the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/tkinter/__init__.py", line 36, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
ImportError: libtk8.6.so: cannot open shared object file: No such file or directory
shell returned 1\
```
|
2018/01/29
|
[
"https://Stackoverflow.com/questions/48504746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9219560/"
] |
All you need to do is install the Tk package. Universal precompiled packages such as ActivePython will not work; at least, they didn't work for me. I don't know whether this problem occurs on other OSes, but on Linux the solution is to install Tk from the terminal.
On Arch, Tk is available in the official repository. You don't need the AUR for this; just type in the terminal:
```
sudo pacman -S tk
```
On Debian and Debian-based distros, the package is available in the standard repositories as well; just type in the terminal:
```
sudo apt-get install tk
```
The same approach, installing Tk through the system package manager, applies to all distros.
|
Install Tk through the command line:
```
sudo pacman -S tk        # Arch
sudo apt-get install tk  # Debian/Ubuntu
```
depending on your OS. It will then work.
```
import tkinter
```
or
```
import turtle  # (turtle uses tkinter as a dependency)
```
reproduces the error before Tk is installed.
Doing a pip install does not remove the error either, so you must install Tk through your package manager as mentioned above.
|
48,504,746
|
I am trying to add a **GUI** input box and I found out that the way you do that is by using a module called `tkinter`. While I was trying to install it on my arch linux machine through the `ActivePython` package I got the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/tkinter/__init__.py", line 36, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
ImportError: libtk8.6.so: cannot open shared object file: No such file or directory
shell returned 1\
```
|
2018/01/29
|
[
"https://Stackoverflow.com/questions/48504746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9219560/"
] |
All you need to do is install the Tk package. Universal precompiled packages such as ActivePython will not work; at least, they didn't work for me. I don't know whether this problem occurs on other OSes, but on Linux the solution is to install Tk from the terminal.
On Arch, Tk is available in the official repository. You don't need the AUR for this; just type in the terminal:
```
sudo pacman -S tk
```
On Debian and Debian-based distros, the package is available in the standard repositories as well; just type in the terminal:
```
sudo apt-get install tk
```
The same approach, installing Tk through the system package manager, applies to all distros.
|
If you happen to be working with [Alpine](https://alpinelinux.org/) then the below command will install `tk`.
```
apk add tk
```
|
48,504,746
|
I am trying to add a **GUI** input box and I found out that the way you do that is by using a module called `tkinter`. While I was trying to install it on my arch linux machine through the `ActivePython` package I got the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/tkinter/__init__.py", line 36, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
ImportError: libtk8.6.so: cannot open shared object file: No such file or directory
shell returned 1\
```
|
2018/01/29
|
[
"https://Stackoverflow.com/questions/48504746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9219560/"
] |
Install Tk through the command line:
```
sudo pacman -S tk        # Arch
sudo apt-get install tk  # Debian/Ubuntu
```
depending on your OS. It will then work.
```
import tkinter
```
or
```
import turtle  # (turtle uses tkinter as a dependency)
```
reproduces the error before Tk is installed.
Doing a pip install does not remove the error either, so you must install Tk through your package manager as mentioned above.
|
I'm on Manjaro, using GNOME 3 on Wayland. After installing `tk` I got an error about Xorg. After some searching, I found I needed to install `python-pygubu`, per [Visual editor for creating GUI in Python 3 tkinter](https://bbs.archlinux.org/viewtopic.php?id=221077).
Then came another error, [Gtk-WARNING \*\*: Unable to locate theme engine in module\_path: "murrine"](https://ubuntuforums.org/showthread.php?t=2061142). The solution, found at that link, was to install `gtk-engine-murrine`.
|
48,504,746
|
I am trying to add a **GUI** input box and I found out that the way you do that is by using a module called `tkinter`. While I was trying to install it on my arch linux machine through the `ActivePython` package I got the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/tkinter/__init__.py", line 36, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
ImportError: libtk8.6.so: cannot open shared object file: No such file or directory
shell returned 1\
```
|
2018/01/29
|
[
"https://Stackoverflow.com/questions/48504746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9219560/"
] |
Install Tk through the command line:
```
sudo pacman -S tk        # Arch
sudo apt-get install tk  # Debian/Ubuntu
```
depending on your OS. It will then work.
```
import tkinter
```
or
```
import turtle  # (turtle uses tkinter as a dependency)
```
reproduces the error before Tk is installed.
Doing a pip install does not remove the error either, so you must install Tk through your package manager as mentioned above.
|
If you happen to be working with [Alpine](https://alpinelinux.org/) then the below command will install `tk`.
```
apk add tk
```
|
58,702,300
|
I have a Python script that logs in to SharePoint (using the office365-rest-python-client package) and downloads a file. I would like to convert the script to an executable so I can share it with non-technical people. The Python code runs fine, but when I convert it to an exe using PyInstaller and try to run it, it gives me a FileNotFoundError.
I am relatively new to Python. I tried a couple of tutorials and solutions found online, but no luck. Any suggestions would be appreciated.
Thanks!
```
Traceback (most recent call last):
File "test.py", line 107, in <module>
File "test.py", line 35, in SPLogin
File "site-packages\office365\runtime\auth\authentication_context.py", line 18, in acquire_token_for_user
File "site-packages\office365\runtime\auth\saml_token_provider.py", line 57, in acquire_token
File "site-packages\office365\runtime\auth\saml_token_provider.py", line 82, in acquire_service_token
File "site-packages\office365\runtime\auth\saml_token_provider.py", line 147, in prepare_security_token_request
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\foo\\AppData\\Local\\Temp\\_MEI66362\\office365\\runtime\\auth\\SAML.xml'
[6664] Failed to execute script test
```
See below spec file.
SAML.xml location: C:\Users\Foo\AppData\Local\Programs\Python\Python37-32\Lib\site-packages\office365\runtime\auth\SAML.xml
```
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(['test.py'],
             pathex=['C:\\Users\\Foo\\Downloads\\sptest\\newbuild'],
             binaries=[],
             datas=[],
             hiddenimports=[],
             hookspath=['.'],
             runtime_hooks=[],
             excludes=[],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher,
             noarchive=False)
pyz = PYZ(a.pure, a.zipped_data,
          cipher=block_cipher)
exe = EXE(pyz,
          a.scripts,
          a.binaries,
          a.zipfiles,
          a.datas,
          [],
          name='test',
          debug=False,
          bootloader_ignore_signals=False,
          strip=False,
          upx=True,
          upx_exclude=[],
          runtime_tmpdir=None,
          console=True )
```
|
2019/11/04
|
[
"https://Stackoverflow.com/questions/58702300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10777463/"
] |
Create a copy of `SAML.xml` (in my test case, right next to my python script `test0.py`); you can copy/paste from [this page](https://raw.githubusercontent.com/vgrem/Office365-REST-Python-Client/master/office365/runtime/auth/providers/templates/SAML.xml). Then run:
```
pyinstaller --onefile --add-data "SAML.xml;office365/runtime/auth" test0.py
```
|
The `_MEI66362` folder is created when you execute a `PyInstaller`-created `.exe`. It will contain everything that `PyInstaller` determined is needed by your application. However, it cannot deduce every file that your app needs; in some cases you must tell `PyInstaller` about needed resources. You can use the `--add-data` and `--add-binary` options (or the `datas` and `binaries` class members in a `.spec` file). See the documentation [here](https://pyinstaller.readthedocs.io/en/stable/usage.html#what-to-bundle-where-to-search) and [here](https://pyinstaller.readthedocs.io/en/stable/spec-files.html#adding-files-to-the-bundle).
In your `datas=` statement, the second argument is where the file should be saved in your executable, so you must place it in the folder where `saml_token_provider.py` is looking for it. I think you should use something like `datas=[ ('/pathtofolder/SAML.xml', 'office365/runtime/auth') ],`.
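Separately, if your *own* code (rather than a library) needs to read a bundled data file at runtime, note that a `--onefile` executable unpacks its data under `sys._MEIPASS`. A common helper for locating such files (a generic sketch, not specific to office365) is:
```
import os
import sys

def resource_path(relative_path):
    # When frozen by PyInstaller, bundled data files are unpacked into a
    # temporary folder exposed as sys._MEIPASS; when running unfrozen,
    # fall back to the directory containing this script.
    base = getattr(sys, '_MEIPASS', os.path.dirname(os.path.abspath(__file__)))
    return os.path.join(base, relative_path)
```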
|
62,054,092
|
I have looked [here](https://stackoverflow.com/questions/1475123/easiest-way-to-turn-a-list-into-an-html-table-in-python) but the solution is still not working out for me...
I have `2 lists`
```py
list1 = ['src_table', 'error_response_code', 'error_count', 'max_dt']
list2 = ['src_periods_43200', 404, 21, datetime.datetime(2020, 5, 26, 21, 10, 7),
         'src_periods_86400', 404, 19, datetime.datetime(2020, 5, 25, 21, 10, 7)]
```
The `list1` carries the `column names` of the `HTML` table.
The second `list2` carries the `table data`.
How do I generate the `HTML table` out of these 2 lists so that the first list is used for the `column names` and the second as the `table data` (row-wise)?
the result should be:
```html
src_table | error_response_code | error_count | max_dt |
src_periods_43200 | 404 | 21 | 2020-5-26 21:10:7 |
src_periods_43200 | 404 | 19 | 2020-5-25 21:10:7 |
```
many thanks
|
2020/05/27
|
[
"https://Stackoverflow.com/questions/62054092",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3107664/"
] |
This should do it:
```
import pandas as pd
import datetime

list1 = ['src_table', 'error_response_code', 'error_count', 'max_dt']
list2 = [
    'src_periods_43200', 404, 21, datetime.datetime(2020, 5, 26, 21, 10, 7),
    'src_periods_86400', 404, 19, datetime.datetime(2020, 5, 25, 21, 10, 7)
]

index_break = len(list1)
if len(list2) % index_break != 0:
    raise Exception('Not enough data.')

staged_list = []
current_list = []
for idx in range(0, len(list2)):
    current_list.append(list2[idx])
    if len(current_list) == index_break:
        staged_list.append(current_list.copy())
        current_list = []

df = pd.DataFrame(data=staged_list, columns=list1)
print(df.to_html())
```
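As a side note, the chunking loop above can be written more compactly with list slicing, using the same `list2` and `index_break` names:
```
staged_list = [list2[i:i + index_break] for i in range(0, len(list2), index_break)]
```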
|
You can easily write your own function for that.
Something like:
```py
import datetime

list1 = ['src_table', 'error_response_code', 'error_count', 'max_dt']
list2 = ['src_periods_43200', 404, 21, datetime.datetime(2020, 5, 26, 21, 10, 7), 'src_periods_86400', 404, 19, datetime.datetime(2020, 5, 25, 21, 10, 7)]

print('<table>')
print('<thead><tr>')
for li in list1:
    print(f'<th>{li}</th>')
print('</tr></thead>')
print('<tbody>')
for i in range(0, int(len(list2) / 4)):
    print('<tr>')
    print(f'<td>{list2[4*i+0]}</td>')
    print(f'<td>{list2[4*i+1]}</td>')
    print(f'<td>{list2[4*i+2]}</td>')
    print(f'<td>{list2[4*i+3]}</td>')
    print('</tr>')
print('</tbody>')
print('</table>')
```
|
48,529,567
|
This is my code:
```
while True:
    chosenUser = raw_input("Enter a username: ")
    with open("userDetails.txt", "r") as userDetailsFile:
        for line in userDetailsFile:
            if chosenUser in line:
                print "\n"
                print chosenUser, "has taken previous tests. "
                break
            else:
                print "That username is not registered."
```
Even after I enter a username and it outputs results, the loop continues and asks me to input the username again.
I have recently asked a similar [question](https://stackoverflow.com/questions/48358686/while-loop-not-breaking-using-python) but got it working myself. This one is not working no matter what I try.
Any ideas how to fix it?
Note: `userDetailsFile` is a text file that appears earlier in the program.
The problem might be obvious but I'm quite new to Python so I'm sorry if I'm wasting anyone's time.
|
2018/01/30
|
[
"https://Stackoverflow.com/questions/48529567",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9244343/"
] |
As others have noted, the primary problem is that the `break` only breaks from the inner `for` loop, not from the outer `while` loop, which can be fixed using e.g. a boolean variable or `return`.
But unless you want the "not registered" line to be printed for *every* line that does not contain the user's name, you should use [`for/else`](https://docs.python.org/2/tutorial/controlflow.html#break-and-continue-statements-and-else-clauses-on-loops) instead of `if/else`. However, instead of using a `for` loop, you could also just use `next` with a generator expression to get the right line (if any).
```
while True:
    chosenUser = raw_input("Enter a username: ")
    with open("userDetails.txt", "r") as userDetailsFile:
        lineWithUser = next((line for line in userDetailsFile if chosenUser in line), None)
        if lineWithUser is not None:
            print "\n"
            print chosenUser, "has taken previous tests. "
            break  # will break from while now
        else:
            print "That username is not registered."
```
Or if you do not actually need the `lineWithUser`, just use `any`:
```
while True:
    chosenUser = raw_input("Enter a username: ")
    with open("userDetails.txt", "r") as userDetailsFile:
        if any(chosenUser in line for line in userDetailsFile):
            print "\n"
            print chosenUser, "has taken previous tests. "
            break  # will break from while now
        else:
            print "That username is not registered."
```
This way, the code is also much more compact and easier to read/understand what it's doing.
|
The easiest way to work around this is to make the outer `while` alterable by using a boolean variable, and toggling it:
```
loop = True
while loop:
    for x in y:
        if exit_condition:
            loop = False
            break
```
That way you can stop the outer loop from within an inner one.
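Applied to the code in the question, a minimal sketch of that pattern (keeping the question's Python 2 syntax) might be:
```
found = False
while not found:
    chosenUser = raw_input("Enter a username: ")
    with open("userDetails.txt", "r") as userDetailsFile:
        for line in userDetailsFile:
            if chosenUser in line:
                print "\n"
                print chosenUser, "has taken previous tests. "
                found = True  # stops the outer while loop
                break
    if not found:
        print "That username is not registered."
```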
|
48,529,567
|
This is my code:
```
while True:
    chosenUser = raw_input("Enter a username: ")
    with open("userDetails.txt", "r") as userDetailsFile:
        for line in userDetailsFile:
            if chosenUser in line:
                print "\n"
                print chosenUser, "has taken previous tests. "
                break
            else:
                print "That username is not registered."
```
Even after I enter a username and it outputs results, the loop continues and asks me to input the username again.
I have recently asked a similar [question](https://stackoverflow.com/questions/48358686/while-loop-not-breaking-using-python) but got it working myself. This one is not working no matter what I try.
Any ideas how to fix it?
Note: `userDetailsFile` is a text file that appears earlier in the program.
The problem might be obvious but I'm quite new to Python so I'm sorry if I'm wasting anyone's time.
|
2018/01/30
|
[
"https://Stackoverflow.com/questions/48529567",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9244343/"
] |
As others have noted, the primary problem is that the `break` only breaks from the inner `for` loop, not from the outer `while` loop, which can be fixed using e.g. a boolean variable or `return`.
But unless you want the "not registered" line to be printed for *every* line that does not contain the user's name, you should use [`for/else`](https://docs.python.org/2/tutorial/controlflow.html#break-and-continue-statements-and-else-clauses-on-loops) instead of `if/else`. However, instead of using a `for` loop, you could also just use `next` with a generator expression to get the right line (if any).
```
while True:
    chosenUser = raw_input("Enter a username: ")
    with open("userDetails.txt", "r") as userDetailsFile:
        lineWithUser = next((line for line in userDetailsFile if chosenUser in line), None)
        if lineWithUser is not None:
            print "\n"
            print chosenUser, "has taken previous tests. "
            break  # will break from while now
        else:
            print "That username is not registered."
```
Or if you do not actually need the `lineWithUser`, just use `any`:
```
while True:
    chosenUser = raw_input("Enter a username: ")
    with open("userDetails.txt", "r") as userDetailsFile:
        if any(chosenUser in line for line in userDetailsFile):
            print "\n"
            print chosenUser, "has taken previous tests. "
            break  # will break from while now
        else:
            print "That username is not registered."
```
This way, the code is also much more compact and easier to read/understand what it's doing.
|
The `break` breaks out of the inner `for` loop only.
Better put everything in a function and stop your `while` loop with a `return`:
```
def main():
    while True:
        chosenUser = raw_input("Enter a username: ")
        with open("userDetails.txt", "r") as userDetailsFile:
            for line in userDetailsFile:
                if chosenUser in line:
                    print "\n"
                    print chosenUser, "has taken previous tests. "
                    return  # stops the while loop
        print "That username is not registered."

main()
print 'keep going here'
```
|
1,465,036
|
I have a shell that runs CentOS.
For a project I'm doing, I need Python 2.5+, but CentOS is pretty dependent on 2.4.
From what I've read, a number of things will break if you upgrade to 2.5.
I want to install 2.5 separately from 2.4, but I'm not sure how to do it. So far I've downloaded the source tarball, untarred it, and did a `./configure --prefix=/opt`, which is where I want it to end up. Can I now just `make, make install`? Or is there more?
|
2009/09/23
|
[
"https://Stackoverflow.com/questions/1465036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/168500/"
] |
```
# yum groupinstall "Development tools"
# yum install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel
```
**Download and install Python 3.3.0**
```
# wget http://python.org/ftp/python/3.3.0/Python-3.3.0.tar.bz2
# tar xf Python-3.3.0.tar.bz2
# cd Python-3.3.0
# ./configure --prefix=/usr/local
# make && make altinstall
```
**Download and install Distribute for Python 3.3**
```
# wget http://pypi.python.org/packages/source/d/distribute/distribute-0.6.35.tar.gz
# tar xf distribute-0.6.35.tar.gz
# cd distribute-0.6.35
# python3.3 setup.py install
```
**Install and use virtualenv for Python 3.3**
```
# easy_install-3.3 virtualenv
# virtualenv-3.3 --distribute otherproject
New python executable in otherproject/bin/python3.3
Also creating executable in otherproject/bin/python
Installing distribute...................done.
Installing pip................done.
# source otherproject/bin/activate
# python --version
Python 3.3.0
```
|
I uninstalled the original version of Python (2.6.6) and installed 2.7 (with `make && make altinstall`), but then when I tried to install something with yum, it didn't work.
So I solved this issue as follows:
1. `# ln -s /usr/local/bin/python /usr/bin/python`
2. Download the RPM package python-2.6.6-36.el6.i686.rpm from <http://rpm.pbone.net/index.php3/stat/4/idpl/20270470/dir/centos_6/com/python-2.6.6-36.el6.i686.rpm.html>
3. Execute as root `rpm -Uvh python-2.6.6-36.el6.i686.rpm`
Done
|
1,465,036
|
I have a shell that runs CentOS.
For a project I'm doing, I need Python 2.5+, but CentOS is pretty dependent on 2.4.
From what I've read, a number of things will break if you upgrade to 2.5.
I want to install 2.5 separately from 2.4, but I'm not sure how to do it. So far I've downloaded the source tarball, untarred it, and did a `./configure --prefix=/opt`, which is where I want it to end up. Can I now just `make, make install`? Or is there more?
|
2009/09/23
|
[
"https://Stackoverflow.com/questions/1465036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/168500/"
] |
[Chris Lea](http://chrislea.com/2009/09/09/easy-python-2-6-django-on-centos-5/) provides a YUM repository for python26 RPMs that can co-exist with the 'native' 2.4 that is needed for quite a few admin tools on CentOS.
Quick instructions that worked at least for me:
```
$ sudo rpm -Uvh http://yum.chrislea.com/centos/5/i386/chl-release-5-3.noarch.rpm
$ sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CHL
$ sudo yum install python26
$ python26
```
|
```
rpm -Uvh http://yum.chrislea.com/centos/5/i386/chl-release-5-3.noarch.rpm
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CHL
rpm -Uvh ftp://ftp.pbone.net/mirror/centos.karan.org/el5/extras/testing/i386/RPMS/libffi-3.0.5-1.el5.kb.i386.rpm
yum install python26
python26
```
for those that just don't know :=)
|
1,465,036
|
I have a shell that runs CentOS.
For a project I'm doing, I need Python 2.5+, but CentOS is pretty dependent on 2.4.
From what I've read, a number of things will break if you upgrade to 2.5.
I want to install 2.5 separately from 2.4, but I'm not sure how to do it. So far I've downloaded the source tarball, untarred it, and did a `./configure --prefix=/opt`, which is where I want it to end up. Can I now just `make, make install`? Or is there more?
|
2009/09/23
|
[
"https://Stackoverflow.com/questions/1465036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/168500/"
] |
No need to use yum or make your own RPM. Build Python 2.6 from source:
```
wget https://www.python.org/ftp/python/2.6.6/Python-2.6.6.tgz
tar -zxvf Python-2.6.6.tgz
cd Python-2.6.6
./configure && make && make install
```
If you hit a **dependency error**, install the compiler toolchain:
```
yum install gcc
```
Add the install location (`/usr/local/bin` by default) to your PATH in `~/.bash_profile`.
It will not break `yum` or anything else that depends on `python24`.
|
```
# yum groupinstall "Development tools"
# yum install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel
```
**Download and install Python 3.3.0**
```
# wget http://python.org/ftp/python/3.3.0/Python-3.3.0.tar.bz2
# tar xf Python-3.3.0.tar.bz2
# cd Python-3.3.0
# ./configure --prefix=/usr/local
# make && make altinstall
```
**Download and install Distribute for Python 3.3**
```
# wget http://pypi.python.org/packages/source/d/distribute/distribute-0.6.35.tar.gz
# tar xf distribute-0.6.35.tar.gz
# cd distribute-0.6.35
# python3.3 setup.py install
```
**Install and use virtualenv for Python 3.3**
```
# easy_install-3.3 virtualenv
# virtualenv-3.3 --distribute otherproject
New python executable in otherproject/bin/python3.3
Also creating executable in otherproject/bin/python
Installing distribute...................done.
Installing pip................done.
# source otherproject/bin/activate
# python --version
Python 3.3.0
```
|
1,465,036
|
I have a shell that runs CentOS.
For a project I'm doing, I need Python 2.5+, but CentOS is pretty dependent on 2.4.
From what I've read, a number of things will break if you upgrade to 2.5.
I want to install 2.5 separately from 2.4, but I'm not sure how to do it. So far I've downloaded the source tarball, untarred it, and did a `./configure --prefix=/opt`, which is where I want it to end up. Can I now just `make, make install`? Or is there more?
|
2009/09/23
|
[
"https://Stackoverflow.com/questions/1465036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/168500/"
] |
No, that's it. You might want to make sure you have all optional library headers installed too so you don't have to recompile it later. They are listed in the documentation I think.
Also, you can install it even in the standard path if you do `make altinstall`. That way it won't override your current default "python".
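In concrete terms, the `altinstall` route the answer describes would look roughly like this (a sketch):
```
./configure          # default prefix (/usr/local)
make
sudo make altinstall # installs as "python2.5", leaving the default "python" (2.4) untouched
```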
|
The missing dependency libffi.so.5 is here:
<ftp://ftp.pbone.net/mirror/centos.karan.org/el5/extras/testing/i386/RPMS/libffi-3.0.5-1.el5.kb.i386.rpm>
|
1,465,036
|
I have a shell that runs CentOS.
For a project I'm doing, I need Python 2.5+, but CentOS is pretty dependent on 2.4.
From what I've read, a number of things will break if you upgrade to 2.5.
I want to install 2.5 separately from 2.4, but I'm not sure how to do it. So far I've downloaded the source tarball, untarred it, and did a `./configure --prefix=/opt`, which is where I want it to end up. Can I now just `make, make install`? Or is there more?
|
2009/09/23
|
[
"https://Stackoverflow.com/questions/1465036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/168500/"
] |
If you want to make it easier on yourself, there are CentOS RPMs for new Python versions floating around the net. E.g. see:
<http://www.geekymedia.com/python_26_centos.html>
|
I uninstalled the original version of Python (2.6.6) and installed 2.7 (with `make && make altinstall`), but then when I tried to install something with yum, it didn't work.
So I solved this issue as follows:
1. `# ln -s /usr/local/bin/python /usr/bin/python`
2. Download the RPM package python-2.6.6-36.el6.i686.rpm from <http://rpm.pbone.net/index.php3/stat/4/idpl/20270470/dir/centos_6/com/python-2.6.6-36.el6.i686.rpm.html>
3. Execute as root `rpm -Uvh python-2.6.6-36.el6.i686.rpm`
Done
|
1,465,036
|
I have a shell that runs CentOS.
For a project I'm doing, I need Python 2.5+, but CentOS is pretty dependent on 2.4.
From what I've read, a number of things will break if you upgrade to 2.5.
I want to install 2.5 separately from 2.4, but I'm not sure how to do it. So far I've downloaded the source tarball, untarred it, and did a `./configure --prefix=/opt`, which is where I want it to end up. Can I now just `make, make install`? Or is there more?
|
2009/09/23
|
[
"https://Stackoverflow.com/questions/1465036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/168500/"
] |
I uninstalled the original version of Python (2.6.6) and installed 2.7 (with `make && make altinstall`), but then when I tried to install something with yum, it didn't work.
So I solved this issue as follows:
1. `# ln -s /usr/local/bin/python /usr/bin/python`
2. Download the RPM package python-2.6.6-36.el6.i686.rpm from <http://rpm.pbone.net/index.php3/stat/4/idpl/20270470/dir/centos_6/com/python-2.6.6-36.el6.i686.rpm.html>
3. Execute as root `rpm -Uvh python-2.6.6-36.el6.i686.rpm`
Done
|
Type the following commands on the terminal to install Python 3.6 on CentOS 7:
```
$ sudo yum install https://centos7.iuscommunity.org/ius-release.rpm
```
Then do :
```
$ sudo yum install python36u
```
You can also install any version instead of 3.6 (if you want to) by just replacing 36 with your version number.
|
1,465,036
|
I have a shell that runs CentOS.
For a project I'm doing, I need Python 2.5+, but CentOS is pretty dependent on 2.4.
From what I've read, a number of things will break if you upgrade to 2.5.
I want to install 2.5 separately from 2.4, but I'm not sure how to do it. So far I've downloaded the source tarball, untarred it, and did a `./configure --prefix=/opt`, which is where I want it to end up. Can I now just `make, make install`? Or is there more?
|
2009/09/23
|
[
"https://Stackoverflow.com/questions/1465036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/168500/"
] |
Once you have installed your Python version (in this case python2.6), issue this command to create your `virtualenv`:
```
virtualenv -p /usr/bin/python2.6 /your/virtualenv/path/here/
```
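To use it afterwards, activate the environment in the usual way:
```
source /your/virtualenv/path/here/bin/activate
python --version   # should now report 2.6.x
```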
|
```
rpm -Uvh http://yum.chrislea.com/centos/5/i386/chl-release-5-3.noarch.rpm
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CHL
rpm -Uvh ftp://ftp.pbone.net/mirror/centos.karan.org/el5/extras/testing/i386/RPMS/libffi-3.0.5-1.el5.kb.i386.rpm
yum install python26
python26
```
for those that just don't know :=)
|
1,465,036
|
I have a shell that runs CentOS.
For a project I'm doing, I need Python 2.5+, but CentOS is pretty dependent on 2.4.
From what I've read, a number of things will break if you upgrade to 2.5.
I want to install 2.5 separately from 2.4, but I'm not sure how to do it. So far I've downloaded the source tarball, untarred it, and did a `./configure --prefix=/opt`, which is where I want it to end up. Can I now just `make, make install`? Or is there more?
|
2009/09/23
|
[
"https://Stackoverflow.com/questions/1465036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/168500/"
] |
You could also use the [EPEL-repository](http://fedoraproject.org/wiki/EPEL), and then do `sudo yum install python26` to install python 2.6
|
Type the following commands on the terminal to install Python 3.6 on CentOS 7:
```
$ sudo yum install https://centos7.iuscommunity.org/ius-release.rpm
```
Then do :
```
$ sudo yum install python36u
```
You can also install any version instead of 3.6 (if you want to) by just replacing 36 with your version number.
|
1,465,036
|
I have a shell that runs CentOS.
For a project I'm doing, I need Python 2.5+, but CentOS is pretty dependent on 2.4.
From what I've read, a number of things will break if you upgrade to 2.5.
I want to install 2.5 separately from 2.4, but I'm not sure how to do it. So far I've downloaded the source tarball, untarred it, and did a `./configure --prefix=/opt`, which is where I want it to end up. Can I now just `make, make install`? Or is there more?
|
2009/09/23
|
[
"https://Stackoverflow.com/questions/1465036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/168500/"
] |
No need to use yum or make your own RPM. Build Python 2.6 from source:
```
wget https://www.python.org/ftp/python/2.6.6/Python-2.6.6.tgz
tar -zxvf Python-2.6.6.tgz
cd Python-2.6.6
./configure && make && make install
```
If you hit a **dependency error**, install the compiler toolchain:
```
yum install gcc
```
Add the install location (`/usr/local/bin` by default) to your PATH in `~/.bash_profile`.
It will not break `yum` or anything else that depends on `python24`.
|
I uninstalled the original version of Python (2.6.6) and installed 2.7 (with `make && make altinstall`), but then when I tried to install something with yum, it didn't work.
So I solved this issue as follows:
1. `# ln -s /usr/local/bin/python /usr/bin/python`
2. Download the RPM package python-2.6.6-36.el6.i686.rpm from <http://rpm.pbone.net/index.php3/stat/4/idpl/20270470/dir/centos_6/com/python-2.6.6-36.el6.i686.rpm.html>
3. Execute as root `rpm -Uvh python-2.6.6-36.el6.i686.rpm`
Done
|
1,465,036
|
I have a shell that runs CentOS.
For a project I'm doing, I need Python 2.5+, but CentOS is pretty dependent on 2.4.
From what I've read, a number of things will break if you upgrade to 2.5.
I want to install 2.5 separately from 2.4, but I'm not sure how to do it. So far I've downloaded the source tarball, untarred it, and did a `./configure --prefix=/opt`, which is where I want it to end up. Can I now just `make, make install`? Or is there more?
|
2009/09/23
|
[
"https://Stackoverflow.com/questions/1465036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/168500/"
] |
No need to use yum or make your own RPM. Build Python 2.6 from source:
```
wget https://www.python.org/ftp/python/2.6.6/Python-2.6.6.tgz
tar -zxvf Python-2.6.6.tgz
cd Python-2.6.6
./configure && make && make install
```
If you hit a **dependency error**, install the compiler toolchain:
```
yum install gcc
```
Add the install location (`/usr/local/bin` by default) to your PATH in `~/.bash_profile`.
It will not break `yum` or anything else that depends on `python24`.
|
```
rpm -Uvh http://yum.chrislea.com/centos/5/i386/chl-release-5-3.noarch.rpm
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CHL
rpm -Uvh ftp://ftp.pbone.net/mirror/centos.karan.org/el5/extras/testing/i386/RPMS/libffi-3.0.5-1.el5.kb.i386.rpm
yum install python26
python26
```
for those that just don't know :=)
|
71,036,475
|
There is this page:
<https://www.amazon.de/sp?marketplaceID=A1PA6795UKMFR9&seller=A135E02VGPPVQ&isAmazonFulfilled=1&ref=dp_merchant_link>
I can access it in a browser just fine, but python requests returns a 404 status.
Until yesterday this page worked with python requests as well, but from today it does not work for me and returns a 404 status.
```py
import requests
headers = {
    'Connection': 'keep-alive',
    'rtt': '300',
    'downlink': '0.4',
    'ect': '3g',
    'sec-ch-ua': '" Not;A Brand";v="99", "Google Chrome";v="97", "Chromium";v="97"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Windows"',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'Sec-Fetch-Site': 'none',
    'Sec-Fetch-Mode': 'navigate',
    'Sec-Fetch-User': '?1',
    'Sec-Fetch-Dest': 'document',
    'Accept-Language': 'en-US,en;q=0.9,ko;q=0.8',
}
response = requests.get(
    'https://www.amazon.de/sp?marketplaceID=A1PA6795UKMFR9&seller=A135E02VGPPVQ&isAmazonFulfilled=1&ref=dp_merchant_link',
    headers=headers
)
print(response.status_code) # 404
```
I would really appreciate any help.
Regards.
|
2022/02/08
|
[
"https://Stackoverflow.com/questions/71036475",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
It works just fine for me. Maybe you can try changing your user agent.
```
import requests
from fake_useragent import UserAgent # fake user agent library
# random user-agent
ua = UserAgent()
user_agent = ua.random
headers = {
    'Connection': 'keep-alive',
    'rtt': '300',
    'downlink': '0.4',
    'ect': '3g',
    'sec-ch-ua': '" Not;A Brand";v="99", "Google Chrome";v="97", "Chromium";v="97"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Windows"',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': user_agent,
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'Sec-Fetch-Site': 'none',
    'Sec-Fetch-Mode': 'navigate',
    'Sec-Fetch-User': '?1',
    'Sec-Fetch-Dest': 'document',
    'Accept-Language': 'en-US,en;q=0.9,ko;q=0.8',
}
response = requests.get(
    'https://www.amazon.de/sp?marketplaceID=A1PA6795UKMFR9&seller=A135E02VGPPVQ&isAmazonFulfilled=1&ref=dp_merchant_link',
    headers=headers
)
print(response.status_code)
```
|
I suggest you add a **Referer URL in the header**. Generally I don't think Amazon allows scraping, but use Selenium if you want to scrape it more easily; it may be overkill, but it offers more customization.
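For example (a sketch; the exact referer value is an assumption):
```
headers['Referer'] = 'https://www.amazon.de/'  # illustrative referer value
response = requests.get(
    'https://www.amazon.de/sp?marketplaceID=A1PA6795UKMFR9&seller=A135E02VGPPVQ&isAmazonFulfilled=1&ref=dp_merchant_link',
    headers=headers
)
```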
|
67,464,761
|
I am using the Python Flask framework to develop APIs. I am planning to use MongoDB as the backend database. How can I connect a MongoDB database to Python? Is there any built-in library?
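(For reference: there is no built-in MongoDB library in Python's standard library; the de facto standard driver is PyMongo. A minimal connection sketch, assuming a local MongoDB instance and illustrative database/collection names:)
```
from pymongo import MongoClient  # pip install pymongo

client = MongoClient('mongodb://localhost:27017/')
db = client['mydb']                        # illustrative database name
db['users'].insert_one({'name': 'alice'})  # collections are created on first use
print(db['users'].find_one({'name': 'alice'}))
```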
|
2021/05/10
|
[
"https://Stackoverflow.com/questions/67464761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15882833/"
] |
You are almost there. It is better to keep control over appearance/disappearance inside the menu view. Find the fixed parts below; the relevant places are highlighted with comments in the code.
Tested with Xcode 12.5 / iOS 14.5.
*Note: the demo was prepared with "Simulator > Debug > Slow Animations" turned on, for better visibility.*
[](https://i.stack.imgur.com/J7kcP.gif)
```
struct LeftNavigationView: View {
    @EnvironmentObject var viewModel: ViewModel
    var body: some View {
        ZStack {
            if self.viewModel.isLeftMenuVisible {   // << here !!
                Color.black.opacity(0.8)
                    .ignoresSafeArea()
                    .transition(.opacity)
                VStack {
                    Button(action: {
                        self.viewModel.isLeftMenuVisible.toggle()
                    }, label: {
                        Text("Close Me")
                    })
                }
                .frame(maxWidth: .infinity, maxHeight: .infinity)
                .background(Color.white)
                .cornerRadius(10)
                .padding(.trailing)
                .padding(.trailing)
                .padding(.trailing)
                .padding(.trailing)
                .transition(
                    .asymmetric(
                        insertion: .move(edge: .leading),
                        removal: .move(edge: .leading)
                    )
                )
                .zIndex(1)   // << force keep at top where removed !!
            }
        }
        .frame(maxWidth: .infinity, maxHeight: .infinity)
        .animation(.default, value: self.viewModel.isLeftMenuVisible)   // << here !!
    }
}

struct ContentView: View {
    @StateObject var viewModel = ViewModel()
    var body: some View {
        ZStack {
            NavigationView {
                VStack(alignment: .leading) {
                    Button(action: {
                        self.viewModel.isLeftMenuVisible.toggle()
                    }, label: {
                        Text("Button")
                    })
                }
                .padding(.horizontal)
                .navigationTitle("ContentView")
            }
            // included here, everything else is managed inside (!) view
            LeftNavigationView()
        }
        .environmentObject(self.viewModel)
    }
}
```
|
You should use this library [KYDrawerController](https://github.com/ykyouhei/KYDrawerController)
Declare ContentView and MenuView in SceneDelegate:
```
import UIKit
import SwiftUI
import KYDrawerController

class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?
    let drawerController = KYDrawerController(drawerDirection: .left, drawerWidth: 300)

    func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
        drawerController.mainViewController = UIHostingController(rootView: ContentView())
        drawerController.drawerViewController = UIHostingController(rootView: LeftNavigationView())
        if let windowScene = scene as? UIWindowScene {
            let window = UIWindow(windowScene: windowScene)
            let vc = drawerController
            vc.view.frame = window.bounds
            window.rootViewController = vc
            self.window = window
            (UIApplication.shared.delegate as? AppDelegate)?.self.window = window
            window.makeKeyAndVisible()
        }
        //...
    }
    //...
}
```
|
12,816,464
|
I am using the web.py framework to create a simple web application.
I want to create a radio button, so I wrote the following code:
```
import web
from web import form
from web.contrib.auth import DBAuth
import MySQLdb as mdb

render = web.template.render('templates/')

urls = (
    '/project_details', 'Project_Details',
)

class Project_Details:
    project_details = form.Form(
        form.Radio('Home Page'),
        form.Radio('Content'),
        form.Radio('Contact Us'),
        form.Radio('Sitemap'),
    )

    def GET(self):
        project_details = self.project_details()
        return render.projectdetails(project_details)
```
When I run the code and open the URL `localhost:8080`, I see the following error:
```
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/web/application.py", line 237, in process
return p(lambda: process(processors))
File "/usr/lib/python2.7/site-packages/web/application.py", line 565, in processor
h()
File "/usr/lib/python2.7/site-packages/web/application.py", line 661, in __call__
self.check(mod)
File "/usr/lib/python2.7/site-packages/web/application.py", line 680, in check
reload(mod)
File "/home/local/user/python_webcode/index.py", line 68, in <module>
class Project_Details:
File "/home/local/user/python_webcode/index.py", line 72, in Project_Details
form.Radio('password'),
TypeError: __init__() takes at least 3 arguments (2 given)
```
What parameters need to be passed to the radio button in order to avoid this error?
|
2012/10/10
|
[
"https://Stackoverflow.com/questions/12816464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1342109/"
] |
Looking at the [source](https://github.com/webpy/webpy/blob/master/web/form.py#L291), it looks like you have to use one `Radio` constructor for all of your items as the same `Radio` object will actually generate multiple `<input>` elements.
Try something like:
```
project_details = form.Form(
    form.Radio('details', ['Home Page', 'Content', 'Contact Us', 'Sitemap']),
)
```
|
Here's what I was able to decipher: `checked='checked'` seems to select a random (the last?) item in the list. Without a default selection, my testing came back with a NoneType if none of the radio buttons was selected.
```
project_details = form.Form(
    form.Radio('selections', ['Home Page', 'Content', 'Contact Us', 'Sitemap'], checked='checked'),
    form.Button("Submit")
)
```
To access your user selection as a string...
```
result = project_details['selections'].value
```
If you want to use JavaScript while your template is active, you can add `onchange='myFunction()'` to the end of the Radio line item. I'm also assigning an id to each element, to avoid frustration with my getElementById calls, so my declaration looks like this:
```
project_details = form.Form(
    form.Radio('selections', ['Home Page', 'Content', 'Contact Us', 'Sitemap'], checked='checked', onchange='myFunction()', id='selections'),
    form.Button("Submit")
)
```
|
30,226,891
|
I am new to both Python and Stack Overflow, so please bear that in mind. I tried to do this myself and managed it, but it only works if I hardcode the hash of the previous version, like the one in hash1, and then compare it with the hash of the current version. I would like the program to save the hash of the current version every time, compare it with the newer version on every run, and do something if the file has changed.
This is my code
```
import hashlib
hash1 = '3379b3b9b9c82650831db2aba0cf4e99'
hasher = hashlib.md5()
with open('word.txt', 'rb') as afile:
    buf = afile.read()
    hasher.update(buf)
hash2 = hasher.hexdigest()
if hash1 == hash2:
    print('same version')
else:
    print('different version')
```
|
2015/05/13
|
[
"https://Stackoverflow.com/questions/30226891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4897764/"
] |
Simply save the hash to a file such as file.txt; then, when you need to compare, read from your file.txt and compare the two strings.
Here is an example of how to read and write to files in python.
<http://www.pythonforbeginners.com/files/reading-and-writing-files-in-python>
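A minimal sketch of that approach, building on the question's code (the `hash.txt` filename is just an example):
```
import hashlib
import os

HASH_FILE = 'hash.txt'  # where the previous hash is stored

with open('word.txt', 'rb') as afile:
    current = hashlib.md5(afile.read()).hexdigest()

previous = None
if os.path.exists(HASH_FILE):
    with open(HASH_FILE) as f:
        previous = f.read().strip()

if previous == current:
    print('same version')
else:
    print('different version')
    with open(HASH_FILE, 'w') as f:
        f.write(current)  # remember the current hash for the next run
```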
|
For relatively simple comparisons, use [filecmp](https://docs.python.org/2/library/filecmp.html). For finer control and feedback, use [difflib](https://docs.python.org/2/library/difflib.html#module-difflib), which is similar to the \*nix utility, `diff`.
|
62,776,024
|
I simply want to reorder the rows of my pandas dataframe such that `col1` matches the order of the external list elements in `my_order`.
```
d = {'col1': ['A', 'B', 'C'], 'col2': [1,2,3]}
df = pd.DataFrame(data=d)
my_order = ['B', 'C', 'A']
```
This post, [sorting by a custom list in pandas](https://stackoverflow.com/questions/23482668/sorting-by-a-custom-list-in-pandas), does the ordering work; applying it to my data produces
```
d = {'col1': ['A', 'B', 'C'], 'col2': [1,2,3]}
df = pd.DataFrame(data=d)
my_order = ['B', 'C', 'A']
df.col1 = df.col1.astype("category")
df.col1.cat.set_categories(my_order, inplace=True)
df.sort_values(["col1"])
```
However, this seems to be a wasteful amount of code relative to an R process which would simply be
```
df = data.frame(col1 = c('A','B','C'), col2 = c(1,2,3))
my_order = c('B', 'C', 'A')
df[match(my_order, df$col1),]
```
Ordering is expensive, and the Python version above takes 3 steps where R takes only 1 using the match function. Can Python not rival R in this case?
If this were done just once in my real-world example, I wouldn't care much. But this is a process that will be iterated millions of times in a web server application, so a truly minimal, inexpensive path is the best approach.
|
2020/07/07
|
[
"https://Stackoverflow.com/questions/62776024",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7273115/"
] |
```
{{ json.Date.strftime('%Y-%m-%d') }}
```
|
strftime converts a date object into a string using a format you specify.
You can do:
```
from datetime import datetime

now = datetime.now()  # <- datetime object
now_string = now.strftime('%d.%m.%Y')
```
however, if you have already converted your datetime object into a string (or are getting strings from an API or something), you can convert the string back to a datetime object using strptime
```
date_string = '2020-08-05T00:00:00'
date_obj = datetime.strptime(date_string, '%Y-%m-%dT%H:%M:%S')  # <- the format the string is currently in
```
You can do strptime first and then strftime directly on the result to effectively convert a string into a different string, but you should avoid this if possible
```
converted_string = datetime.strptime(original_string, 'old_format').strftime('new_format')
```
This topic may be confusing because when you print(now) (remember, now is a datetime object), Python automatically "represents" it and converts it into a string. However, it isn't actually a string at that point, but rather a datetime object; Python can't print that directly, so it converts it. But you won't be able to jsonify it directly, for example.
|
5,635,054
|
I have a really large Excel file and I need to delete about 20,000 rows, contingent on meeting a simple condition, and Excel won't let me delete such a complex range when using a filter. The condition is:
**If the first column contains the value, X, then I need to be able to delete the entire row.**
I'm trying to automate this using Python and xlwt, but am not quite sure where to start. Seeking some code snippets to get me started...
Grateful for any help that's out there!
|
2011/04/12
|
[
"https://Stackoverflow.com/questions/5635054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/704039/"
] |
You can use:
```
sh.Range(sh.Cells(1,1), sh.Cells(20000,1)).EntireRow.Delete()
```
which will delete rows 1 to 20,000 in an open Excel spreadsheet, so:
```
if sh.Cells(1,1).Value == 'X':
    sh.Cells(1,1).EntireRow.Delete()
```
|
I have achieved this using the pandas package:
```
import pandas as pd
#Read from Excel
xl= pd.ExcelFile("test.xls")
#Parsing Excel Sheet to DataFrame
dfs = xl.parse(xl.sheet_names[0])
#Update DataFrame as per requirement
#(Here Removing the row from DataFrame having blank value in "Name" column)
dfs = dfs[dfs['Name'] != '']
#Updating the excel sheet with the updated DataFrame
dfs.to_excel("test.xls",sheet_name='Sheet1',index=False)
```
|
5,635,054
|
I have a really large Excel file and I need to delete about 20,000 rows, contingent on meeting a simple condition, and Excel won't let me delete such a complex range when using a filter. The condition is:
**If the first column contains the value, X, then I need to be able to delete the entire row.**
I'm trying to automate this using Python and xlwt, but am not quite sure where to start. Seeking some code snippets to get me started...
Grateful for any help that's out there!
|
2011/04/12
|
[
"https://Stackoverflow.com/questions/5635054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/704039/"
] |
Don't delete. Just copy what you need.
1. read the original file
2. open a new file
3. iterate over rows of the original file (if the first column of the row does not contain the value X, add this row to the new file)
4. close both files
5. rename the new file into the original file
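Since the question mentions xlwt, here is a minimal sketch of that copy-the-rows approach using xlrd/xlwt (file and sheet names are placeholders):
```
import xlrd
import xlwt

rb = xlrd.open_workbook('big.xls')   # the original file
rs = rb.sheet_by_index(0)

wb = xlwt.Workbook()
ws = wb.add_sheet('Sheet1')

out_row = 0
for r in range(rs.nrows):
    values = rs.row_values(r)
    if values[0] == 'X':             # skip rows whose first column is X
        continue
    for c, v in enumerate(values):
        ws.write(out_row, c, v)
    out_row += 1

wb.save('filtered.xls')              # the new file
```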
|
I have achieved this using the pandas package:
```
import pandas as pd
#Read from Excel
xl= pd.ExcelFile("test.xls")
#Parsing Excel Sheet to DataFrame
dfs = xl.parse(xl.sheet_names[0])
#Update DataFrame as per requirement
#(Here Removing the row from DataFrame having blank value in "Name" column)
dfs = dfs[dfs['Name'] != '']
#Updating the excel sheet with the updated DataFrame
dfs.to_excel("test.xls",sheet_name='Sheet1',index=False)
```
|
5,635,054
|
I have a really large Excel file and I need to delete about 20,000 rows, contingent on meeting a simple condition, and Excel won't let me delete such a complex range when using a filter. The condition is:
**If the first column contains the value, X, then I need to be able to delete the entire row.**
I'm trying to automate this using Python and xlwt, but am not quite sure where to start. Seeking some code snippets to get me started...
Grateful for any help that's out there!
|
2011/04/12
|
[
"https://Stackoverflow.com/questions/5635054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/704039/"
] |
You can try using the csv reader:
<http://docs.python.org/library/csv.html>
|
I have achieved this using the pandas package:
```
import pandas as pd
#Read from Excel
xl= pd.ExcelFile("test.xls")
#Parsing Excel Sheet to DataFrame
dfs = xl.parse(xl.sheet_names[0])
#Update DataFrame as per requirement
#(Here Removing the row from DataFrame having blank value in "Name" column)
dfs = dfs[dfs['Name'] != '']
#Updating the excel sheet with the updated DataFrame
dfs.to_excel("test.xls",sheet_name='Sheet1',index=False)
```
|
5,635,054
|
I have a really large Excel file and I need to delete about 20,000 rows, contingent on meeting a simple condition, and Excel won't let me delete such a complex range when using a filter. The condition is:
**If the first column contains the value, X, then I need to be able to delete the entire row.**
I'm trying to automate this using Python and xlwt, but am not quite sure where to start. Seeking some code snippets to get me started...
Grateful for any help that's out there!
|
2011/04/12
|
[
"https://Stackoverflow.com/questions/5635054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/704039/"
] |
Don't delete. Just copy what you need.
1. read the original file
2. open a new file
3. iterate over rows of the original file (if the first column of the row does not contain the value X, add this row to the new file)
4. close both files
5. rename the new file into the original file
|
You can use:
```
sh.Range(sh.Cells(1,1), sh.Cells(20000,1)).EntireRow.Delete()
```
which will delete rows 1 to 20,000 in an open Excel spreadsheet, so:
```
if sh.Cells(1,1).Value == 'X':
    sh.Cells(1,1).EntireRow.Delete()
```
|
5,635,054
|
I have a really large Excel file and I need to delete about 20,000 rows, contingent on meeting a simple condition, and Excel won't let me delete such a complex range when using a filter. The condition is:
**If the first column contains the value, X, then I need to be able to delete the entire row.**
I'm trying to automate this using Python and xlwt, but am not quite sure where to start. Seeking some code snippets to get me started...
Grateful for any help that's out there!
|
2011/04/12
|
[
"https://Stackoverflow.com/questions/5635054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/704039/"
] |
You can try using the csv reader:
<http://docs.python.org/library/csv.html>
|
If you just need to delete the data (rather than 'getting rid of' the row, i.e. it shifts rows) you can try using my module, PyWorkbooks. You can get the most recent version here:
<https://sourceforge.net/projects/pyworkbooks/>
There is a pdf tutorial to guide you through how to use it. Happy coding!
|
5,635,054
|
I have a really large Excel file and I need to delete about 20,000 rows, contingent on meeting a simple condition, and Excel won't let me delete such a complex range when using a filter. The condition is:
**If the first column contains the value, X, then I need to be able to delete the entire row.**
I'm trying to automate this using Python and xlwt, but am not quite sure where to start. Seeking some code snippets to get me started...
Grateful for any help that's out there!
|
2011/04/12
|
[
"https://Stackoverflow.com/questions/5635054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/704039/"
] |
If you just need to delete the data (rather than 'getting rid of' the row, i.e. it shifts rows) you can try using my module, PyWorkbooks. You can get the most recent version here:
<https://sourceforge.net/projects/pyworkbooks/>
There is a pdf tutorial to guide you through how to use it. Happy coding!
|
I have achieved this using the pandas package:
```
import pandas as pd
#Read from Excel
xl= pd.ExcelFile("test.xls")
#Parsing Excel Sheet to DataFrame
dfs = xl.parse(xl.sheet_names[0])
#Update DataFrame as per requirement
#(Here Removing the row from DataFrame having blank value in "Name" column)
dfs = dfs[dfs['Name'] != '']
#Updating the excel sheet with the updated DataFrame
dfs.to_excel("test.xls",sheet_name='Sheet1',index=False)
```
|
5,635,054
|
I have a really large Excel file and I need to delete about 20,000 rows, contingent on meeting a simple condition, and Excel won't let me delete such a complex range when using a filter. The condition is:
**If the first column contains the value, X, then I need to be able to delete the entire row.**
I'm trying to automate this using Python and xlwt, but am not quite sure where to start. Seeking some code snippets to get me started...
Grateful for any help that's out there!
|
2011/04/12
|
[
"https://Stackoverflow.com/questions/5635054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/704039/"
] |
You can try using the csv reader:
<http://docs.python.org/library/csv.html>
|
You can use:
```
sh.Range(sh.Cells(1,1), sh.Cells(20000,1)).EntireRow.Delete()
```
which will delete rows 1 to 20,000 in an open Excel spreadsheet, so:
```
if sh.Cells(1,1).Value == 'X':
    sh.Cells(1,1).EntireRow.Delete()
```
|
5,635,054
|
I have a really large Excel file and I need to delete about 20,000 rows, contingent on meeting a simple condition, and Excel won't let me delete such a complex range when using a filter. The condition is:
**If the first column contains the value, X, then I need to be able to delete the entire row.**
I'm trying to automate this using Python and xlwt, but am not quite sure where to start. Seeking some code snippets to get me started...
Grateful for any help that's out there!
|
2011/04/12
|
[
"https://Stackoverflow.com/questions/5635054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/704039/"
] |
Don't delete. Just copy what you need.
1. read the original file
2. open a new file
3. iterate over rows of the original file (if the first column of the row does not contain the value X, add this row to the new file)
4. close both files
5. rename the new file into the original file
|
I like using COM objects for this kind of fun:
```
import win32com.client
from win32com.client import constants
f = r"h:\Python\Examples\test.xls"
DELETE_THIS = "X"
exc = win32com.client.gencache.EnsureDispatch("Excel.Application")
exc.Visible = 1
exc.Workbooks.Open(Filename=f)
row = 1
while True:
exc.Range("B%d" % row).Select()
data = exc.ActiveCell.FormulaR1C1
exc.Range("A%d" % row).Select()
condition = exc.ActiveCell.FormulaR1C1
if data == '':
break
elif condition == DELETE_THIS:
exc.Rows("%d:%d" % (row, row)).Select()
exc.Selection.Delete(Shift=constants.xlUp)
else:
row += 1
# Before
#
# a
# b
# X c
# d
# e
# X d
# g
#
# After
#
# a
# b
# d
# e
# g
```
I usually record snippets of Excel macros and glue them together with Python as I dislike Visual Basic :-D.
|
5,635,054
|
I have a really large excel file and i need to delete about 20,000 rows, contingent on meeting a simple condition and excel won't let me delete such a complex range when using a filter. The condition is:
**If the first column contains the value, X, then I need to be able to delete the entire row.**
I'm trying to automate this using python and xlwt, but am not quite sure where to start. Seeking some code snippits to get me started...
Grateful for any help that's out there!
|
2011/04/12
|
[
"https://Stackoverflow.com/questions/5635054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/704039/"
] |
You can use,
```
sh.Range(sh.Cells(1,1),sh.Cells(20000,1)).EntireRow.Delete()
```
will delete rows 1 to 20,000 in an open Excel spreadsheet so,
```
if sh.Cells(1,1).Value == 'X':
sh.Cells(1,1).EntireRow.Delete()
```
|
If you just need to delete the data (rather than 'getting rid of' the row, i.e. it shifts rows) you can try using my module, PyWorkbooks. You can get the most recent version here:
<https://sourceforge.net/projects/pyworkbooks/>
There is a pdf tutorial to guide you through how to use it. Happy coding!
|
5,635,054
|
I have a really large excel file and i need to delete about 20,000 rows, contingent on meeting a simple condition and excel won't let me delete such a complex range when using a filter. The condition is:
**If the first column contains the value, X, then I need to be able to delete the entire row.**
I'm trying to automate this using python and xlwt, but am not quite sure where to start. Seeking some code snippits to get me started...
Grateful for any help that's out there!
|
2011/04/12
|
[
"https://Stackoverflow.com/questions/5635054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/704039/"
] |
Don't delete. Just copy what you need.
1. read the original file
2. open a new file
3. iterate over rows of the original file (if the first column of the row does not contain the value X, add this row to the new file)
4. close both files
5. rename the new file into the original file
|
If you just need to delete the data (rather than 'getting rid of' the row, i.e. it shifts rows) you can try using my module, PyWorkbooks. You can get the most recent version here:
<https://sourceforge.net/projects/pyworkbooks/>
There is a pdf tutorial to guide you through how to use it. Happy coding!
|
74,245,242
|
I want to perform the selection of a group of lines in a text file to get all jobs related to an ipref
The test file is like this :
job numbers : (1,2,3), ip ref : (10,12,10)
text file :
1
... (several lines of text)
xxx 10
2
... (several lines of text)
xxx 12
3
... (several lines of text)
xxx 10
i want to select job numbers for IPref=10.
Code :
```
#!/usr/bin/python
import re
import sys
fic=open('test2.xml','r')
texte=fic.read()
fic.close()
#pattern='\n?\d(?!(?:\n?xxx \d{2}\n)*)xxx 10'
pattern='\n?\d.*?xxx 10'
result= re.findall(pattern,texte, re.DOTALL)
i=1
for match in result:
print("\nmatch:",i)
i=i+1
print(match)
```
Result :
```
match: 1
1
a
b
xxx 10
match: 2
1
a
b
xxx 12
1
a
b
xxx 10
```
i have tried to replace .\* by a a negative lookahead assertion to only select if no expr like `"\n?xxx \d{2}\n"` is before "xxx 10" :
```
pattern='\n?\d(?!(?:\n?xxx \d{2}\n)*)xxx 10'
```
but it is not working ...
|
2022/10/29
|
[
"https://Stackoverflow.com/questions/74245242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20365230/"
] |
You can write the pattern in this way, repeating the newline and asserting not xxx followed by 1 or more digits:
```
^\d(?:\n(?!xxx \d+$).*)*\nxxx 10$
```
The pattern matches:
* `^` Start of string
* `\d` Match a single digit (or `\d+` for 1 or more)
* `(?:` Non capture group
+ `\n` Match a newline
+ `(?!xxx \d+$)` Negative lookahead to assert that the string is not `xxx` followed by 1+ digits
+ `.*` If the assertion is true, match the whole line
* `)*` Close the group and optionally repeat it
* `\nxxx 10$` Match a newline, `xxx` and 10
[Regex demo](https://regex101.com/r/RaKVcM/1)
|
Good day to you :) and Thank you very much for your quick response!!
i give you below the result
Note : i have modified re.DOTALL by re.DOTALL|re.MULTILINE (because the result is none without that... Sorry for the previous presentation ... it wat not very clear)
Text file :
```
1
a
b
xxx 10
1
a
b
xxx 12
1
a
b
xxx 10
```
Code With your pattern :
```
#!/usr/bin/python
import re
import sys
fic=open('test2.xml','r')
texte=fic.read()
fic.close()
print(texte)
#pattern='<\/?(?!(?:span|br|b)(?: [^>]*)?>)[^>\/]*>'
#pattern='\n?\d(?!(?:\n?xxx \d{2}\n?)*?)xxx 10'
#pattern='\n?\d.*?xxx 10'
pattern='^\d(?:\n(?!xxx \d+$).*)*\nxxx 10$'
result= re.findall(pattern,texte, re.DOTALL|re.MULTILINE)
i=1
for match in result:
print("\nmatch:",i)
i=i+1
print(match)
```
Result :
```
match: 1
1
a
b
xxx 10
1
a
b
xxx 12
1
a
b
xxx 10
```
but i try to obtain :
```
match: 1
1
a
b
xxx 10
match 2 :
1
a
b
xxx 10
```
|
74,245,242
|
I want to perform the selection of a group of lines in a text file to get all jobs related to an ipref
The test file is like this :
job numbers : (1,2,3), ip ref : (10,12,10)
text file :
1
... (several lines of text)
xxx 10
2
... (several lines of text)
xxx 12
3
... (several lines of text)
xxx 10
i want to select job numbers for IPref=10.
Code :
```
#!/usr/bin/python
import re
import sys
fic=open('test2.xml','r')
texte=fic.read()
fic.close()
#pattern='\n?\d(?!(?:\n?xxx \d{2}\n)*)xxx 10'
pattern='\n?\d.*?xxx 10'
result= re.findall(pattern,texte, re.DOTALL)
i=1
for match in result:
print("\nmatch:",i)
i=i+1
print(match)
```
Result :
```
match: 1
1
a
b
xxx 10
match: 2
1
a
b
xxx 12
1
a
b
xxx 10
```
i have tried to replace .\* by a a negative lookahead assertion to only select if no expr like `"\n?xxx \d{2}\n"` is before "xxx 10" :
```
pattern='\n?\d(?!(?:\n?xxx \d{2}\n)*)xxx 10'
```
but it is not working ...
|
2022/10/29
|
[
"https://Stackoverflow.com/questions/74245242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20365230/"
] |
You can write the pattern in this way, repeating the newline and asserting not xxx followed by 1 or more digits:
```
^\d(?:\n(?!xxx \d+$).*)*\nxxx 10$
```
The pattern matches:
* `^` Start of string
* `\d` Match a single digit (or `\d+` for 1 or more)
* `(?:` Non capture group
+ `\n` Match a newline
+ `(?!xxx \d+$)` Negative lookahead to assert that the string is not `xxx` followed by 1+ digits
+ `.*` If the assertion is true, match the whole line
* `)*` Close the group and optionally repeat it
* `\nxxx 10$` Match a newline, `xxx` and 10
[Regex demo](https://regex101.com/r/RaKVcM/1)
|
Thank you very much, (you saved my day !!)
as you say :
```
pattern='^\d(?:\n(?!xxx \d+$).*)*\nxxx 10$'
result= re.findall(pattern,texte, re.MULTILINE)
```
result : OK, the line group (1..xxx 12) is ignored,
NOTE : i can adapt it to a case where line 1 is a line giving job information and "xxx 12" is a line giving printer IP information.
```
match: 1
1
a
b
xxx 10
match: 2
1
a
b
xxx 10
```
|
74,245,242
|
I want to perform the selection of a group of lines in a text file to get all jobs related to an ipref
The test file is like this :
job numbers : (1,2,3), ip ref : (10,12,10)
text file :
1
... (several lines of text)
xxx 10
2
... (several lines of text)
xxx 12
3
... (several lines of text)
xxx 10
i want to select job numbers for IPref=10.
Code :
```
#!/usr/bin/python
import re
import sys
fic=open('test2.xml','r')
texte=fic.read()
fic.close()
#pattern='\n?\d(?!(?:\n?xxx \d{2}\n)*)xxx 10'
pattern='\n?\d.*?xxx 10'
result= re.findall(pattern,texte, re.DOTALL)
i=1
for match in result:
print("\nmatch:",i)
i=i+1
print(match)
```
Result :
```
match: 1
1
a
b
xxx 10
match: 2
1
a
b
xxx 12
1
a
b
xxx 10
```
i have tried to replace .\* by a a negative lookahead assertion to only select if no expr like `"\n?xxx \d{2}\n"` is before "xxx 10" :
```
pattern='\n?\d(?!(?:\n?xxx \d{2}\n)*)xxx 10'
```
but it is not working ...
|
2022/10/29
|
[
"https://Stackoverflow.com/questions/74245242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20365230/"
] |
You can write the pattern in this way, repeating the newline and asserting not xxx followed by 1 or more digits:
```
^\d(?:\n(?!xxx \d+$).*)*\nxxx 10$
```
The pattern matches:
* `^` Start of string
* `\d` Match a single digit (or `\d+` for 1 or more)
* `(?:` Non capture group
+ `\n` Match a newline
+ `(?!xxx \d+$)` Negative lookahead to assert that the string is not `xxx` followed by 1+ digits
+ `.*` If the assertion is true, match the whole line
* `)*` Close the group and optionally repeat it
* `\nxxx 10$` Match a newline, `xxx` and 10
[Regex demo](https://regex101.com/r/RaKVcM/1)
|
file :
```
job_number job_id
1 10202
bla bla
bla bla bla
xxx 100.10.10.100
2 10203
bla bla
bla bla bla
bla bla bla
xxx 100.10.10.102
3 10204
bla bla bla
bla bla bla
xxx 100.10.10.100
```
bash script with embedded python script :
```
#!/bin/bash
# function , $1 : ip of a printer
get_jobs_ip ()
{
cat <<EOF | python
import re
fic=open('test3.xml','r')
texte=fic.read()
fic.close()
"""
The pattern matches example with ip="100\.10\.10\.100" :
thank you to Fourth bird for the pattern !!!
#pattern='^\d\s+\d+(?:\n(?!xxx \d+\.\d+\.\d+\.\d+$).*)*\nxxx 100\.10\.10\.100$'
^ Start of string
\d Match a single digit (or \d+ for 1 or more)
(?: Non capture group
\n Match a newline
(?!xxx \d+\.\d+\.\d+\.\d+$) Negative lookahead to assert that the string is not xxx followed by 1+ digits
.* If the assertion is true, match the whole line
)* Close the group and optionally repeat it
\nxxx 100\.10\.10\.100$ Match a newline, xxx and 10
"""
ip="$1"
pattern_template='^\d\s+\d+(?:\n(?!xxx \d+\.\d+\.\d+\.\d+$).*)*\nxxx @ip@$'
pattern=pattern_template.replace('@ip@',ip)
result= re.findall(pattern,texte, re.MULTILINE)
i=1
for match in result:
print("\nmatch:",i)
i=i+1
print(match)
EOF
}
get_jobs_ip "100\.10\.10\.100"
get_jobs_ip "100\.10\.10\.102"
```
result :
```
match: 1
1 10202
bla bla
bla bla bla
xxx 100.10.10.100
match: 2
3 10204
bla bla bla
bla bla bla
xxx 100.10.10.100
match: 1
2 10203
bla bla
bla bla bla
bla bla bla
xxx 100.10.10.102
```
|
74,245,242
|
I want to perform the selection of a group of lines in a text file to get all jobs related to an ipref
The test file is like this :
job numbers : (1,2,3), ip ref : (10,12,10)
text file :
1
... (several lines of text)
xxx 10
2
... (several lines of text)
xxx 12
3
... (several lines of text)
xxx 10
i want to select job numbers for IPref=10.
Code :
```
#!/usr/bin/python
import re
import sys
fic=open('test2.xml','r')
texte=fic.read()
fic.close()
#pattern='\n?\d(?!(?:\n?xxx \d{2}\n)*)xxx 10'
pattern='\n?\d.*?xxx 10'
result= re.findall(pattern,texte, re.DOTALL)
i=1
for match in result:
print("\nmatch:",i)
i=i+1
print(match)
```
Result :
```
match: 1
1
a
b
xxx 10
match: 2
1
a
b
xxx 12
1
a
b
xxx 10
```
i have tried to replace .\* by a a negative lookahead assertion to only select if no expr like `"\n?xxx \d{2}\n"` is before "xxx 10" :
```
pattern='\n?\d(?!(?:\n?xxx \d{2}\n)*)xxx 10'
```
but it is not working ...
|
2022/10/29
|
[
"https://Stackoverflow.com/questions/74245242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20365230/"
] |
Good day to you :) and Thank you very much for your quick response!!
i give you below the result
Note : i have modified re.DOTALL by re.DOTALL|re.MULTILINE (because the result is none without that... Sorry for the previous presentation ... it wat not very clear)
Text file :
```
1
a
b
xxx 10
1
a
b
xxx 12
1
a
b
xxx 10
```
Code With your pattern :
```
#!/usr/bin/python
import re
import sys
fic=open('test2.xml','r')
texte=fic.read()
fic.close()
print(texte)
#pattern='<\/?(?!(?:span|br|b)(?: [^>]*)?>)[^>\/]*>'
#pattern='\n?\d(?!(?:\n?xxx \d{2}\n?)*?)xxx 10'
#pattern='\n?\d.*?xxx 10'
pattern='^\d(?:\n(?!xxx \d+$).*)*\nxxx 10$'
result= re.findall(pattern,texte, re.DOTALL|re.MULTILINE)
i=1
for match in result:
print("\nmatch:",i)
i=i+1
print(match)
```
Result :
```
match: 1
1
a
b
xxx 10
1
a
b
xxx 12
1
a
b
xxx 10
```
but i try to obtain :
```
match: 1
1
a
b
xxx 10
match 2 :
1
a
b
xxx 10
```
|
Thank you very much, (you saved my day !!)
as you say :
```
pattern='^\d(?:\n(?!xxx \d+$).*)*\nxxx 10$'
result= re.findall(pattern,texte, re.MULTILINE)
```
result : OK, the line group (1..xxx 12) is ignored,
NOTE : i can adapt it to a case where line 1 is a line giving job information and "xxx 12" is a line giving printer IP information.
```
match: 1
1
a
b
xxx 10
match: 2
1
a
b
xxx 10
```
|
74,245,242
|
I want to perform the selection of a group of lines in a text file to get all jobs related to an ipref
The test file is like this :
job numbers : (1,2,3), ip ref : (10,12,10)
text file :
1
... (several lines of text)
xxx 10
2
... (several lines of text)
xxx 12
3
... (several lines of text)
xxx 10
i want to select job numbers for IPref=10.
Code :
```
#!/usr/bin/python
import re
import sys
fic=open('test2.xml','r')
texte=fic.read()
fic.close()
#pattern='\n?\d(?!(?:\n?xxx \d{2}\n)*)xxx 10'
pattern='\n?\d.*?xxx 10'
result= re.findall(pattern,texte, re.DOTALL)
i=1
for match in result:
print("\nmatch:",i)
i=i+1
print(match)
```
Result :
```
match: 1
1
a
b
xxx 10
match: 2
1
a
b
xxx 12
1
a
b
xxx 10
```
i have tried to replace .\* by a a negative lookahead assertion to only select if no expr like `"\n?xxx \d{2}\n"` is before "xxx 10" :
```
pattern='\n?\d(?!(?:\n?xxx \d{2}\n)*)xxx 10'
```
but it is not working ...
|
2022/10/29
|
[
"https://Stackoverflow.com/questions/74245242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20365230/"
] |
Good day to you :) and Thank you very much for your quick response!!
i give you below the result
Note : i have modified re.DOTALL by re.DOTALL|re.MULTILINE (because the result is none without that... Sorry for the previous presentation ... it wat not very clear)
Text file :
```
1
a
b
xxx 10
1
a
b
xxx 12
1
a
b
xxx 10
```
Code With your pattern :
```
#!/usr/bin/python
import re
import sys
fic=open('test2.xml','r')
texte=fic.read()
fic.close()
print(texte)
#pattern='<\/?(?!(?:span|br|b)(?: [^>]*)?>)[^>\/]*>'
#pattern='\n?\d(?!(?:\n?xxx \d{2}\n?)*?)xxx 10'
#pattern='\n?\d.*?xxx 10'
pattern='^\d(?:\n(?!xxx \d+$).*)*\nxxx 10$'
result= re.findall(pattern,texte, re.DOTALL|re.MULTILINE)
i=1
for match in result:
print("\nmatch:",i)
i=i+1
print(match)
```
Result :
```
match: 1
1
a
b
xxx 10
1
a
b
xxx 12
1
a
b
xxx 10
```
but i try to obtain :
```
match: 1
1
a
b
xxx 10
match 2 :
1
a
b
xxx 10
```
|
file :
```
job_number job_id
1 10202
bla bla
bla bla bla
xxx 100.10.10.100
2 10203
bla bla
bla bla bla
bla bla bla
xxx 100.10.10.102
3 10204
bla bla bla
bla bla bla
xxx 100.10.10.100
```
bash script with embedded python script :
```
#!/bin/bash
# function , $1 : ip of a printer
get_jobs_ip ()
{
cat <<EOF | python
import re
fic=open('test3.xml','r')
texte=fic.read()
fic.close()
"""
The pattern matches example with ip="100\.10\.10\.100" :
thank you to Fourth bird for the pattern !!!
#pattern='^\d\s+\d+(?:\n(?!xxx \d+\.\d+\.\d+\.\d+$).*)*\nxxx 100\.10\.10\.100$'
^ Start of string
\d Match a single digit (or \d+ for 1 or more)
(?: Non capture group
\n Match a newline
(?!xxx \d+\.\d+\.\d+\.\d+$) Negative lookahead to assert that the string is not xxx followed by 1+ digits
.* If the assertion is true, match the whole line
)* Close the group and optionally repeat it
\nxxx 100\.10\.10\.100$ Match a newline, xxx and 10
"""
ip="$1"
pattern_template='^\d\s+\d+(?:\n(?!xxx \d+\.\d+\.\d+\.\d+$).*)*\nxxx @ip@$'
pattern=pattern_template.replace('@ip@',ip)
result= re.findall(pattern,texte, re.MULTILINE)
i=1
for match in result:
print("\nmatch:",i)
i=i+1
print(match)
EOF
}
get_jobs_ip "100\.10\.10\.100"
get_jobs_ip "100\.10\.10\.102"
```
result :
```
match: 1
1 10202
bla bla
bla bla bla
xxx 100.10.10.100
match: 2
3 10204
bla bla bla
bla bla bla
xxx 100.10.10.100
match: 1
2 10203
bla bla
bla bla bla
bla bla bla
xxx 100.10.10.102
```
|
24,309,586
|
This is similar to this [question](https://stackoverflow.com/questions/20403387/how-to-remove-a-package-from-pypi) with one exception. I want to remove a few specific versions of the package from our local pypi index, which I had uploaded with the following command in the past.
```
python setup.py sdist upload -r <index_name>
```
Any ideas?
|
2014/06/19
|
[
"https://Stackoverflow.com/questions/24309586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/251096/"
] |
Removing packages from local pypi index **depends on type of pypi index you use**.
removing package from `devpi` index
===================================
`devpi` allows [removing packages](http://doc.devpi.net/latest/userman/devpi_packages.html#removing-a-release-file-or-project) only from so called volatile indexes. Non-volatile are "release like" indexes and removing from them is not allowed (as you would surprise users depending on released package).
E.g. for package `pysober` version 0.2.0:
```
$ devpi remove -y pysober==0.2.0
```
removing package from public pypi
=================================
is described in the [answer](https://stackoverflow.com/questions/20403387/how-to-remove-a-package-from-pypi) you already refered to.
removing package from other indexes
===================================
Can vary, but in many cases you can manually delete the files (with proper care).
|
I'm using [pypiserver](https://pypi.org/project/pypiserver/) and had to remove a bad package so I just SSH'd in and removed the bad packages and restarted the service.
The commands were roughly:
```
ssh root@pypiserver
cd ~pypiserver/pypiserver/packages
rm bad-package*
systemctl restart pypiserver.service
```
That seemed to work fine for me, and you can just remove what you need using standard shell commands. Just be sure to restart the process so it refreshes its index.
|
24,309,586
|
This is similar to this [question](https://stackoverflow.com/questions/20403387/how-to-remove-a-package-from-pypi) with one exception. I want to remove a few specific versions of the package from our local pypi index, which I had uploaded with the following command in the past.
```
python setup.py sdist upload -r <index_name>
```
Any ideas?
|
2014/06/19
|
[
"https://Stackoverflow.com/questions/24309586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/251096/"
] |
As an addenum from @jan-vlcinsky's answer
Removing from [`pypiserver`](https://github.com/pypiserver/pypiserver)
======================================================================
Using `curl` for instance:
```
curl --form ":action=remove_pkg" --form "name=<package_name>" --form "version=<version>" <pypiserver_url>
```
|
I'm using [pypiserver](https://pypi.org/project/pypiserver/) and had to remove a bad package so I just SSH'd in and removed the bad packages and restarted the service.
The commands were roughly:
```
ssh root@pypiserver
cd ~pypiserver/pypiserver/packages
rm bad-package*
systemctl restart pypiserver.service
```
That seemed to work fine for me, and you can just remove what you need using standard shell commands. Just be sure to restart the process so it refreshes its index.
|
17,321,910
|
I have a file with a correspondence key -> value:
```
sort keyFile.txt | head
ENSMUSG00000000001 ENSMUSG00000000001_Gnai3
ENSMUSG00000000003 ENSMUSG00000000003_Pbsn
ENSMUSG00000000003 ENSMUSG00000000003_Pbsn
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
```
And I would like to replace every correspondence of "key" with the "value" in the temp.txt:
```
head temp.txt
ENSMUSG00000000001:001 515
ENSMUSG00000000001:002 108
ENSMUSG00000000001:003 64
ENSMUSG00000000001:004 45
ENSMUSG00000000001:005 58
ENSMUSG00000000001:006 63
ENSMUSG00000000001:007 46
ENSMUSG00000000001:008 11
ENSMUSG00000000001:009 13
ENSMUSG00000000003:001 0
```
The result should be:
```
out.txt
ENSMUSG00000000001_Gnai3:001 515
ENSMUSG00000000001_Gnai3:002 108
ENSMUSG00000000001_Gnai3:003 64
ENSMUSG00000000001_Gnai3:004 45
ENSMUSG00000000001_Gnai3:005 58
ENSMUSG00000000001_Gnai3:006 63
ENSMUSG00000000001_Gnai3:007 46
ENSMUSG00000000001_Gnai3:008 11
ENSMUSG00000000001_Gnai3:009 13
ENSMUSG00000000001_Gnai3:001 0
```
I have tried a few variations following [this AWK example](https://stackoverflow.com/a/13079914/1274242) but as you can see the result is not what I expected:
```
awk 'NR==FNR{a[$1]=$1;next}{$1=a[$1];}1' keyFile.txt temp.txt | head
515
108
64
45
58
63
46
11
13
0
```
My guess is that column 1 of temp does not match 'exactly' column 1 of keyValues. Could someone please help me with this?
R/python/sed solutions are also welcome.
|
2013/06/26
|
[
"https://Stackoverflow.com/questions/17321910",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1274242/"
] |
Use awk command like this:
```
awk 'NR==FNR {a[$1]=$2;next} {
split($1, b, ":");
if (b[1] in a)
print a[b[1]] ":" b[2], $2;
else
print $0;
}' keyFile.txt temp.txt
```
|
Another awk option
```
awk -F: 'NR == FNR{split($0, a, " "); x[a[1]]=a[2]; next}{print x[$1]":"$2}' keyFile.txt temp.txt
```
|
17,321,910
|
I have a file with a correspondence key -> value:
```
sort keyFile.txt | head
ENSMUSG00000000001 ENSMUSG00000000001_Gnai3
ENSMUSG00000000003 ENSMUSG00000000003_Pbsn
ENSMUSG00000000003 ENSMUSG00000000003_Pbsn
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
```
And I would like to replace every correspondence of "key" with the "value" in the temp.txt:
```
head temp.txt
ENSMUSG00000000001:001 515
ENSMUSG00000000001:002 108
ENSMUSG00000000001:003 64
ENSMUSG00000000001:004 45
ENSMUSG00000000001:005 58
ENSMUSG00000000001:006 63
ENSMUSG00000000001:007 46
ENSMUSG00000000001:008 11
ENSMUSG00000000001:009 13
ENSMUSG00000000003:001 0
```
The result should be:
```
out.txt
ENSMUSG00000000001_Gnai3:001 515
ENSMUSG00000000001_Gnai3:002 108
ENSMUSG00000000001_Gnai3:003 64
ENSMUSG00000000001_Gnai3:004 45
ENSMUSG00000000001_Gnai3:005 58
ENSMUSG00000000001_Gnai3:006 63
ENSMUSG00000000001_Gnai3:007 46
ENSMUSG00000000001_Gnai3:008 11
ENSMUSG00000000001_Gnai3:009 13
ENSMUSG00000000001_Gnai3:001 0
```
I have tried a few variations following [this AWK example](https://stackoverflow.com/a/13079914/1274242) but as you can see the result is not what I expected:
```
awk 'NR==FNR{a[$1]=$1;next}{$1=a[$1];}1' keyFile.txt temp.txt | head
515
108
64
45
58
63
46
11
13
0
```
My guess is that column 1 of temp does not match 'exactly' column 1 of keyValues. Could someone please help me with this?
R/python/sed solutions are also welcome.
|
2013/06/26
|
[
"https://Stackoverflow.com/questions/17321910",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1274242/"
] |
Use awk command like this:
```
awk 'NR==FNR {a[$1]=$2;next} {
split($1, b, ":");
if (b[1] in a)
print a[b[1]] ":" b[2], $2;
else
print $0;
}' keyFile.txt temp.txt
```
|
Another `awk` version:
```
awk 'NR==FNR{a[$1]=$2;next}
{sub(/[^:]+/,a[substr($1,1,index($1,":")-1)])}1' keyFile.txt temp.txt
```
|
17,321,910
|
I have a file with a correspondence key -> value:
```
sort keyFile.txt | head
ENSMUSG00000000001 ENSMUSG00000000001_Gnai3
ENSMUSG00000000003 ENSMUSG00000000003_Pbsn
ENSMUSG00000000003 ENSMUSG00000000003_Pbsn
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
```
And I would like to replace every correspondence of "key" with the "value" in the temp.txt:
```
head temp.txt
ENSMUSG00000000001:001 515
ENSMUSG00000000001:002 108
ENSMUSG00000000001:003 64
ENSMUSG00000000001:004 45
ENSMUSG00000000001:005 58
ENSMUSG00000000001:006 63
ENSMUSG00000000001:007 46
ENSMUSG00000000001:008 11
ENSMUSG00000000001:009 13
ENSMUSG00000000003:001 0
```
The result should be:
```
out.txt
ENSMUSG00000000001_Gnai3:001 515
ENSMUSG00000000001_Gnai3:002 108
ENSMUSG00000000001_Gnai3:003 64
ENSMUSG00000000001_Gnai3:004 45
ENSMUSG00000000001_Gnai3:005 58
ENSMUSG00000000001_Gnai3:006 63
ENSMUSG00000000001_Gnai3:007 46
ENSMUSG00000000001_Gnai3:008 11
ENSMUSG00000000001_Gnai3:009 13
ENSMUSG00000000001_Gnai3:001 0
```
I have tried a few variations following [this AWK example](https://stackoverflow.com/a/13079914/1274242) but as you can see the result is not what I expected:
```
awk 'NR==FNR{a[$1]=$1;next}{$1=a[$1];}1' keyFile.txt temp.txt | head
515
108
64
45
58
63
46
11
13
0
```
My guess is that column 1 of temp does not match 'exactly' column 1 of keyValues. Could someone please help me with this?
R/python/sed solutions are also welcome.
|
2013/06/26
|
[
"https://Stackoverflow.com/questions/17321910",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1274242/"
] |
Use awk command like this:
```
awk 'NR==FNR {a[$1]=$2;next} {
split($1, b, ":");
if (b[1] in a)
print a[b[1]] ":" b[2], $2;
else
print $0;
}' keyFile.txt temp.txt
```
|
Code for GNU [sed](/questions/tagged/sed "show questions tagged 'sed'"):
```
$sed -nr '$!N;/^(.*)\n\1$/!bk;D;:k;s#\S+\s+(\w+)_(\w+)#/^\1/s/(\\w+)(:\\w+)\\s+(\\w+)/\\1_\2\\2 \\3/p#;P;s/^(.*)\n//' keyfile.txt|sed -nrf - temp.txt
ENSMUSG00000000001_Gnai3:001 515
ENSMUSG00000000001_Gnai3:002 108
ENSMUSG00000000001_Gnai3:003 64
ENSMUSG00000000001_Gnai3:004 45
ENSMUSG00000000001_Gnai3:005 58
ENSMUSG00000000001_Gnai3:006 63
ENSMUSG00000000001_Gnai3:007 46
ENSMUSG00000000001_Gnai3:008 11
ENSMUSG00000000001_Gnai3:009 13
ENSMUSG00000000003_Pbsn:001 0
```
|
17,321,910
|
I have a file with a correspondence key -> value:
```
sort keyFile.txt | head
ENSMUSG00000000001 ENSMUSG00000000001_Gnai3
ENSMUSG00000000003 ENSMUSG00000000003_Pbsn
ENSMUSG00000000003 ENSMUSG00000000003_Pbsn
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
```
And I would like to replace every correspondence of "key" with the "value" in the temp.txt:
```
head temp.txt
ENSMUSG00000000001:001 515
ENSMUSG00000000001:002 108
ENSMUSG00000000001:003 64
ENSMUSG00000000001:004 45
ENSMUSG00000000001:005 58
ENSMUSG00000000001:006 63
ENSMUSG00000000001:007 46
ENSMUSG00000000001:008 11
ENSMUSG00000000001:009 13
ENSMUSG00000000003:001 0
```
The result should be:
```
out.txt
ENSMUSG00000000001_Gnai3:001 515
ENSMUSG00000000001_Gnai3:002 108
ENSMUSG00000000001_Gnai3:003 64
ENSMUSG00000000001_Gnai3:004 45
ENSMUSG00000000001_Gnai3:005 58
ENSMUSG00000000001_Gnai3:006 63
ENSMUSG00000000001_Gnai3:007 46
ENSMUSG00000000001_Gnai3:008 11
ENSMUSG00000000001_Gnai3:009 13
ENSMUSG00000000001_Gnai3:001 0
```
I have tried a few variations following [this AWK example](https://stackoverflow.com/a/13079914/1274242) but as you can see the result is not what I expected:
```
awk 'NR==FNR{a[$1]=$1;next}{$1=a[$1];}1' keyFile.txt temp.txt | head
515
108
64
45
58
63
46
11
13
0
```
My guess is that column 1 of temp does not match 'exactly' column 1 of keyValues. Could someone please help me with this?
R/python/sed solutions are also welcome.
|
2013/06/26
|
[
"https://Stackoverflow.com/questions/17321910",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1274242/"
] |
Another awk option
```
awk -F: 'NR == FNR{split($0, a, " "); x[a[1]]=a[2]; next}{print x[$1]":"$2}' keyFile.txt temp.txt
```
|
Another `awk` version:
```
awk 'NR==FNR{a[$1]=$2;next}
{sub(/[^:]+/,a[substr($1,1,index($1,":")-1)])}1' keyFile.txt temp.txt
```
|
17,321,910
|
I have a file with a correspondence key -> value:
```
sort keyFile.txt | head
ENSMUSG00000000001 ENSMUSG00000000001_Gnai3
ENSMUSG00000000003 ENSMUSG00000000003_Pbsn
ENSMUSG00000000003 ENSMUSG00000000003_Pbsn
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
```
And I would like to replace every correspondence of "key" with the "value" in the temp.txt:
```
head temp.txt
ENSMUSG00000000001:001 515
ENSMUSG00000000001:002 108
ENSMUSG00000000001:003 64
ENSMUSG00000000001:004 45
ENSMUSG00000000001:005 58
ENSMUSG00000000001:006 63
ENSMUSG00000000001:007 46
ENSMUSG00000000001:008 11
ENSMUSG00000000001:009 13
ENSMUSG00000000003:001 0
```
The result should be:
```
out.txt
ENSMUSG00000000001_Gnai3:001 515
ENSMUSG00000000001_Gnai3:002 108
ENSMUSG00000000001_Gnai3:003 64
ENSMUSG00000000001_Gnai3:004 45
ENSMUSG00000000001_Gnai3:005 58
ENSMUSG00000000001_Gnai3:006 63
ENSMUSG00000000001_Gnai3:007 46
ENSMUSG00000000001_Gnai3:008 11
ENSMUSG00000000001_Gnai3:009 13
ENSMUSG00000000001_Gnai3:001 0
```
I have tried a few variations following [this AWK example](https://stackoverflow.com/a/13079914/1274242) but as you can see the result is not what I expected:
```
awk 'NR==FNR{a[$1]=$1;next}{$1=a[$1];}1' keyFile.txt temp.txt | head
515
108
64
45
58
63
46
11
13
0
```
My guess is that column 1 of temp does not match 'exactly' column 1 of keyValues. Could someone please help me with this?
R/python/sed solutions are also welcome.
|
2013/06/26
|
[
"https://Stackoverflow.com/questions/17321910",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1274242/"
] |
Code for GNU [sed](/questions/tagged/sed "show questions tagged 'sed'"):
```
$sed -nr '$!N;/^(.*)\n\1$/!bk;D;:k;s#\S+\s+(\w+)_(\w+)#/^\1/s/(\\w+)(:\\w+)\\s+(\\w+)/\\1_\2\\2 \\3/p#;P;s/^(.*)\n//' keyfile.txt|sed -nrf - temp.txt
ENSMUSG00000000001_Gnai3:001 515
ENSMUSG00000000001_Gnai3:002 108
ENSMUSG00000000001_Gnai3:003 64
ENSMUSG00000000001_Gnai3:004 45
ENSMUSG00000000001_Gnai3:005 58
ENSMUSG00000000001_Gnai3:006 63
ENSMUSG00000000001_Gnai3:007 46
ENSMUSG00000000001_Gnai3:008 11
ENSMUSG00000000001_Gnai3:009 13
ENSMUSG00000000003_Pbsn:001 0
```
|
Another `awk` version:
```
awk 'NR==FNR{a[$1]=$2;next}
{sub(/[^:]+/,a[substr($1,1,index($1,":")-1)])}1' keyFile.txt temp.txt
```
|
34,994,130
|
Looking through several projects recently, I noticed some of them use `platforms` argument to `setup()` in `setup.py`, though with only one value of `any`, i.e.
```
#setup.py file in project's package folder
...
setup(
...,
platforms=['any'],
...
)
```
OR
```
#setup.py file in project's package folder
...
setup(
...,
platforms='any',
...
)
```
From the name "platforms", I can make a guess about what this argument means, and it seems that the list variant is the right usage.
So I googled, looked through [setuptools docs](http://pythonhosted.org/setuptools/setuptools.html), but I failed to find any explanation to what are the possible values to `platforms` and what it does/affects in package exactly.
Please, explain or provide a link to explanation of what it does exactly and what values it accepts?
P.S. Also tried to provide different values to it in my OS independent package and see what changes, when creating wheels, but it seems it does nothing.
|
2016/01/25
|
[
"https://Stackoverflow.com/questions/34994130",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5738152/"
] |
`platforms` is an argument the `setuptools` package inherits from `distutils`; see the [*Additional meta-data* section](https://docs.python.org/2/distutils/setupscript.html#additional-meta-data) in the `distutils` documentation:
>
> *Meta-Data*: `platforms`
>
> *Description*: a list of platforms
>
> *Value*: list of strings
>
>
>
So, yes, using a list is the correct syntax.
The field just provides metadata; what platforms does the package target. Use this to communicate to tools or people about where you expect the package to be used.
There is no further specification for the contents of this list, it is unstructured and free-form. If you want to use something more structured, use the [available Trove classifier strings](https://pypi.python.org/pypi?%3Aaction=list_classifiers) in the `classifiers` field, where tags under `Operating System`, `Environment` and others let you more strictly define a platform.
Wheels do not use this field other than to include it in the metadata, just like other fields like `author` or `license`.
|
Just an update to provide more information for anyone interested.
I found an accurate description of `platforms` in a [PEP](https://www.python.org/dev/peps/pep-0345/#platform-multiple-use).
So, "*There's a PEP for that!*":
[PEP-0345](https://www.python.org/dev/peps/pep-0345/) lists all possible arguments to `setup()` in `setup.py`, but it's a little old.
[PEP-0426](https://www.python.org/dev/peps/pep-0426/) and [PEP-0459](https://www.python.org/dev/peps/pep-0459/) are newer versions describing metadata for Python packages.
|
48,465,325
|
I am trying to build my container image using docker\_image module of Ansible.
**My host machine details:**
```
OS: Lubuntu 17.10
ansible 2.4.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/myuser/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.14 (default, Sep 23 2017, 22:06:14) [GCC 7.2.0]
```
**My Remote machine details:**
```
Remote OS: CentOS 7.2
Pip modules: docker-py==1.2.3 , six==latest
```
**My tasks inside playbook:**
```
- name: Install dependent python modules
pip:
name: "{{item}}"
state: present
with_items:
- docker-py
- name: Build container image for api
docker_image:
name: api
path: /home/abc/api/ #location of my Dockerfile
```
However i am constantly getting the following error message:
```
"msg": "Failed to import docker-py - No module named 'requests.packages.urllib3'. Try `pip install docker-py`"
```
I see there is some issue with the docker-py module and also some solutions and fixes into ansible docker\_container is also merged in the following links:
<https://github.com/ansible/ansible/issues/20492>
<https://medium.com/dronzebot/ansible-and-docker-py-path-issues-and-resolving-them-e3834d5bb79a>
I even tried with the following command to run my playbook:
```
python3 ansible-playbook main.yml
```
None of the above helped to resolve it successfully yet. How should i go about this now
|
2018/01/26
|
[
"https://Stackoverflow.com/questions/48465325",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3592502/"
] |
Your client (presumably) would not have Sequelize installed and cannot pass a Sequelize operator via JSON, so what you are trying to do is not exactly possible. You would probably to get the client to send string (e.g. `or`, `and` and then you would have to map those string to Sequelize operators) [1].
```
"filters": {"week": 201740, "project": 8, "itemgroup": {"or": ["group1", "group2"]}}
```
Then in your server code you would need to maintain a map of strings to Sequelize operators:
```
const operatorsMap = {
or: [Op.or],
and: [Op.and],
etc
}
```
And for each request, you then to loop over all keys and replace strings with the Sequelize operators.
```
function isObject(o) {
return o instanceof Object && o.constructor === Object;
}
function replacer(obj) {
var newObj = {};
for (key in obj) {
var value = obj[key];
if (isObject(value)) {
newObj[key] = replacer(value);
} else if (operatorsMap[key]) {
var op = operatorsMap[key];
newObj[op] = value;
} else {
newObj[key] = value
}
}
return newObj;
}
module.exports = {
getAOLDataCount: function (res, filters) {
let result = '';
wd = new WeeklyData(sequelize, Sequelize.DataTypes);
wd.count({where: replacer(filters)}).then(function (aolCount) {
res.send('the value ' + aolCount);
});
return result;
}
};
```
FYI, The code above has not been tested.
[1] Without knowing the specifics of your project, I would recommend you reconsider this approach. The client really should not send a JSON object that gets fed directly to the ORM. What about bad SQL injection? What if you upgrade Sequelize and old filters get deprecated? Instead, considering allowing filtering via query paramters and have your API create a filter object based on that.
|
In the end I found out from the sequelize slack group that the old operator method is still in the codebase. I don't know how long it will stay, but since it is the only way of sending via json, it may well stay.
So the way it works is that rather than using [Op.or] you can use $or.
Example:
```
"filters": {"week": 201740, "project": 8, "itemgroup": {"$or": ["group1", "group2"]}}
```
|
48,465,325
|
I am trying to build my container image using docker\_image module of Ansible.
**My host machine details:**
```
OS: Lubuntu 17.10
ansible 2.4.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/myuser/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.14 (default, Sep 23 2017, 22:06:14) [GCC 7.2.0]
```
**My Remote machine details:**
```
Remote OS: CentOS 7.2
Pip modules: docker-py==1.2.3 , six==latest
```
**My tasks inside playbook:**
```
- name: Install dependent python modules
pip:
name: "{{item}}"
state: present
with_items:
- docker-py
- name: Build container image for api
docker_image:
name: api
path: /home/abc/api/ #location of my Dockerfile
```
However i am constantly getting the following error message:
```
"msg": "Failed to import docker-py - No module named 'requests.packages.urllib3'. Try `pip install docker-py`"
```
I see there is some issue with the docker-py module and also some solutions and fixes into ansible docker\_container is also merged in the following links:
<https://github.com/ansible/ansible/issues/20492>
<https://medium.com/dronzebot/ansible-and-docker-py-path-issues-and-resolving-them-e3834d5bb79a>
I even tried with the following command to run my playbook:
```
python3 ansible-playbook main.yml
```
None of the above helped to resolve it successfully yet. How should i go about this now
|
2018/01/26
|
[
"https://Stackoverflow.com/questions/48465325",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3592502/"
] |
Here's an implementation of mcranston18's answer that runs:
```
const Op = this.Sequelize.Op;
this.operatorsMap = { // Add additional operators as needed.
$ne: Op.ne,
};
replaceOperators(oldObject) {
let newObject = {};
for (let key in oldObject) {
let value = oldObject[key];
if (typeof value === 'object') {
newObject[key] = this.replaceOperators(value); // Recurse
} else if (this.operatorsMap[key]) {
let op = this.operatorsMap[key];
newObject[op] = value;
} else {
newObject[key] = value;
}
}
return newObject;
}
```
With this code you can do things like:
```
let whereFilter = {"emailAddress":"xxx@xxx.org","id":{"$ne":7}};
let objectFilter = JSON.parse(whereFilter);
let translatedObjectFilter = this.replaceOperators(objectFilter);
let options = {};
options.where = translatedObjectFilter;
await this.table.findAll(options).then(....
```
By getting the ability to translate string operators to work for Sequelize calls, struggling devs can accomplish much more on client side single page applications because there's no need to write a specific server side query with literal objects for every different client query. The server side framework can be generic, rather than application specific.
For example, the SPA I'm building has a server framework of a mere 5 js files. The client has 28 js files and growing. This uses Sequelize 4.37.10.
|
In the end I found out from the sequelize slack group that the old operator method is still in the codebase. I don't know how long it will stay, but since it is the only way of sending via json, it may well stay.
So the way it works is that rather than using [Op.or] you can use $or.
Example:
```
"filters": {"week": 201740, "project": 8, "itemgroup": {"$or": ["group1", "group2"]}}
```
|
65,363,105
|
I am using making use of some user\_types in my application which are is\_admin and is\_landlord. I want it that whenever a user is created using the `python manage.py createsuperuser`. he is automatically assigned is\_admin.
models.py
```
class UserManager(BaseUserManager):
def _create_user(self, email, password, is_staff, is_superuser, **extra_fields):
if not email:
raise ValueError('Users must have an email address')
now = datetime.datetime.now(pytz.utc)
email = self.normalize_email(email)
user = self.model(
email=email,
is_staff=is_staff,
is_active=True,
is_superuser=is_superuser,
last_login=now,
date_joined=now,
**extra_fields
)
user.set_password(password)
user.save(using=self._db)
return user
def create_user(self, email=None, password=None, **extra_fields):
return self._create_user(email, password, False, False, **extra_fields)
def create_superuser(self, email, password, **extra_fields):
user = self._create_user(email, password, True, True, **extra_fields)
user.save(using=self._db)
return user
class User(AbstractBaseUser, PermissionsMixin):
email = models.EmailField(max_length=254, unique=True)
is_staff = models.BooleanField(default=False)
is_superuser = models.BooleanField(default=False)
is_active = models.BooleanField(default=True)
last_login = models.DateTimeField(null=True, blank=True)
date_joined = models.DateTimeField(auto_now_add=True)
# CUSTOM USER FIELDS
name = models.CharField(max_length=30, blank=True, null=True)
telephone = models.IntegerField(blank=True, null=True)
USERNAME_FIELD = 'email'
EMAIL_FIELD = 'email'
REQUIRED_FIELDS = []
objects = UserManager()
def get_absolute_url(self):
return "/users/%i/" % (self.pk)
def get_email(self):
return self.email
class user_type(models.Model):
is_admin = models.BooleanField(default=False)
is_landlord = models.BooleanField(default=False)
user = models.OneToOneField(User, on_delete=models.CASCADE)
def __str__(self):
if self.is_landlord == True:
return User.get_email(self.user) + " - is_landlord"
else:
return User.get_email(self.user) + " - is_admin"
```
**Note**: I want a superuser created to automatically have the is\_admin status
|
2020/12/18
|
[
"https://Stackoverflow.com/questions/65363105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14434294/"
] |
You can automatically construct a `user_type` model in the `create_superuser` function:
```
class UserManager(BaseUserManager):
# …
def create_superuser(self, email, password, **extra_fields):
user = self._create_user(email, password, True, True, **extra_fields)
user.save(using=self._db)
**user\_type**.objects.**update\_or\_create(**
user=user,
defaults={
'is_admin': True
}
**)**
return user
```
It is however not clear to my why you use a `user_type` class, and not just add extra fields to your `User` model. This will make it more efficient since these will be fetched in the *same* query when you fetc the logged in user. It is also more convenient to just access an `.is_landlord` attribute from the user model.
---
>
> **Note**: Models in Django are written in *PerlCase*, not *snake\_case*,
> so you might want to rename the model from ~~`user_type`~~ to `UserType`.
>
>
>
|
So I have tried a method that works not so efficiently but manageably.
What I do is a simple trick.
views.py
```
def Dashboard(request, pk):
user = User.objects.get(id=request.user.pk)
landlord = user.is_staff & user.is_active
if not landlord:
reservations = Tables.objects.all()
available_ = Tables.objects.filter(status="available")
unavailable_ = Tables.objects.filter(status="unavailable")
bar_owners_ = user_type.objects.filter(is_landlord=True)
context = {"reservations":reservations, "available_":available_, "unavailable_":unavailable_, "bar_owners_":bar_owners_}
return render(request, "dashboard/super/admin/dashboard.html", context)
else:
"""Admin dashboard view."""
user = User.objects.get(pk=pk)
reservations = Tables.objects.filter(bar__user_id=user.id)
available =Tables.objects.filter(bar__user_id=user.id, status="available")
unavailable =Tables.objects.filter(bar__user_id=user.id, status="unavailable")
context = {"reservations":reservations, "available":available, "unavailable":unavailable}
return render(request, "dashboard/super/landlord/dashboard.html", context)
```
So I let Django know that if a user is active or has the is\_active set to true only then they should have access to the dashboard else they wouldn't have access to it.
|
68,286,395
|
I am trying to create a Toplevel window, however, this Toplevel is called from a different file in the same directory within a function.
Apologies I am by no means a tkinter or python guru. Here are the two parts of the code. (snippets)
#File 1 (Main)
```
import tkinter as tk
from tkinter import *
import comm1
from comm1 import com1
root = tk.Tk()
root.title("")
root.geometry("1900x1314")
#grid Center && 3x6 configuration for correct gui layout
root.grid_rowconfigure(0, weight=1)
root.grid_rowconfigure(11, weight=1)
root.grid_columnconfigure(0, weight=1)
root.grid_columnconfigure(11, weight=1)
#background image
canvas = Canvas(root, width=1900, height=1314)
canvas.place(x=0, y=0, relwidth=1, relheight=1)
bckground = PhotoImage(file='img.png')
canvas.create_image(20 ,20 ,anchor=NW, image=bckground)
#command to create new Toplevel
btn1 = tk.Button(root, text='Top', command=com1, justify='center', font=("Arial", 10))
btn1.config(anchor=CENTER)
btn1.grid(row=4, column=1)
```
#File 2 (Toplevel)
```
#command for new window
def com1():
newWindow1 = Toplevel(root)
newWindow1.title("")
newWindow1.geometry("500x500")
entry1 = tk.Entry(root, justify='center' , font=("Arial", 12), fg="Grey")
newWindow1.pack()
newWindow1.mainloop()
```
The weird part is this worked perfectly for a few minutes and without changing any code it just stopped working.
Where am I going wrong?
|
2021/07/07
|
[
"https://Stackoverflow.com/questions/68286395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11911902/"
] |
No, don't make it `static`. If you want you can make it an instance field, but making it a class field is not optimal. E.g. see the note on thread-safety on the `Random` class that it has been derived from:
>
> Instances of `java.util.Random` are threadsafe. However, the concurrent use of the same `java.util.Random` instance across threads may encounter contention and consequent poor performance. Consider instead using `ThreadLocalRandom` in multithreaded designs.
>
>
>
Beware though that the `ThreadLocalRandom` is **not** cryptographically secure, and therefore not a good option for you. In general, you should try and avoid using `static` class fields, **especially** when the instances are **stateful**.
If you only require the random instance in one or a few methods that are not in a tight loop then making it a local instance is perfectly fine (just using `var rng = new SecureRandom()` in other words, or even just `new SecureRandom()` if you have *a single* method call that requires it).
|
I totally agree with Maartens's answer. However one can notice that java.util classes create statics for SecureRandom themselfes.
```
public final class UUID implements java.io.Serializable, Comparable<UUID> {
...
/*
* The random number generator used by this class to create random
* based UUIDs. In a holder class to defer initialization until needed.
*/
private static class Holder {
static final SecureRandom numberGenerator = new SecureRandom();
}
```
|
63,306,561
|
File/Directory Structure:
```
main/home/script.py
main/home/a/.init
main/home/b/.init
```
I want to setup my `gitignore` to exclude everything in the home directory but to include specific file types.
What i tried:
```
home/* #exclude everything in the home directory and subdirectories
!home/*.py #include python files immediately in the home directory
!**.init #include .init files in all directories and subdirectories.
```
The problem, i can't seem to make sure `.init` files are included. The purpose of this file is to ensure that git will create all my directories, even if they do not have files yet. As such i want to place an empty 0 byte .init file inside of each directory to ensure the "empty" directory is committed by git.
Thanks.
|
2020/08/07
|
[
"https://Stackoverflow.com/questions/63306561",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
If you want to create, e.g., `home/foo/.init` and have this file be put into Git's *index* (for more about the index, see below), you will need to tell Git *not* to cut off searches into the `home/*/` directories:
```
!home/*/
```
Then, as [Fady Adal noted](https://stackoverflow.com/a/63306691/1256452) (but I've adjusted slightly), you probably also want:
```
!**/.init
```
so that when Git searches `home/*/` it will find and de-ignore files named `.init`. Note that this de-ignores *all* `.init` files; perhaps you want:
```
!home/**/.init
```
here so that you can ignore a file named, e.g., `nothome/foo/.init`. (You might even ignore `home/**/*` while un-ignoring `home/**/*/` and `home/**/.init`.)
Long: what's going on here?
===========================
I like to say that Git stores only files, not directories, and this is true—but the *reason* it's true has to do with the way Git builds new commits, which is, from Git's *index*.
### Commits and the index
Each *commit* stores a full and complete copy of every file that Git knows about. This full-and-complete copy, however, is stored in a special, read-only, Git-only, frozen-for-all-time format in which duplicate files are automatically de-duplicated. That way, the fact that your first commit has (say) a `README.md` file that hardly ever changes, means that every commit just *shares* that `README.md` file. If it does change, the new commits begin sharing the new file. If it changes back, new commits after that go back to sharing the original file. So if there are only three *versions* of `README.md`, despite having 3 million commits, those 3 million commits all share the three *versions* of the file.
But note that these files are literally read-only. You can't change them. Not even *Git* can change them (for technical reasons having to do with hash IDs; this is true of all existing commits too). They're not in a format that most of your computer programs can use, either. That means that to *work on* the file, or even just to *look* at it, Git has to expand out the frozen-and-compressed, Git-only committed file, into an ordinary everyday form.
This means that when you pick some commit to work on it, Git has to *extract* all the files from that commit. So there are already two copies of each file: the frozen one, in the Git-only compressed-and-de-duplicated form, and the useful one, in your work-tree.
Most version control systems (VCS-es) have this same pattern: there's a committed copy of each file, in some VCS-specific form, saved inside the VCS, and there's a plain-text / ordinary-format version that you work on. Many VCSes stop here, with just the two active files (and one of them might be stored in some central repository, rather than on your computer; Git stores the VCS copy on your computer).
To make a *new* commit, then, the VCS obviously has to package up all of your work-tree (ordinary-format) files. Some version control systems literally do this. Most at least stick a cache in here to make this go faster, because doing things this way is painfully slow. Git, however, uses a sneaky trick.
In Git, there's a *third* copy of each active file. This third copy goes in what Git calls, variously, the *index*, the *staging area*, or—rarely these days—the *cache*. Technically this one isn't usually a *copy* as Git stores it in the internal, compressed-and-de-duplicated form, so it's really just a reference to a blob-hash-ID. This also means that it's ready to go into the *next* commit.
**What this means is that the index—or staging area, if you prefer that term—can be described as holding *the next commit you intend to make*.** The index takes on an expanded role during conflicted merges, so this is not a complete description, but it's good enough for thinking about it. When you use `git commit` to make a new commit, Git just packages up all the prepared, frozen-format, pre-de-duplicated files from the index. But the index holds only *files*—files with long names, like `home/a/.init` for instance, but *files*, not directories.
Checking out some commit, to work on it, means extracting the files from that commit. Git puts them—in their frozen format, but now change-able—into the index so that they're ready to make a *new* commit, and de-compresses them into ordinary format in your work-tree so that you can see and work on them. Then, when you use `git add`, you are telling Git: *Make the index copy of some file match the work-tree copy of that file.*
* If there's already an index copy, the index copy gets booted out (it's probably safely in some commit though) and Git de-duplicates the work-tree copy into the appropriate compressed, frozen-format copy and puts *that* in the index instead.
* If there *wasn't* an index copy, now there is. (It's also still de-duplicated: if you make a new file that has some old file's content, the old content from the old commit gets re-used.)
Either way, it's now ready to go into a new commit.
### This is where `.gitignore` comes in
The `.gitignore` files are somewhat misnamed. They do not literally make Git *ignore* a file. A file's presence, or absence, in new commits you make is determined strictly by whether the file was in the index at the time you ran `git commit`.
What `.gitignore` does instead is two-fold. First, when you use `git status`, Git will *complain* about files that exist in your work-tree, but are not in Git's index. This complaint comes in the form of telling you that some file is *untracked*. This is literally what untracked means: that there is a file in your work-tree, where you can see it and edit it and so on, that isn't in Git's index *right now*. That's *all* it means, since you can put a file into Git's index (`git add`) or take one out (`git rm` or `git rm --cached`) at any time. But because the index is the source of each *new* commit, it's important to know whether some file is in the index or not—which is why Git complains if it's not.
Sometimes, though, this complaint is just annoying: *Yes, I know this compiled object-code file is not in the index. Don't tell me! I **know** already and it's not important!* So, to keep Git from complaining, you list the file in another file that probably should be called `.git-do-not-complain-about-these-untracked-files`.
But that's not the only thing that you get by listing the file in `.gitgnore`. It not only shuts up `git status`, it also makes `git add` *not* actually add the file. So `git add *` or `git add .` *won't* add the object-code file, or whatever. So to keep Git from adding, you list the file in a file that perhaps should be called `.git-do-not-auto-add-these-files`.
Hence `.gitignore` might be called `.git-do-not-complain-about-these-untracked-files-and-do-not-automatically-add-them-either`. But once those files *are* in the index, a `.gitignore` entry has no effect, so maybe it should be `.git-do-not-complain-about-these-untracked-files-and-do-not-automatically-add-them-either-but-if-they-are-in-the-index-go-ahead-and-commit-them`. But that's just ridiculous, so `.gitignore` it is.
### Scanning directories is slow
When you have a massive Git repository, with millions1 of files in it, some of the things that Git normally does very quickly start really bogging down. Even at just a few hundred thousand files, some things can be slow. One of the slowest is scanning through a directory (or folder) to look for untracked files.2
By listing a *directory*, such as `home/a/`, in a `.gitignore` file, you give Git permission to take a shortcut. Normally, Git would say to itself: *Ah, here is a directory `home/a`. I must open it and read out every file in it, and see if those files are in the index or not, in order to decide whether these files are untracked and/or need to be added.* But if the entire directory is to be ignored, Git can stop a little short: *Wait! I see `home/a` is to be ignored! I can skip it entirely!* And so it goes on to `home/b/` instead of looking inside `home/a/`.
To make sure that Git *doesn't* skip a directory, you must make sure that it's not ignored. This is where trailing slashes in the `.gitignore` entries come in.
---
1Most aren't even this big, but Microsoft are working on making Git perform with repositories of this size.
2The usual trick for these kinds of speed issues is to insert a cache. The problem here is that untracked files are, by definition, not in the index. Git's index does have an extension to do some untracked caching but this can never catch everything.
---
### The `.gitignore` line format
The format of lines in `.gitignore` is:
* blank and comment lines are ignored;
* lines that start with `!` are negations;
* lines that *end* with `/` refer to directories; and
* the rest of the line names the file, complete with leading and/or embedded slashes.
A negation only makes sense to undo the effect of an earlier line. In general, later lines override earlier lines, but there is that one big exception having to do with skipping entire directories.
A line that—after any `!` marking a negation—*starts* with a slash provides a *rooted* or *anchored* path.3 So `/home` for instance means just that—`/home`—and not something like `a/home`. A line that *contains an embedded slash* is also rooted, so that `home/a` and `/home/a` both mean the same thing.
The final slash, if there, gets *removed* from the "is rooted/anchored" test. That is, `home/` and `/home/` are different, because `home` is non-rooted/non-anchored but `/home` is rooted/anchored.
As Git scans through directories (folders) and subdirectories (sub-folders), it will try matching each *file or directory name* it finds at each level to all the non-rooted / non-anchored names. Only those at the level of that particular `.gitignore` get matched against the rooted / anchored names, though.
A trailing slash in the pattern means *match only if this is a directory*. So if `home/a` is a directory, it matches both `home/*` and `home/*/`; if `home/xyz` is a file, it matches only `home/*`, not `home/*/`.
Hence, if we want to ignore all *files* underneath `home`, we use:
```
home/*
```
to ignore them. This has an embedded slash so it is rooted/anchored. Unfortunately it gives Git permission to *skip* all subdirectories, so we must counter that with:
```
!home/*/
```
which has a trailing slash so that it applies only to directories. It too is anchored.
---
3I'm borrowing the term *anchored* from regular expression descriptions here. *Rooted* refers to the top level of the Git repository work-tree. Both terms should convey the right idea; use whichever you like better.
|
It should be
```
home/* #exclude everything in the home directory and subdirectories
!home/*.py #include python files immediately in the home directory
!**/*.init #include .init files in all directories and subdirectories.
```
|
58,187,677
|
i've worked with tensorflow for a while and everything worked properly until i tried to switch to the gpu version.
Uninstalled previous tensorflow,
pip installed tensorflow-gpu (v2.0)
downloaded and installed visual studio community 2019
downloaded and installed CUDA 10.1
downloaded and installed cuDNN
tested with CUDA sample "deviceQuery\_vs2019" and got positive result.
test passed
Nvidia GeForce rtx 2070
run test with previous working file and get the error
tensorflow.python.framework.errors\_impl.InternalError: cudaGetDevice() failed. Status: cudaGetErrorString symbol not found.
after some research i've found that the supported CUDA version is 10.0
so i've downgraded the version, changed the CUDA path, but nothing changed
using this code
```
import tensorflow as tf
print("Num GPUs Available: ",
len(tf.config.experimental.list_physical_devices('GPU')))
```
i get
```
2019-10-01 16:55:03.317232: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2019-10-01 16:55:03.420537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
Num GPUs Available: 1
name: GeForce RTX 2070 major: 7 minor: 5 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
2019-10-01 16:55:03.421029: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-10-01 16:55:03.421849: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
[Finished in 2.01s]
```
CUDA seems to recognize the card, so does tensorflow, but i cannot get rid of the error:
tensorflow.python.framework.errors\_impl.InternalError: cudaGetDevice() failed. Status: cudaGetErrorString symbol not found.
what am i doing wrong? should i stick with cuda 10.0? am i missing a piece of the installation?
|
2019/10/01
|
[
"https://Stackoverflow.com/questions/58187677",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12149136/"
] |
SOLVED, it's mostly an alchemy of versions to avoid conflicts.
Here's what i've done (order matters as far as i know)
1. uninstall everything (tf, cuda, visual studio)
2. pip install tensorflow-gpu
3. download and install visual studio community 2017 (2019 won't work)
4. I also have installed the c++ workload from visual studio (not sure if it's necessary but it has the required compiler visual c++ 15.x)
5. download and install cuda 10.0 (the one i have is 10.0.130)
6. go to system environment variables (search it in the windows bar) > advanced > click Environment Variables...
7. create New user variables (do not confuse with system var)
8. Variable name: CUDA\_PATH,
9. Variable value: browse to the cuda directory down to the version directory (mine is C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0)
10. the guide says you need cudnn 7.4.1, but i got an error about expected version being 7.6 minimum. go to the nvidia developers cudnn archive and download "cudnn v7.6.0 for CUDA 10.0" (be sure you get the right file). unzip, put the cudnn files into the corresponding cuda directories (lib, include, bin).
From there everything worked like a charm. I haven't been able to build the cuda sample file from visual studio (devicequery) but it's not a vital step.
Almost every error was due to incompatible versions of the files, took me 3-4 days to figure the right mix. Hope that help :)
|
tensorflow-gpu v2.0.0 is [now available on conda](https://anaconda.org/anaconda/tensorflow-gpu), and is very easy to install with:
`conda install -c anaconda tensorflow-gpu`. No additional downloads or cuda installs required.
|
58,187,677
|
i've worked with tensorflow for a while and everything worked properly until i tried to switch to the gpu version.
Uninstalled previous tensorflow,
pip installed tensorflow-gpu (v2.0)
downloaded and installed visual studio community 2019
downloaded and installed CUDA 10.1
downloaded and installed cuDNN
tested with CUDA sample "deviceQuery\_vs2019" and got positive result.
test passed
Nvidia GeForce rtx 2070
run test with previous working file and get the error
tensorflow.python.framework.errors\_impl.InternalError: cudaGetDevice() failed. Status: cudaGetErrorString symbol not found.
after some research i've found that the supported CUDA version is 10.0
so i've downgraded the version, changed the CUDA path, but nothing changed
using this code
```
import tensorflow as tf
print("Num GPUs Available: ",
len(tf.config.experimental.list_physical_devices('GPU')))
```
i get
```
2019-10-01 16:55:03.317232: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2019-10-01 16:55:03.420537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
Num GPUs Available: 1
name: GeForce RTX 2070 major: 7 minor: 5 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
2019-10-01 16:55:03.421029: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-10-01 16:55:03.421849: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
[Finished in 2.01s]
```
CUDA seems to recognize the card, so does tensorflow, but i cannot get rid of the error:
tensorflow.python.framework.errors\_impl.InternalError: cudaGetDevice() failed. Status: cudaGetErrorString symbol not found.
what am i doing wrong? should i stick with cuda 10.0? am i missing a piece of the installation?
|
2019/10/01
|
[
"https://Stackoverflow.com/questions/58187677",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12149136/"
] |
SOLVED, it's mostly an alchemy of versions to avoid conflicts.
Here's what i've done (order matters as far as i know)
1. uninstall everything (tf, cuda, visual studio)
2. pip install tensorflow-gpu
3. download and install visual studio community 2017 (2019 won't work)
4. I also have installed the c++ workload from visual studio (not sure if it's necessary but it has the required compiler visual c++ 15.x)
5. download and install cuda 10.0 (the one i have is 10.0.130)
6. go to system environment variables (search it in the windows bar) > advanced > click Environment Variables...
7. create New user variables (do not confuse with system var)
8. Variable name: CUDA\_PATH,
9. Variable value: browse to the cuda directory down to the version directory (mine is C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0)
10. the guide says you need cudnn 7.4.1, but i got an error about expected version being 7.6 minimum. go to the nvidia developers cudnn archive and download "cudnn v7.6.0 for CUDA 10.0" (be sure you get the right file). unzip, put the cudnn files into the corresponding cuda directories (lib, include, bin).
From there everything worked like a charm. I haven't been able to build the cuda sample file from visual studio (devicequery) but it's not a vital step.
Almost every error was due to incompatible versions of the files, took me 3-4 days to figure the right mix. Hope that help :)
|
i had similar problems.
combined with the fact that i am using windows 8 and pycharm. BUt i figured it out eventually using [this post.](https://stackoverflow.com/questions/50622525/which-tensorflow-and-cuda-version-combinations-are-compatible)
the combination that worked:
* Cuda 10
* CuDNN 7.6 for windows7
* Tensorflow-gpu 2.0
* then using the path environment variable as described above.
Important is to restart after setting environment variables ;)
i did not think that tensorflow 2.2. would not be able to use cuda 11...
|
58,187,677
|
i've worked with tensorflow for a while and everything worked properly until i tried to switch to the gpu version.
Uninstalled previous tensorflow,
pip installed tensorflow-gpu (v2.0)
downloaded and installed visual studio community 2019
downloaded and installed CUDA 10.1
downloaded and installed cuDNN
tested with CUDA sample "deviceQuery\_vs2019" and got positive result.
test passed
Nvidia GeForce rtx 2070
run test with previous working file and get the error
tensorflow.python.framework.errors\_impl.InternalError: cudaGetDevice() failed. Status: cudaGetErrorString symbol not found.
after some research i've found that the supported CUDA version is 10.0
so i've downgraded the version, changed the CUDA path, but nothing changed
using this code
```
import tensorflow as tf
print("Num GPUs Available: ",
len(tf.config.experimental.list_physical_devices('GPU')))
```
i get
```
2019-10-01 16:55:03.317232: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2019-10-01 16:55:03.420537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
Num GPUs Available: 1
name: GeForce RTX 2070 major: 7 minor: 5 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
2019-10-01 16:55:03.421029: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-10-01 16:55:03.421849: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
[Finished in 2.01s]
```
CUDA seems to recognize the card, so does tensorflow, but i cannot get rid of the error:
tensorflow.python.framework.errors\_impl.InternalError: cudaGetDevice() failed. Status: cudaGetErrorString symbol not found.
what am i doing wrong? should i stick with cuda 10.0? am i missing a piece of the installation?
|
2019/10/01
|
[
"https://Stackoverflow.com/questions/58187677",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12149136/"
] |
tensorflow-gpu v2.0.0 is [now available on conda](https://anaconda.org/anaconda/tensorflow-gpu), and is very easy to install with:
`conda install -c anaconda tensorflow-gpu`. No additional downloads or cuda installs required.
|
i had similar problems.
combined with the fact that i am using windows 8 and pycharm. BUt i figured it out eventually using [this post.](https://stackoverflow.com/questions/50622525/which-tensorflow-and-cuda-version-combinations-are-compatible)
the combination that worked:
* Cuda 10
* CuDNN 7.6 for windows7
* Tensorflow-gpu 2.0
* then using the path environment variable as described above.
Important is to restart after setting environment variables ;)
i did not think that tensorflow 2.2. would not be able to use cuda 11...
|
63,887,379
|
I'm using **Visual Studio Code** with **Cloud Code** extension. When I try to "**Deploy to Cloud Run**", I'm having this error:
>
> Automatic image build detection failed. Error: Component map not
> found. Consider adding a Dockerfile to the workspace and try again.
>
>
>
But I already have a Dockerfile:
```
# Python image to use.
FROM python:3.8
# Set the working directory to /app
WORKDIR /app
# copy the requirements file used for dependencies
COPY requirements.txt .
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Install ptvsd for debugging
RUN pip install ptvsd
# Copy the rest of the working directory contents into the container at /app
COPY . .
# Run app.py when the container launches
ENTRYPOINT ["python", "-m", "ptvsd", "--port", "3000", "--host", "0.0.0.0", "manage.py", "runserver", "--noreload"]
```
How to solve that, please?
|
2020/09/14
|
[
"https://Stackoverflow.com/questions/63887379",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5411494/"
] |
I wasn't able to repro the issue, but I checked in with the Cloud Code team and it sounds like there could have been an underlying issue with gcloud that wasn't your fault.
I don't think you'll see this error again, but if you do, it would be awesome if you could file an issue at the [Cloud Code VS Code repo](https://github.com/GoogleCloudPlatform/cloud-code-vscode) so that we can gather more info and take a closer look.
This does show that we need to improve our error messages though. I've filed a bug to fix messaging regarding this scenario.
|
I don't know why, but after connecting in a different network and running the commands below, the error is gone.
```
gcloud auth revoke
gcloud auth login
gcloud init
```
|
63,544,127
|
I'm occasionally learning Java. As a person from python background, I'd like to know whether there exists something like `sorted(iterable, key=function)` of python in java.
For exmaple, in python I can sort a list ordered by an element's specific character, such as
```
>>> a_list = ['bob', 'kate', 'jaguar', 'mazda', 'honda', 'civic', 'grasshopper']
>>> s=sorted(a_list) # sort all elements in ascending order first
>>> s
['bob', 'civic', 'grasshopper', 'honda', 'jaguar', 'kate', 'mazda']
>>> sorted(s, key=lambda x: x[1]) # sort by the second character of each element
['jaguar', 'kate', 'mazda', 'civic', 'bob', 'honda', 'grasshopper']
```
So `a_list` is sorted by ascending order first, and then by each element's 1 indexed(second) character.
My question is, if I want to sort elements by specific character in ascending order in Java, how can I achieve that?
Below is the Java code that I wrote:
```
import java.util.Arrays;
public class sort_list {
public static void main(String[] args)
{
String [] a_list = {"bob", "kate", "jaguar", "mazda", "honda", "civic", "grasshopper"};
Arrays.sort(a_list);
System.out.println(Arrays.toString(a_list));}
}
}
```
and the result is like this:
```
[bob, civic, grasshopper, honda, jaguar, kate, mazda]
```
Here I only achieved sorting the array in ascending order. I want the java array as the same as the python list result.
Java is new to me, so any suggestion would be highly appreciated.
Thank you in advance.
|
2020/08/23
|
[
"https://Stackoverflow.com/questions/63544127",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11790764/"
] |
Use a custom comparator to compare two strings.
```
Arrays.sort(a_list, Comparator.comparing(s -> s.charAt(1)));
```
This compares two strings by the string's second character.
This will result in
```
[kate, jaguar, mazda, civic, bob, honda, grasshopper]
```
I see that `jaguar` and `kate` are switched in your output. I'm not sure how Python orders two String that are equal. The `Arrays.sort` does *stable* sort.
>
> This sort is guaranteed to be *stable*: equal elements will
> not be reordered as a result of the sort.
>
>
>
|
You can supply a lambda function to [`Arrays.sort`](https://docs.oracle.com/javase/7/docs/api/java/util/Arrays.html#sort(T%5B%5D,%20java.util.Comparator)). For your example, you could use:
```
Arrays.sort(a_list, (String a, String b) -> a.charAt(1) - b.charAt(1));
```
Assuming you have first sorted the array alphabetically (using `Arrays.sort(a_list)`) this will give you your desired result of:
```
['jaguar', 'kate', 'mazda', 'civic', 'bob', 'honda', 'grasshopper']
```
|
63,544,127
|
I'm occasionally learning Java. As a person from python background, I'd like to know whether there exists something like `sorted(iterable, key=function)` of python in java.
For exmaple, in python I can sort a list ordered by an element's specific character, such as
```
>>> a_list = ['bob', 'kate', 'jaguar', 'mazda', 'honda', 'civic', 'grasshopper']
>>> s=sorted(a_list) # sort all elements in ascending order first
>>> s
['bob', 'civic', 'grasshopper', 'honda', 'jaguar', 'kate', 'mazda']
>>> sorted(s, key=lambda x: x[1]) # sort by the second character of each element
['jaguar', 'kate', 'mazda', 'civic', 'bob', 'honda', 'grasshopper']
```
So `a_list` is sorted by ascending order first, and then by each element's 1 indexed(second) character.
My question is, if I want to sort elements by specific character in ascending order in Java, how can I achieve that?
Below is the Java code that I wrote:
```
import java.util.Arrays;
public class sort_list {
public static void main(String[] args)
{
String [] a_list = {"bob", "kate", "jaguar", "mazda", "honda", "civic", "grasshopper"};
Arrays.sort(a_list);
System.out.println(Arrays.toString(a_list));}
}
}
```
and the result is like this:
```
[bob, civic, grasshopper, honda, jaguar, kate, mazda]
```
Here I only achieved sorting the array in ascending order. I want the java array as the same as the python list result.
Java is new to me, so any suggestion would be highly appreciated.
Thank you in advance.
|
2020/08/23
|
[
"https://Stackoverflow.com/questions/63544127",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11790764/"
] |
You can use `Comparator.comparing` to sort the list
```
Arrays.sort(a_list, Comparator.comparing(e -> e.charAt(1)));
```
And if you want to sort and collect in the new list using Java Stream API
```
String [] listSorted = Arrays.stream(a_list)
.sorted(Comparator.comparing(s -> s.charAt(1)))
.toArray(String[]::new);
```
|
You can supply a lambda function to [`Arrays.sort`](https://docs.oracle.com/javase/7/docs/api/java/util/Arrays.html#sort(T%5B%5D,%20java.util.Comparator)). For your example, you could use:
```
Arrays.sort(a_list, (String a, String b) -> a.charAt(1) - b.charAt(1));
```
Assuming you have first sorted the array alphabetically (using `Arrays.sort(a_list)`) this will give you your desired result of:
```
['jaguar', 'kate', 'mazda', 'civic', 'bob', 'honda', 'grasshopper']
```
|
33,241,842
|
I'm using TextBlob for python to do some sentiment analysis on tweets. The default analyzer in TextBlob is the PatternAnalyzer which works resonably well and is appreciably fast.
```
sent = TextBlob(tweet.decode('utf-8')).sentiment
```
I have now tried to switch to the NaiveBayesAnalyzer and found the runtime to be impractical for my needs. (Approaching 5 seconds per tweet.)
```
sent = TextBlob(tweet.decode('utf-8'), analyzer=NaiveBayesAnalyzer()).sentiment
```
I have used the scikit learn implementation of the Naive Bayes Classifier before and did not find it to be this slow, so I'm wondering if I'm using it right in this case.
I am assuming the analyzer is pretrained, at least [the documentation](http://Naive%20Bayes%20analyzer%20that%20is%20trained%20on%20a%20dataset%20of%20movie%20reviews.) states "Naive Bayes analyzer that is trained on a dataset of movie reviews." But then it also has a function train() which is described as "Train the Naive Bayes classifier on the movie review corpus." Does it internally train the analyzer before each run? I hope not.
Does anyone know of a way to speed this up?
|
2015/10/20
|
[
"https://Stackoverflow.com/questions/33241842",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3174668/"
] |
Yes, Textblob will train the analyzer before each run. You can use following code to avoid train the analyzer everytime.
```
from textblob import Blobber
from textblob.sentiments import NaiveBayesAnalyzer
tb = Blobber(analyzer=NaiveBayesAnalyzer())
print tb("sentence you want to test")
```
|
Adding to Alan's very useful answer if you have table data in a dataframe and want to use textblob's NaiveBayesAnalyzer then this works. Just change out `word_list` for your relevant series of strings.
```
import textblob
import pandas as pd
tb = textblob.Blobber(analyzer=NaiveBayesAnalyzer())
for index, row in df.iterrows():
sent = tb(row['word_list']).sentiment
df.loc[index, 'classification'] = sent[0]
df.loc[index, 'p_pos'] = sent[1]
df.loc[index, 'p_neg'] = sent[2]
```
Above splits the tuple that `sentiment` returns into three separate series.
This works if the series is all strings but if it has mixed datatypes, as can be a problem in pandas with the `object` datatype then you might want to put a try/except block around it to catch exceptions.
On time it is doing 1000 rows in around 4.7 seconds in my tests.
Hope this is helpful.
|
33,241,842
|
I'm using TextBlob for python to do some sentiment analysis on tweets. The default analyzer in TextBlob is the PatternAnalyzer which works resonably well and is appreciably fast.
```
sent = TextBlob(tweet.decode('utf-8')).sentiment
```
I have now tried to switch to the NaiveBayesAnalyzer and found the runtime to be impractical for my needs. (Approaching 5 seconds per tweet.)
```
sent = TextBlob(tweet.decode('utf-8'), analyzer=NaiveBayesAnalyzer()).sentiment
```
I have used the scikit learn implementation of the Naive Bayes Classifier before and did not find it to be this slow, so I'm wondering if I'm using it right in this case.
I am assuming the analyzer is pretrained, at least [the documentation](http://Naive%20Bayes%20analyzer%20that%20is%20trained%20on%20a%20dataset%20of%20movie%20reviews.) states "Naive Bayes analyzer that is trained on a dataset of movie reviews." But then it also has a function train() which is described as "Train the Naive Bayes classifier on the movie review corpus." Does it internally train the analyzer before each run? I hope not.
Does anyone know of a way to speed this up?
|
2015/10/20
|
[
"https://Stackoverflow.com/questions/33241842",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3174668/"
] |
Yes, Textblob will train the analyzer before each run. You can use following code to avoid train the analyzer everytime.
```
from textblob import Blobber
from textblob.sentiments import NaiveBayesAnalyzer
tb = Blobber(analyzer=NaiveBayesAnalyzer())
print tb("sentence you want to test")
```
|
In Addition to the above solutions, I tried the above solutions and I faced errors, so if someone found this question, I solved the errors as well as tried using PatternAnalyzer the in below code:
```
from textblob import Blobber
from textblob.sentiments import NaiveBayesAnalyzer, PatternAnalyzer
import nltk
nltk.download('punkt')
nltk.download('movie_reviews')
tb = Blobber(analyzer=NaiveBayesAnalyzer())
tb1 = Blobber(analyzer=PatternAnalyzer())
print(tb("sentence you want to test").sentiment)
print(tb1("sentence you want to test").sentiment)
print(tb("I love the book").sentiment)
print(tb1("I love the book").sentiment)
```
|
68,441,299
|
I'm trying to get the number of commits of github repos using python and beautiful soup
html code:
```
<div class="flex-shrink-0">
<h2 class="sr-only">Git stats</h2>
<ul class="list-style-none d-flex">
<li class="ml-0 ml-md-3">
<a data-pjax href="..." class="pl-3 pr-3 py-3 p-md-0 mt-n3 mb-n3 mr-n3 m-md-0 Link--primary no-underline no-wrap">
<span class="d-none d-sm-inline">
<strong>26</strong>
<span aria-label="Commits on master" class="color-text-secondary d-none d-lg-inline">
commits
</span>
</span>
</a>
</li>
</ul>
</div>
```
my code:
```
r = requests.get(source_code_link)
soup = bs(r.content, 'lxml')
spans = soup.find_all('span', class_='d-none d-sm-inline')
for span in spans:
number = span.select_one('strong')
```
sometimes works but sometimes no because there are more then one `span` tag with class `d-none d-sm-inline`.
how can i solve ?
|
2021/07/19
|
[
"https://Stackoverflow.com/questions/68441299",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
```
from bs4 import BeautifulSoup as bs
html="""<span class="d-none d-sm-inline">
<strong>26</strong>
<span aria-label="Commits on master" class="color-text-secondary d-none d-lg-inline">
commits
</span>
</span>
<div class="flex-shrink-0">
<h2 class="sr-only">Git stats</h2>
<ul class="list-style-none d-flex">
<li class="ml-0 ml-md-3">
<a data-pjax href="..." class="pl-3 pr-3 py-3 p-md-0 mt-n3 mb-n3 mr-n3 m-md-0 Link--primary no-underline no-wrap">
<span class="d-none d-sm-inline">
<strong>23</strong>
<span aria-label="Commits on master" class="color-text-secondary d-none d-lg-inline">
commits
</span>
</span>
</a>
</li>
</ul>
</div>"""
```
I have combine both example which look up for tag `strong` and based on that prints data by using `.contents` method
```
soup = bs(html, 'lxml')
spans = soup.find_all('span', class_='d-none d-sm-inline')
for span in spans:
for tag in span.contents:
if tag.name=="strong" :
print(tag.get_text())
```
using list comprehension :
```
for span in spans:
data=[tag for tag in span.contents if tag.name=="strong"]
print(data[0].get_text())
```
Ouput for both case:
```
26
23
```
|
You can use the [`find_next()`](https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all-next-and-find-next) method to look for a `<strong>` after the class `d-none d-sm-inline`.
In your case:
```
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, "html.parser")
for tag in soup.find_all("span", class_="d-none d-sm-inline"):
print(tag.find_next("strong").text)
```
|
68,441,299
|
I'm trying to get the number of commits of github repos using python and beautiful soup
html code:
```
<div class="flex-shrink-0">
<h2 class="sr-only">Git stats</h2>
<ul class="list-style-none d-flex">
<li class="ml-0 ml-md-3">
<a data-pjax href="..." class="pl-3 pr-3 py-3 p-md-0 mt-n3 mb-n3 mr-n3 m-md-0 Link--primary no-underline no-wrap">
<span class="d-none d-sm-inline">
<strong>26</strong>
<span aria-label="Commits on master" class="color-text-secondary d-none d-lg-inline">
commits
</span>
</span>
</a>
</li>
</ul>
</div>
```
my code:
```
r = requests.get(source_code_link)
soup = bs(r.content, 'lxml')
spans = soup.find_all('span', class_='d-none d-sm-inline')
for span in spans:
number = span.select_one('strong')
```
sometimes works but sometimes no because there are more then one `span` tag with class `d-none d-sm-inline`.
how can i solve ?
|
2021/07/19
|
[
"https://Stackoverflow.com/questions/68441299",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Here's an approach using [list commits](https://docs.github.com/en/rest/reference/repos#list-commits) from GitHub's REST API
```
import requests
user = ... # username or organisation
repo = ... # repository name
response = requests.get(f"https://api.github.com/repos/{user}/{repo}/commits")
if response.ok:
ncommits = len(response.json())
else:
raise ValueError(f"error: {response.url} responded {response.reason}")
print(ncommits)
```
|
You can use the [`find_next()`](https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all-next-and-find-next) method to look for a `<strong>` after the class `d-none d-sm-inline`.
In your case:
```
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, "html.parser")
for tag in soup.find_all("span", class_="d-none d-sm-inline"):
print(tag.find_next("strong").text)
```
|
23,184,410
|
I have jobs scheduled thru `apscheduler`. I have 3 jobs so far, but soon will have many more. i'm looking for a way to scale my code.
Currently, each job is its own `.py` file, and in the file, I have turned the script into a function with `run()` as the function name. Here is my code.
```
from apscheduler.scheduler import Scheduler
import logging
import job1
import job2
import job3
logging.basicConfig()
sched = Scheduler()
@sched.cron_schedule(day_of_week='mon-sun', hour=7)
def runjobs():
job1.run()
job2.run()
job3.run()
sched.start()
```
This works, right now the code is just stupid, but it gets the job done. But when I have 50 jobs, the code will be stupid long. How do I scale it?
note: the actual names of the jobs are arbitrary and doesn't follow a pattern. The name of the file is `scheduler.py` and I run it using `execfile('scheduler.py')` in python shell.
|
2014/04/20
|
[
"https://Stackoverflow.com/questions/23184410",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1744744/"
] |
```
import urllib
import threading
import datetime
pages = ['http://google.com', 'http://yahoo.com', 'http://msn.com']
#------------------------------------------------------------------------------
# Getting the pages WITHOUT threads
#------------------------------------------------------------------------------
def job(url):
response = urllib.urlopen(url)
html = response.read()
def runjobs():
for page in pages:
job(page)
start = datetime.datetime.now()
runjobs()
end = datetime.datetime.now()
print "jobs run in {} microseconds WITHOUT threads" \
.format((end - start).microseconds)
#------------------------------------------------------------------------------
# Getting the pages WITH threads
#------------------------------------------------------------------------------
def job(url):
response = urllib.urlopen(url)
html = response.read()
def runjobs():
threads = []
for page in pages:
t = threading.Thread(target=job, args=(page,))
t.start()
threads.append(t)
for t in threads:
t.join()
start = datetime.datetime.now()
runjobs()
end = datetime.datetime.now()
print "jobs run in {} microsecond WITH threads" \
.format((end - start).microseconds)
```
|
Look @
<http://furius.ca/pubcode/pub/conf/bin/python-recursive-import-test>
This will help you import all python / .py files.
while importing you can create a list which keeps keeps a function call, for example.
[job1.run(),job2.run()]
Then iterate through them and call function :)
Thanks Arjun
|
29,413,719
|
When I input the variable a
```
a = int(1388620800)*1000
```
(of course) this variable is returned
```
1388620800000
```
But in the Google Appengine Server, this variable is changed by <https://cloud.google.com/appengine/docs/python/datastore/typesandpropertyclasses#int>
```
1388620800000L
```
How to convert 1388620800000L to int type in Appengine?
**EDIT**
I will print this number for JSON. In the local Python2.7, the variable 'a' is just integer. But in the appengine server, the variable 'a' has string 'L'. I solved it using str() and .replace('L',''), but I wonder how to solve it with changing number type.
|
2015/04/02
|
[
"https://Stackoverflow.com/questions/29413719",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1474183/"
] |
Maven has a [standard directory layout](https://maven.apache.org/guides/introduction/introduction-to-the-standard-directory-layout.html). The directory `src/main/resources` is intended for such application resources. Place your text files into it.
You now basically have two options where exactly to place your files:
1. The resource file belongs to a class.
An example for this is a class representing a GUI element (a panel) that needs to also show some images.
In this case place the resource file into the same directory (package) as the corresponding class. E.g. for a class named `your.pkg.YourClass` place the resource file into the directory `your/pkg`:
```
src/main
+-- java/
| +-- your/pkg/
| | +-- YourClass.java
+-- resources/
+-- your/pkg/
+-- resource-file.txt
```
You now load the resource via the corresponding class. Inside the class `your.pkg.YourClass` you have the following code snippet for loading:
```
String resource = "resource-file.txt"; // the "file name" without any package or directory
Class<?> clazz = this.getClass(); // or YourClass.class
URL resourceUrl = clazz.getResource(resource);
if (resourceUrl != null) {
try (InputStream input = resourceUrl.openStream()) {
// load the resource here from the input stream
}
}
```
Note: You can also load the resource via the class' class loader:
```
String resource = "your/pkg/resource-file.txt";
ClassLoader loader = this.getClass().getClassLoader(); // or YourClass.class.getClassLoader()
URL resourceUrl = loader.getResource(resource);
if (resourceUrl != null) {
try (InputStream input = resourceUrl.openStream()) {
// load the resource here from the input stream
}
}
```
Choose, what you find more convenient.
2. The resource belongs to the application at whole.
In this case simply place the resource directly into the `src/main/resources` directory or into an appropriate sub directory. Let's look at an example with your lookup file:
```
src/main/resources/
+-- LookupTables/
+-- LookUpTable1.txt
```
You then must load the resource via a class loader, using either the current thread's context class loader or the application class loader (whatever is more appropriate - go and search for articles on this issue if interested). I will show you both ways:
```
String resource = "LookupTables/LookUpTable1.txt";
ClassLoader ctxLoader = Thread.currentThread().getContextClassLoader();
ClassLoader sysLoader = ClassLoader.getSystemClassLoader();
URL resourceUrl = ctxLoader.getResource(resource); // or sysLoader.getResource(resource)
if (resourceUrl != null) {
try (InputStream input = resourceUrl.openStream()) {
// load the resource here from the input stream
}
}
```
As a first suggestion, use the current thread's context class loader. In a standalone application this will be the system class loader or have the system class loader as a parent. (The distinction between these class loaders will become important for libraries that also load resources.)
You should always use a class loader for loading resource. This way you make loading independent from the place (just take care that the files are inside the class path when launching the application) and you can package the whole application into a JAR file which still finds the resources.
|
I tried to reproduce your problem given the MWE you provided, but did not succeed. I uploaded my project including a pom.xml (you mentioned you used maven) here: <http://www.filedropper.com/stackoverflow>
This is what my lookup class looks like (also showing how to use the getResourceAsStream method):
```
public class LookUpClass {
final static String tableName = "resources/LookUpTables/LookUpTable1.txt";
public static String getLineFromLUT(final int line) {
final URL url = LookUpClass.class.getResource(tableName);
if (url.toString().startsWith("jar:")) {
try (final URLClassLoader loader = URLClassLoader
.newInstance(new URL[] { url })) {
return getLineFromLUT(
new InputStreamReader(
loader.getResourceAsStream(tableName)), line);
} catch (final IOException e) {
e.printStackTrace();
}
} else {
return getLineFromLUT(
new InputStreamReader(
LookUpClass.class.getResourceAsStream(tableName)),
line);
}
return null;
}
public static String getLineFromLUT(final Reader reader, final int line) {
try (final BufferedReader br = new BufferedReader(reader)) {
for (int i = 0; i < line; ++i)
br.readLine();
return br.readLine();
} catch (final IOException e) {
e.printStackTrace();
}
return null;
}
}
```
|
74,127,611
|
I am unsure if this is one of those problems that is impossible or not, in my mind it seems like it should be possible. ***Edit** - We more or less agree it is impossible*
Given a range specified by two integers (i.e. `n1 ... n2`), is it possible to create a python generator that yields a random integer from the range WITHOUT repetitions and WITHOUT loading the list of options into memory (i.e. `list(range(n1, n2))`).
Expected usage would be something like this:
```
def random_range_generator(n1, n2):
...
gen = random_range_generator(1, 6)
for n in gen:
print(n)
```
Output:
```
4
1
5
3
2
```
|
2022/10/19
|
[
"https://Stackoverflow.com/questions/74127611",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8627756/"
] |
As I stated in the comment above, what you are seeking is some of the facilities of reactive programming paradigm. (not to be confounded with the JavaScript library which borrows its name from there).
It is possible to instrument objects in Python to do so - I think the minimum setup here would be a specialized target mapping, and a special object type you set as the values in it, that would fetch the target value.
Python can do this in more straightforward ways with direct *attribute* access (using the dot notation: `myinstance.value`) than by using the key-retrieving notation used in dictionaries `mydata['value']` due to the fact a class is already a template to a certain data group, and class attributes can define mechanisms to access each instance's attribute value. That is called the "descriptor protocol" and is bound into the language model itself.
Nonetheless a minimalist Mapping based version can be implemented as such:
```
FOO = {'foo': 1}
from collections.abc import MutableMapping
class LazyValue:
def __init__(self, source, key):
self.source = source
self.key = key
def get(self):
return self.source[self.key]
def __repr__(self):
return f"<LazyValue {self.get()!r}>"
class LazyDict(MutableMapping):
def __init__(self, *args, **kw):
self.data = dict(*args, **kw)
def __getitem__(self, key):
value = self.data[key]
if isinstance(value, LazyValue):
value = value.get()
return value
def __setitem__(self, key, value):
self.data[key] = value
def __delitem__(key):
del self.data[key]
def __iter__(self):
return iter(self.data)
def __len__():
return len(self.data)
def __repr__():
return repr({key: value} for key, value in self.items())
BAR = LazyDict({'test': LazyValue(FOO, 'foo')})
# ...complex logic here which ultimately updates the value of Foo['foo']...
FOO['foo'] = 2
print(BAR['test']) # Outputs 2
```
The reason this much code is needed is that there are several ways to retrieve data from a dictionary or mapping (`.values`, `.items`, `.get`, `.setdefault`) and simply inheriting from `dict` and implementing `__getitem__` would "leak" the special lazy object in any of the other methods. Going through this MutableMapping approach ensure a single point of reading of the value in the `__getitem__` method - and the resulting instance can be used reliably anywhere a mapping is expected.
However, notice that if you are using normal classes and instances rather than dictionaries, this can be much simpler - you can just use plain Python "property" and have a getter that will fetch the value. The main factor you should ponder is whether your referenced data keys are fixed, and can be hard-coded when writting the source code, or if they are dynamic, and which keys will work as lazy-references are only known at runtime. In this last case, the custom mapping approach, as above, will be usually better:
```
FOO = {'foo': 1}
class LazyStuff:
def __init__(self, source):
self.source = source
@property
def test(self):
return self.source["foo"]
BAR = LazyStuff(FOO)
FOO["foo"] = 2
print(BAR.test)
```
Perceive that in this way you have to hardcode the key "foo" and "test" in the class body, but it is just plaincode, and no need for the intermediary "LazyValue" class. Also, if you need this data as a dictionary, you could add an `.as_dict` method to `LazyStuff` that would collect all attributes in the moment it were called and yield a snapshot of those values as a dictionary..
|
You can try using lambdas and calling the value on return. Like this:
```
FOO = {'foo': 1}
BAR = {'test': lambda: FOO['foo'] }
FOO['foo'] = 2
print(BAR['test']()) # Outputs 2
```
|
74,127,611
|
I am unsure if this is one of those problems that is impossible or not, in my mind it seems like it should be possible. ***Edit** - We more or less agree it is impossible*
Given a range specified by two integers (i.e. `n1 ... n2`), is it possible to create a python generator that yields a random integer from the range WITHOUT repetitions and WITHOUT loading the list of options into memory (i.e. `list(range(n1, n2))`).
Expected usage would be something like this:
```
def random_range_generator(n1, n2):
...
gen = random_range_generator(1, 6)
for n in gen:
print(n)
```
Output:
```
4
1
5
3
2
```
|
2022/10/19
|
[
"https://Stackoverflow.com/questions/74127611",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8627756/"
] |
As I stated in the comment above, what you are seeking is some of the facilities of reactive programming paradigm. (not to be confounded with the JavaScript library which borrows its name from there).
It is possible to instrument objects in Python to do so - I think the minimum setup here would be a specialized target mapping, and a special object type you set as the values in it, that would fetch the target value.
Python can do this in more straightforward ways with direct *attribute* access (using the dot notation: `myinstance.value`) than by using the key-retrieving notation used in dictionaries `mydata['value']` due to the fact a class is already a template to a certain data group, and class attributes can define mechanisms to access each instance's attribute value. That is called the "descriptor protocol" and is bound into the language model itself.
Nonetheless a minimalist Mapping based version can be implemented as such:
```
FOO = {'foo': 1}
from collections.abc import MutableMapping
class LazyValue:
def __init__(self, source, key):
self.source = source
self.key = key
def get(self):
return self.source[self.key]
def __repr__(self):
return f"<LazyValue {self.get()!r}>"
class LazyDict(MutableMapping):
def __init__(self, *args, **kw):
self.data = dict(*args, **kw)
def __getitem__(self, key):
value = self.data[key]
if isinstance(value, LazyValue):
value = value.get()
return value
def __setitem__(self, key, value):
self.data[key] = value
def __delitem__(key):
del self.data[key]
def __iter__(self):
return iter(self.data)
def __len__():
return len(self.data)
def __repr__():
return repr({key: value} for key, value in self.items())
BAR = LazyDict({'test': LazyValue(FOO, 'foo')})
# ...complex logic here which ultimately updates the value of Foo['foo']...
FOO['foo'] = 2
print(BAR['test']) # Outputs 2
```
The reason this much code is needed is that there are several ways to retrieve data from a dictionary or mapping (`.values`, `.items`, `.get`, `.setdefault`) and simply inheriting from `dict` and implementing `__getitem__` would "leak" the special lazy object in any of the other methods. Going through this MutableMapping approach ensure a single point of reading of the value in the `__getitem__` method - and the resulting instance can be used reliably anywhere a mapping is expected.
However, notice that if you are using normal classes and instances rather than dictionaries, this can be much simpler - you can just use plain Python "property" and have a getter that will fetch the value. The main factor you should ponder is whether your referenced data keys are fixed, and can be hard-coded when writting the source code, or if they are dynamic, and which keys will work as lazy-references are only known at runtime. In this last case, the custom mapping approach, as above, will be usually better:
```
FOO = {'foo': 1}
class LazyStuff:
def __init__(self, source):
self.source = source
@property
def test(self):
return self.source["foo"]
BAR = LazyStuff(FOO)
FOO["foo"] = 2
print(BAR.test)
```
Perceive that in this way you have to hardcode the key "foo" and "test" in the class body, but it is just plaincode, and no need for the intermediary "LazyValue" class. Also, if you need this data as a dictionary, you could add an `.as_dict` method to `LazyStuff` that would collect all attributes in the moment it were called and yield a snapshot of those values as a dictionary..
|
If you're only one level deep, you may wish to try [`ChainMap`](https://docs.python.org/3/library/collections.html#collections.ChainMap), E.g.,
```py
>>> from collections import ChainMap
>>> defaults = {'foo': 42}
>>> myvalues = {}
>>> result = ChainMap(myvalues, defaults)
>>> result['foo']
42
>>> defaults['foo'] = 99
>>> result['foo']
99
```
|
48,903,304
|
I am naive in Big data, I am trying to connect kafka to spark.
Here is my producer code
```python
import os
import sys
import pykafka
def get_text():
## This block generates my required text.
text_as_bytes=text.encode(text)
producer.produce(text_as_bytes)
if __name__ == "__main__":
client = pykafka.KafkaClient("localhost:9092")
print ("topics",client.topics)
producer = client.topics[b'imagetext'].get_producer()
get_text()
```
This is printing my generated text on console consumer when I do
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic imagetext --from-beginning
Now I want this text to be consumed using Spark and this is my Jupyter code
```python
import findspark
findspark.init()
import os
from pyspark import SparkContext, SparkConf
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
import json
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars /spark-2.1.1-bin-hadoop2.6/spark-streaming-kafka-0-8-assembly_2.11-2.1.0.jar pyspark-shell'
conf = SparkConf().setMaster("local[2]").setAppName("Streamer")
sc = SparkContext(conf=conf)
ssc = StreamingContext(sc,5)
print('ssc =================== {} {}')
kstream = KafkaUtils.createDirectStream(ssc, topics = ['imagetext'],
kafkaParams = {"metadata.broker.list": 'localhost:9092'})
print('contexts =================== {} {}')
lines = kstream.map(lambda x: x[1])
lines.pprint()
ssc.start()
ssc.awaitTermination()
ssc.stop(stopGraceFully = True)
```
But this is producing output on my Jupyter as
```
Time: 2018-02-21 15:03:25
-------------------------------------------
-------------------------------------------
Time: 2018-02-21 15:03:30
-------------------------------------------
```
Not the text that is on my console consumer..
Please help, unable to figure out the mistake.
|
2018/02/21
|
[
"https://Stackoverflow.com/questions/48903304",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9369585/"
] |
The same issue happened to me. I was unable to solve it using jsonp. What i ended up doing was to make an action in the controller that recieved the json from external url and send it to the ajax call.
For exmaple
```
return $.ajax({
type: "post",
url: "/ActionInProject/ProjectController",
});
```
then in the Controller it will be different for whichever server side language is being used. For me it was C# so i did something like
```
[HttpPost]
public JsonResult ActionInProject()
{
using(HttpClient client = new HttpClient())
{
var response = client.GetAsync("someothersite.com/api/link");
return Json(client.GetAsync());
}
}
```
|
I tried your request with Postman
and I found it not valid json in the respone you can find below what is returned of the server
```
<!DOCTYPE html><html dir="ltr"><head><title>Duolingo</title><meta charset="utf-8"><meta name="viewport" content="width=device-width,initial-scale=1,user-scalable=no"><meta name="robots" content="NOODP"><noscript><meta http-equiv="refresh" content="0; url=/nojs/splash"></noscript><meta name="apple-mobile-web-app-capable" content="yes"><meta name="apple-mobile-web-app-status-bar-style" content="black"><meta name="apple-mobile-web-app-title" content="Duolingo"><meta name="google" content="notranslate"><meta name="mobile-web-app-capable" content="yes"><meta name="apple-itunes-app" content="app-id=570060128"><meta name="google-site-verification" content="nWyTCDRw4VJS_b6YSRZiFmmj56EawZpZMhHtKXt7lkU"><link rel="chrome-webstore-item" href="https://chrome.google.com/webstore/detail/aiahmijlpehemcpleichkcokhegllfjl"><link rel="apple-touch-icon" href="//d35aaqx5ub95lt.cloudfront.net/images/duolingo-touch-icon.png"><link rel="shortcut icon" type="image/x-icon" href="//d35aaqx5ub95lt.cloudfront.net/favicon.ico"><link href="//d35aaqx5ub95lt.cloudfront.net/css/ltr-9f45956e.css" rel="stylesheet"> <link rel="manifest" href="/gcm/manifest.json"></head><body><div id="root"></div><script>window.duo={"detUiLanguages":["en","es","pt","it","fr","de","ja","zs","zt","ko","ru","hi","hu","tr"],"troubleshootingForumId":647,"uiLanguage":"en","unsupportedDirections":[],"oldWebUrlWhitelist":["^/activity_stream$","^/admin_tools$","^/c/$","^/certification/","^/comment/","^/course/","^/course_announcement/","^/courses$","^/courses/","^/debug/","^/dictionary/","^/discussion$","^/event/","^/forgot_password$","^/guidelines$","^/help$","^/j$","^/mobile$","^/p/$","^/pm/","^/power_practice/","^/preview/.+/","^/quit_classroom_session","^/redirect/","^/referred","^/register_user$","^/reset_password","^/settings/reset_lang","^/skill_practice/","^/team/","^/teams$","^/topic/","^/translation/","^/translations$","^/translations/","^/troubleshooting$","^/ui_strings/","^/upload$","^/vocab","^/welcome$","^/welcome/","^/word","^/words$"]}</script><script>window.duo.version="c89bfb9"</script><script>!function(r){function n(t){if(e[t])return e[t].exports;var s=e[t]={i:t,l:!1,exports:{}};return r[t].call(s.exports,s,s.exports,n),s.l=!0,s.exports}var t=window.webpackJsonp;window.webpackJsonp=function(e,i,o){for(var c,a,f,d=0,u=[];d<e.length;d++)a=e[d],s[a]&&u.push(s[a][0]),s[a]=0;for(c in i)Object.prototype.hasOwnProperty.call(i,c)&&(r[c]=i[c]);for(t&&t(e,i,o);u.length;)u.shift()();if(o)for(d=0;d<o.length;d++)f=n(n.s=o[d]);return f};var e={},s={31:0};n.e=function(r){function t(){c.onerror=c.onload=null,clearTimeout(a);var n=s[r];0!==n&&(n&&n[1](new Error("Loading chunk "+r+" failed.")),s[r]=void 0)}var e=s[r];if(0===e)return new Promise(function(r){r()});if(e)return e[2];var i=new Promise(function(n,t){e=s[r]=[n,t]});e[2]=i;var 
o=document.getElementsByTagName("head")[0],c=document.createElement("script");c.type="text/javascript",c.charset="utf-8",c.async=!0,c.timeout=12e4,n.nc&&c.setAttribute("nonce",n.nc),c.src=n.p+""+({0:"js/vendor",1:"js/app",2:"strings/zh-TW",3:"strings/zh-CN",4:"strings/vi",5:"strings/uk",6:"strings/tr",7:"strings/tl",8:"strings/th",9:"strings/te",10:"strings/ta",11:"strings/ru",12:"strings/ro",13:"strings/pt",14:"strings/pl",15:"strings/pa",16:"strings/nl-NL",17:"strings/ko",18:"strings/ja",19:"strings/it",20:"strings/id",21:"strings/hu",22:"strings/hi",23:"strings/fr",24:"strings/es",25:"strings/en",26:"strings/el",27:"strings/de",28:"strings/cs",29:"strings/bn",30:"strings/ar"}[r]||r)+"-"+{0:"2b9feda7",1:"662ee5e7",2:"c444b0a9",3:"a5658bf8",4:"3ea447d8",5:"1573893a",6:"c32ed883",7:"52cac8bc",8:"2c58adbb",9:"681aaba6",10:"d05b78c6",11:"f4071afb",12:"a1349f5c",13:"6a57ec9f",14:"762dfc94",15:"8a02897a",16:"4e429b1e",17:"8e921ddf",18:"524cc86b",19:"8df42324",20:"7d8a8fc5",21:"4fde5d79",22:"509b8809",23:"9f09bcfb",24:"77da48d4",25:"44cfb321",26:"13b268cc",27:"c0cac402",28:"3ecdeec1",29:"dfd2b224",30:"074ffddd"}[r]+".js";var a=setTimeout(t,12e4);return c.onerror=c.onload=t,o.appendChild(c),i},n.m=r,n.c=e,n.d=function(r,t,e){n.o(r,t)||Object.defineProperty(r,t,{configurable:!1,enumerable:!0,get:e})},n.n=function(r){var t=r&&r.__esModule?function(){return r.default}:function(){return r};return n.d(t,"a",t),t},n.o=function(r,n){return Object.prototype.hasOwnProperty.call(r,n)},n.p="/",n.oe=function(r){throw console.error(r),r}}([])</script><script src="//d35aaqx5ub95lt.cloudfront.net/js/vendor-2b9feda7.js"></script> <script src="//d35aaqx5ub95lt.cloudfront.net/strings/en-44cfb321.js"></script> <script src="//d35aaqx5ub95lt.cloudfront.net/js/app-662ee5e7.js"></script></body></html>
```
as you can see the JSON is wrapped inside a HTML page which is not valid JSON syntax it something related to the API itself. it returns a page instead of JSON object .
|
26,106,358
|
I have this code at the top of my Google App Engine program:
```
from google.appengine.api import urlfetch
urlfetch.set_default_fetch_deadline(60)
```
I am using an opener to load stuff:
```
cj = CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor( cj ) )
opener.addheaders = [ ( 'User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64)' ) ]
resp = opener.open( 'http://www.example.com/' )
```
an exception is being thrown after 5 seconds:
```
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urllib2.py", line 404, in open
response = self._open(req, data)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urllib2.py", line 422, in _open
'_open', req)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urllib2.py", line 1222, in https_open
return self.do_open(httplib.HTTPSConnection, req)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urllib2.py", line 1187, in do_open
r = h.getresponse(buffering=True)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/gae_override/httplib.py", line 524, in getresponse
raise HTTPException(str(e))
HTTPException: Deadline exceeded while waiting for HTTP response from URL: http://www.example.com
```
How can I avoid the error?
|
2014/09/29
|
[
"https://Stackoverflow.com/questions/26106358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/847663/"
] |
Did you try setting the timeout on the `.open()` call?
```
resp = opener.open('http://example.com', None, 60)
```
If you reach the timeout as specified by `set_default_fetch_deadline`, Python will throw a `DownloadError` or `DeadlineExceededErrors` exception: <https://cloud.google.com/appengine/docs/python/urlfetch/exceptions>
|
You can also patch the httplib2 library and set the deadline to 60 seconds
```
httplib2/__init__.py:
def fixed_fetch(url, payload=None, method="GET", headers={},
allow_truncated=False, follow_redirects=True,
deadline=60):
return fetch(url, payload=payload, method=method, headers=headers,
allow_truncated=allow_truncated,
follow_redirects=follow_redirects, deadline=60,
validate_certificate=validate_certificate)
return fixed_fetch
```
It is a work around.
|
13,583,649
|
We're using EngineYard which has Python installed by default. But when we enabled SSL we received the following error message from our logentries chef recipe.
"WARNING: The "ssl" module is not present. Using unreliable workaround, host identity cannot be verified. Please install "ssl" module or newer version of Python (2.6) if possible."
I'm looking for a way to install the SSL module with chef recipe but I simply don't have enough experience. Could someone point me in the right direction?
Resources:
Logentries chef recipe: <https://github.com/logentries/le_chef>
Logentries EY docs: <https://logentries.com/doc/engineyard/>
SSL Module: <http://pypi.python.org/pypi/ssl/>
|
2012/11/27
|
[
"https://Stackoverflow.com/questions/13583649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/421709/"
] |
I just wrote a recipe for this, and now am able to run the latest Logentries client on EngineYard. Here you go:
```
file_dir = "/mnt/src/python-ssl"
file_name = "ssl-1.15.tar.gz"
file_path = File.join(file_dir, file_name)
uncompressed_file_dir = File.join(file_dir, file_name.split(".tar.gz").first)

# Create the download directory
directory file_dir do
  owner "deploy"
  group "deploy"
  mode "0755"
  recursive true
  action :create
end

# Fetch the ssl module tarball (skipped if already downloaded)
remote_file file_path do
  source "http://pypi.python.org/packages/source/s/ssl/ssl-1.15.tar.gz"
  mode "0644"
  not_if { File.exists?(file_path) }
end

# Unpack the tarball (skipped if already extracted)
execute "gunzip ssl" do
  command "gunzip -c #{file_name} | tar xf -"
  cwd file_dir
  not_if { File.exists?(uncompressed_file_dir) }
end

# Install the module, then drop a marker file so re-runs skip the install
installed_file_path = File.join(uncompressed_file_dir, "installed")
execute "install python ssl module" do
  command "python setup.py install"
  cwd uncompressed_file_dir
  not_if { File.exists?(installed_file_path) }
end

execute "touch #{installed_file_path}" do
  action :run
end
```
|
You could install a new Python using PythonBrew: <https://github.com/utahta/pythonbrew>. Just make sure you install libssl before you build, or it still won't be able to use SSL. However, based on the warning, it seems that SSL *might* work, but it won't be able to verify the host. Of course, that is one of the major purposes of SSL, so that is likely a non-starter.
HTH
|
13,583,649
|
We're using EngineYard which has Python installed by default. But when we enabled SSL we received the following error message from our logentries chef recipe.
"WARNING: The "ssl" module is not present. Using unreliable workaround, host identity cannot be verified. Please install "ssl" module or newer version of Python (2.6) if possible."
I'm looking for a way to install the SSL module with chef recipe but I simply don't have enough experience. Could someone point me in the right direction?
Resources:
Logentries chef recipe: <https://github.com/logentries/le_chef>
Logentries EY docs: <https://logentries.com/doc/engineyard/>
SSL Module: <http://pypi.python.org/pypi/ssl/>
|
2012/11/27
|
[
"https://Stackoverflow.com/questions/13583649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/421709/"
] |
There now appears to be a solution with better community support (based on the fact that it is documented on the [opscode website](http://docs.opscode.com/lwrp_python.html)).
You might try:
```
include_recipe 'python'
python_pip 'ssl'
```
As documented: [here](https://supermarket.getchef.com/cookbooks/python) or [here](https://github.com/poise/python)
|
You could install a new Python using PythonBrew: <https://github.com/utahta/pythonbrew>. Just make sure you install libssl before you build, or it still won't be able to use SSL. However, based on the warning, it seems that SSL *might* work, but it won't be able to verify the host. Of course, that is one of the major purposes of SSL, so that is likely a non-starter.
HTH
|
13,583,649
|
We're using EngineYard which has Python installed by default. But when we enabled SSL we received the following error message from our logentries chef recipe.
"WARNING: The "ssl" module is not present. Using unreliable workaround, host identity cannot be verified. Please install "ssl" module or newer version of Python (2.6) if possible."
I'm looking for a way to install the SSL module with chef recipe but I simply don't have enough experience. Could someone point me in the right direction?
Resources:
Logentries chef recipe: <https://github.com/logentries/le_chef>
Logentries EY docs: <https://logentries.com/doc/engineyard/>
SSL Module: <http://pypi.python.org/pypi/ssl/>
|
2012/11/27
|
[
"https://Stackoverflow.com/questions/13583649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/421709/"
] |
There now appears to be a solution with better community support (based on the fact that it is documented on the [opscode website](http://docs.opscode.com/lwrp_python.html)).
You might try:
```
include_recipe 'python'
python_pip 'ssl'
```
As documented: [here](https://supermarket.getchef.com/cookbooks/python) or [here](https://github.com/poise/python)
|
I just wrote a recipe for this, and now am able to run the latest Logentries client on EngineYard. Here you go:
```
file_dir = "/mnt/src/python-ssl"
file_name = "ssl-1.15.tar.gz"
file_path = File.join(file_dir, file_name)
uncompressed_file_dir = File.join(file_dir, file_name.split(".tar.gz").first)

# Create the download directory
directory file_dir do
  owner "deploy"
  group "deploy"
  mode "0755"
  recursive true
  action :create
end

# Fetch the ssl module tarball (skipped if already downloaded)
remote_file file_path do
  source "http://pypi.python.org/packages/source/s/ssl/ssl-1.15.tar.gz"
  mode "0644"
  not_if { File.exists?(file_path) }
end

# Unpack the tarball (skipped if already extracted)
execute "gunzip ssl" do
  command "gunzip -c #{file_name} | tar xf -"
  cwd file_dir
  not_if { File.exists?(uncompressed_file_dir) }
end

# Install the module, then drop a marker file so re-runs skip the install
installed_file_path = File.join(uncompressed_file_dir, "installed")
execute "install python ssl module" do
  command "python setup.py install"
  cwd uncompressed_file_dir
  not_if { File.exists?(installed_file_path) }
end

execute "touch #{installed_file_path}" do
  action :run
end
```
|
65,534,980
|
This is a little niche, but I want to click on a discord reaction with selenium (python), but only the reaction that has a specific img src.
I had it working where it would click on reactions; however, it was clicking on every reaction, not just the one I wanted.
I've tried to make it click only on a certain element using the aria-label; however, this didn't work either.
```
if i.get_attribute('aria-label') == "️, press to react":
```
My function:
```
def Bot():
elements = driver.find_elements_by_xpath("//div[contains(@class, 'message-2qnXI6')]/div[2]/div/div[1]/div/div/div")
for i in elements:
if i.find_element_by_xpath("//div[contains(@class, 'message-2qnXI6')]/div[2]/div/div[1]/div/div/div/img").get_attribute("src") == "https://discord.com/assets/e14bea9653868307f4d1e70fa17e2535.svg":
i.click()
continue
else:
break
```
The HTML:
```
<div class="reactionInner-15NvIl" aria-label="️, press to react" aria-pressed="true" role="button" tabindex="0">
<img src="/assets/e14bea9653868307f4d1e70fa17e2535.svg" alt="️" draggable="false" class="emoji">
<div class="reactionCount-2mvXRV" style="min-width: 9px;">2</div></div>
```
I'm just not exactly sure how to go about this and "verify" that the element I'm clicking on has the specific img src I want.
Any help is appreciated.
|
2021/01/02
|
[
"https://Stackoverflow.com/questions/65534980",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14924745/"
] |
The characters allowed in XML element names are given by the [W3C XML BNF for component names](https://www.w3.org/TR/xml/#NT-NameStartChar):
>
>
> ```
> NameStartChar ::= ":" | [A-Z] | "_" | [a-z] | [#xC0-#xD6] | [#xD8-#xF6] |
> [#xF8-#x2FF] | [#x370-#x37D] | [#x37F-#x1FFF] |
> [#x200C-#x200D] | [#x2070-#x218F] | [#x2C00-#x2FEF] |
> [#x3001-#xD7FF] | [#xF900-#xFDCF] | [#xFDF0-#xFFFD] |
> [#x10000-#xEFFFF]
> NameChar ::= NameStartChar | "-" | "." | [0-9] | #xB7 | [#x0300-#x036F] |
> [#x203F-#x2040]
> Name ::= NameStartChar (NameChar)*
>
> ```
>
>
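For illustration, a minimal sketch that tests candidate names against an ASCII-only approximation of this grammar (the full Unicode ranges in the BNF are omitted for brevity):
```
import re

# ASCII subset of NameStartChar and NameChar from the BNF above
NAME_START = r"[:A-Za-z_]"
NAME_CHAR = r"[:A-Za-z_\-.0-9]"
XML_NAME = re.compile(r"^{0}{1}*$".format(NAME_START, NAME_CHAR))

for candidate in ["valid-name", "_ok.too", "ns:tag", "1bad", "no spaces"]:
    print(candidate, "->", bool(XML_NAME.match(candidate)))
```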
See also
--------
* [Is a colon a legal first character in an XML tag name?](https://stackoverflow.com/q/40445735/290085)
* [Represent space and tab in XML tag](https://stackoverflow.com/q/514635/290085)
* [How to include ? and / in XML tag](https://stackoverflow.com/q/59621748/290085)
* [Encoding space character in XML name](https://stackoverflow.com/q/46634193/290085)
|
Right: letters, digits, hyphens, underscores and periods.
One may use any Unicode letter.
And of course one may prefix names with a *namespace* prefix and a colon.
|
8,281,119
|
So I am trying to do this problem where I have to find the most frequent 6-letter string within some lines in python, so I realize one could do something like this:
```
>>> from collections import Counter
>>> x = Counter("ACGTGCA")
>>> x
Counter({'A': 2, 'C': 2, 'G': 2, 'T': 1})
```
Now then, the data I'm using is DNA files and the format of a file goes something like this:
```
> name of the protein
ACGTGCA ... < more sequences>
ACGTGCA ... < more sequences>
ACGTGCA ... < more sequences>
ACGTGCA ... < more sequences>
> another protein
AGTTTCAGGAC ... <more sequences>
AGTTTCAGGAC ... <more sequences>
AGTTTCAGGAC ... <more sequences>
AGTTTCAGGAC ... <more sequences>
```
We can start by going one protein at a time, but then how does one modify the block of code above to search for the most frequent 6-character string pattern? Thanks.
|
2011/11/26
|
[
"https://Stackoverflow.com/questions/8281119",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1067233/"
] |
Let us go through a slightly modified version of your test code as it is seen by `irb` and as a stand-alone script:
```
def test_method;end
symbols = Symbol.all_symbols # This is already a "fixed" array, no need for map
puts symbols.include?(:test_method)
puts symbols.include?('test_method_nonexistent'.to_sym)
puts symbols.include?(:test_method_nonexistent)
eval 'puts symbols.include?(:really_not_there)'
```
When you try this in `irb`, each line will be parsed and evaluated before the next line. When you hit the second line, `symbols` will contain `:test_method` because `def test_method;end` has already been evaluated. But, the `:test_method_nonexistent` symbol hasn't been seen anywhere when we hit line 2 so lines 4 and 5 will say "false". Line 6 will, of course, give us another false because `:really_not_there` doesn't exist until after `eval` returns. So `irb` says this:
```
true
false
false
false
```
If you run this as a Ruby script, things happen in a slightly different order. First Ruby will parse the script into an internal format that the Ruby VM understands and then it goes back to the first line and starts executing the script. When the script is being parsed, the `:test_method` symbol will exist after the first line is parsed and `:test_method_nonexistent` will exist after the fifth line has been parsed; so, before the script runs, two of the symbols we're interested in are known. When we hit line six, Ruby just sees an `eval` and a string but it doesn't yet know that the `eval` will cause a symbol to come into existence.
Now we have two of our symbols (`:test_method` and `:test_method_nonexistent`) and a simple string that, when fed to `eval`, will create a symbol (`:really_not_there`). Then we go back to the beginning and the VM starts running code. When we run line 2 and cache our symbols array, both `:test_method` and `:test_method_nonexistent` will exist and appear in the `symbols` array because the parser created them. So lines 3 through 5:
```
puts symbols.include?(:test_method)
puts symbols.include?('test_method_nonexistent'.to_sym)
puts symbols.include?(:test_method_nonexistent)
```
will print "true". Then we hit line 6:
```
eval 'puts symbols.include?(:really_not_there)'
```
and "false" is printed because `:really_not_there` is created by the `eval` at run-time rather than during parsing. The result is that Ruby says:
```
true
true
true
false
```
If we add this at the end:
```
symbols = Symbol.all_symbols
puts symbols.include?('really_not_there'.to_sym)
```
Then we'll get another "true" out of both `irb` and the stand-alone script because `eval` will have created `:really_not_there` and we will have grabbed a fresh copy of the symbol list.
|
The reason you have to convert symbols to strings when checking for existence of a symbol is that it will always return true otherwise. The argument being passed to the `include?` method gets evaluated first, so if you pass it a symbol, a new symbol is instantiated and added to the heap, so `Symbol.all_symbols` does, in fact, have a copy of the symbol.
```
Symbol.all_symbols.include? :the_crow_flies_at_midnight #=> true
```
However, converting thousands of symbols to strings for comparison (which is much faster with symbols) is a poor solution. A better method would be to change the order that these statements get evaluated:
```
symbols = Symbol.all_symbols
symbols.include? :the_crow_flies_at_midnight #=> false
```
This "snapshot" of what symbols are in the dictionary is taken before our tested symbol is inserted onto the heap, so even though our argument exists on the heap at the time the `include?` method is called, we are getting the result we expect.
I don't know why it isn't working in your IRB console. Perhaps you mistyped.
|
9,917,628
|
I'm attempting to follow Heroku's python quickstart guide but am running into repeated problems. At the moment, "git push heroku master" is failing because it cannot install Bonjour. Does anyone know if this is a truly necessary requirement, and whether I can change the version required, or somehow otherwise fix this? Full text of push follows below.
```
(venv)172-26-12-64:helloflask Spike$ git push heroku master
Counting objects: 488, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (444/444), done.
Writing objects: 100% (488/488), 1.43 MiB, done.
Total 488 (delta 33), reused 0 (delta 0)
-----> Heroku receiving push
-----> Python app detected
-----> Preparing virtualenv version 1.7
New python executable in ./bin/python
Installing distribute.............................................................................................................................................................................................done.
Installing pip...............done.
-----> Activating virtualenv
-----> Installing dependencies using pip version 1.0.2
Downloading/unpacking Flask==0.8 (from -r requirements.txt (line 1))
Creating supposed download cache at /app/tmp/repo.git/.cache/pip_downloads
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FF%2FFlask%2FFlask-0.8.tar.gz
Running setup.py egg_info for package Flask
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
warning: no previously-included files matching '*.pyo' found under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'examples'
warning: no previously-included files matching '*.pyo' found under directory 'examples'
no previously-included directories found matching 'docs/_build'
no previously-included directories found matching 'docs/_themes/.git'
Downloading/unpacking IMAPClient==0.8 (from -r requirements.txt (line 2))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Ffreshfoo.com%2Fprojects%2FIMAPClient%2FIMAPClient-0.8.zip
Running setup.py egg_info for package IMAPClient
no previously-included directories found matching 'doc/doctrees/'
Downloading/unpacking Jinja2==2.6 (from -r requirements.txt (line 3))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FJ%2FJinja2%2FJinja2-2.6.tar.gz
Running setup.py egg_info for package Jinja2
warning: no previously-included files matching '*' found under directory 'docs/_build'
warning: no previously-included files matching '*.pyc' found under directory 'jinja2'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'jinja2'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
Downloading/unpacking PIL==1.1.7 (from -r requirements.txt (line 4))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Feffbot.org%2Fmedia%2Fdownloads%2FPIL-1.1.7.tar.gz
Running setup.py egg_info for package PIL
WARNING: '' not a valid package name; please use only.-separated package names in setup.py
Downloading/unpacking PyRSS2Gen==1.0.0 (from -r requirements.txt (line 5))
Downloading PyRSS2Gen-1.0.0.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FP%2FPyRSS2Gen%2FPyRSS2Gen-1.0.0.tar.gz
Running setup.py egg_info for package PyRSS2Gen
Downloading/unpacking PyYAML==3.10 (from -r requirements.txt (line 6))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FP%2FPyYAML%2FPyYAML-3.10.tar.gz
Running setup.py egg_info for package PyYAML
Downloading/unpacking Twisted==11.0.0 (from -r requirements.txt (line 7))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FT%2FTwisted%2FTwisted-11.0.0.tar.bz2
Running setup.py egg_info for package Twisted
Downloading/unpacking WebOb==1.1.1 (from -r requirements.txt (line 8))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FW%2FWebOb%2FWebOb-1.1.1.zip
Running setup.py egg_info for package WebOb
no previously-included directories found matching '*.pyc'
no previously-included directories found matching '*.pyo'
Downloading/unpacking Werkzeug==0.8.3 (from -r requirements.txt (line 9))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FW%2FWerkzeug%2FWerkzeug-0.8.3.tar.gz
Running setup.py egg_info for package Werkzeug
warning: no files found matching '*' under directory 'werkzeug/debug/templates'
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
warning: no previously-included files matching '*.pyo' found under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'examples'
warning: no previously-included files matching '*.pyo' found under directory 'examples'
no previously-included directories found matching 'docs/_build'
Downloading/unpacking altgraph==0.7.2 (from -r requirements.txt (line 10))
Downloading altgraph-0.7.2.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fa%2Faltgraph%2Faltgraph-0.7.2.tar.gz
Running setup.py egg_info for package altgraph
warning: no files found matching '*.txt'
warning: no previously-included files matching '.DS_Store' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
Downloading/unpacking apipkg==1.0 (from -r requirements.txt (line 11))
Downloading apipkg-1.0.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fa%2Fapipkg%2Fapipkg-1.0.tar.gz
Running setup.py egg_info for package apipkg
no previously-included directories found matching '.svn'
no previously-included directories found matching '.hg'
Downloading/unpacking bdist-mpkg==0.4.4 (from -r requirements.txt (line 12))
Downloading bdist_mpkg-0.4.4.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fa.pypi.python.org%2Fpackages%2Fsource%2Fb%2Fbdist_mpkg%2Fbdist_mpkg-0.4.4.tar.gz
Running setup.py egg_info for package bdist-mpkg
Downloading/unpacking beautifulsoup4==4.0.1 (from -r requirements.txt (line 13))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fb%2Fbeautifulsoup4%2Fbeautifulsoup4-4.0.1.tar.gz
Running setup.py egg_info for package beautifulsoup4
Downloading/unpacking bonjour-py==0.3 (from -r requirements.txt (line 14))
Could not find any downloads that satisfy the requirement bonjour-py==0.3 (from -r requirements.txt (line 14))
No distributions at all found for bonjour-py==0.3 (from -r requirements.txt (line 14))
Storing complete log in /app/.pip/pip.log
! Heroku push rejected, failed to compile Python app
To git@heroku.com:radiant-night-5176.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'git@heroku.com:radiant-night-5176.git'
```
|
2012/03/29
|
[
"https://Stackoverflow.com/questions/9917628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/723212/"
] |
Are you trying to push the example app from the quickstart? Many of the requirements you're trying to install aren't required at all.
I suspect that you created your requirements file outside of the recommended virtualenv, and that twisted and bonjour-py are packages installed in your system Python installation.
|
Why is `bonjour-py` in your requirements.txt file? Removing it should fix your problem.
Furthermore, I can't seem to install it either, so it's no wonder Heroku fails there.
```
(so)modocache $ pip install bonjour-py
Downloading/unpacking bonjour-py
Could not find any downloads that satisfy the requirement bonjour-py
No distributions at all found for bonjour-py
Storing complete log in /Users/modocache/.pip/pip.log
```
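If `bonjour-py` made it into your `requirements.txt` by accident, a small sketch that strips the unresolvable pin (assumes the file is in the current directory):
```
# Drop the bonjour-py line from requirements.txt
with open('requirements.txt') as f:
    lines = f.readlines()

with open('requirements.txt', 'w') as f:
    f.writelines(line for line in lines
                 if not line.startswith('bonjour-py'))
```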
|
9,917,628
|
I'm attempting to follow Heroku's python quickstart guide but am running into repeated problems. At the moment, "git push heroku master" is failing because it cannot install Bonjour. Does anyone know if this is a truly necessary requirement, and whether I can change the version required, or somehow otherwise fix this? Full text of push follows below.
```
(venv)172-26-12-64:helloflask Spike$ git push heroku master
Counting objects: 488, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (444/444), done.
Writing objects: 100% (488/488), 1.43 MiB, done.
Total 488 (delta 33), reused 0 (delta 0)
-----> Heroku receiving push
-----> Python app detected
-----> Preparing virtualenv version 1.7
New python executable in ./bin/python
Installing distribute.............................................................................................................................................................................................done.
Installing pip...............done.
-----> Activating virtualenv
-----> Installing dependencies using pip version 1.0.2
Downloading/unpacking Flask==0.8 (from -r requirements.txt (line 1))
Creating supposed download cache at /app/tmp/repo.git/.cache/pip_downloads
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FF%2FFlask%2FFlask-0.8.tar.gz
Running setup.py egg_info for package Flask
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
warning: no previously-included files matching '*.pyo' found under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'examples'
warning: no previously-included files matching '*.pyo' found under directory 'examples'
no previously-included directories found matching 'docs/_build'
no previously-included directories found matching 'docs/_themes/.git'
Downloading/unpacking IMAPClient==0.8 (from -r requirements.txt (line 2))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Ffreshfoo.com%2Fprojects%2FIMAPClient%2FIMAPClient-0.8.zip
Running setup.py egg_info for package IMAPClient
no previously-included directories found matching 'doc/doctrees/'
Downloading/unpacking Jinja2==2.6 (from -r requirements.txt (line 3))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FJ%2FJinja2%2FJinja2-2.6.tar.gz
Running setup.py egg_info for package Jinja2
warning: no previously-included files matching '*' found under directory 'docs/_build'
warning: no previously-included files matching '*.pyc' found under directory 'jinja2'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'jinja2'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
Downloading/unpacking PIL==1.1.7 (from -r requirements.txt (line 4))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Feffbot.org%2Fmedia%2Fdownloads%2FPIL-1.1.7.tar.gz
Running setup.py egg_info for package PIL
WARNING: '' not a valid package name; please use only.-separated package names in setup.py
Downloading/unpacking PyRSS2Gen==1.0.0 (from -r requirements.txt (line 5))
Downloading PyRSS2Gen-1.0.0.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FP%2FPyRSS2Gen%2FPyRSS2Gen-1.0.0.tar.gz
Running setup.py egg_info for package PyRSS2Gen
Downloading/unpacking PyYAML==3.10 (from -r requirements.txt (line 6))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FP%2FPyYAML%2FPyYAML-3.10.tar.gz
Running setup.py egg_info for package PyYAML
Downloading/unpacking Twisted==11.0.0 (from -r requirements.txt (line 7))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FT%2FTwisted%2FTwisted-11.0.0.tar.bz2
Running setup.py egg_info for package Twisted
Downloading/unpacking WebOb==1.1.1 (from -r requirements.txt (line 8))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FW%2FWebOb%2FWebOb-1.1.1.zip
Running setup.py egg_info for package WebOb
no previously-included directories found matching '*.pyc'
no previously-included directories found matching '*.pyo'
Downloading/unpacking Werkzeug==0.8.3 (from -r requirements.txt (line 9))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FW%2FWerkzeug%2FWerkzeug-0.8.3.tar.gz
Running setup.py egg_info for package Werkzeug
warning: no files found matching '*' under directory 'werkzeug/debug/templates'
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
warning: no previously-included files matching '*.pyo' found under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'examples'
warning: no previously-included files matching '*.pyo' found under directory 'examples'
no previously-included directories found matching 'docs/_build'
Downloading/unpacking altgraph==0.7.2 (from -r requirements.txt (line 10))
Downloading altgraph-0.7.2.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fa%2Faltgraph%2Faltgraph-0.7.2.tar.gz
Running setup.py egg_info for package altgraph
warning: no files found matching '*.txt'
warning: no previously-included files matching '.DS_Store' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
Downloading/unpacking apipkg==1.0 (from -r requirements.txt (line 11))
Downloading apipkg-1.0.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fa%2Fapipkg%2Fapipkg-1.0.tar.gz
Running setup.py egg_info for package apipkg
no previously-included directories found matching '.svn'
no previously-included directories found matching '.hg'
Downloading/unpacking bdist-mpkg==0.4.4 (from -r requirements.txt (line 12))
Downloading bdist_mpkg-0.4.4.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fa.pypi.python.org%2Fpackages%2Fsource%2Fb%2Fbdist_mpkg%2Fbdist_mpkg-0.4.4.tar.gz
Running setup.py egg_info for package bdist-mpkg
Downloading/unpacking beautifulsoup4==4.0.1 (from -r requirements.txt (line 13))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fb%2Fbeautifulsoup4%2Fbeautifulsoup4-4.0.1.tar.gz
Running setup.py egg_info for package beautifulsoup4
Downloading/unpacking bonjour-py==0.3 (from -r requirements.txt (line 14))
Could not find any downloads that satisfy the requirement bonjour-py==0.3 (from -r requirements.txt (line 14))
No distributions at all found for bonjour-py==0.3 (from -r requirements.txt (line 14))
Storing complete log in /app/.pip/pip.log
! Heroku push rejected, failed to compile Python app
To git@heroku.com:radiant-night-5176.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'git@heroku.com:radiant-night-5176.git'
```
|
2012/03/29
|
[
"https://Stackoverflow.com/questions/9917628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/723212/"
] |
Maybe you are using a virtual environment and you just forgot to activate it.
In this case, depending on how you stored the virtual environment in your project, you will probably first need to
```
source venv/bin/activate
```
and after that freeze your requirements
```
pip freeze > requirements.txt
```
finally
```
git push heroku master
```
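As a quick sanity check before freezing, a small sketch that reports whether a virtual environment is currently active (covers both `virtualenv` and the stdlib `venv`):
```
import sys

# virtualenv sets sys.real_prefix; the stdlib venv makes sys.prefix differ
# from sys.base_prefix when a virtual environment is active
in_venv = hasattr(sys, 'real_prefix') or \
    sys.prefix != getattr(sys, 'base_prefix', sys.prefix)
print('virtualenv active:', in_venv)
```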
|
Why is `bonjour-py` in your requirements.txt file? Removing it should fix your problem.
Furthermore, I can't seem to install it either, so it's no wonder Heroku fails there.
```
(so)modocache $ pip install bonjour-py
Downloading/unpacking bonjour-py
Could not find any downloads that satisfy the requirement bonjour-py
No distributions at all found for bonjour-py
Storing complete log in /Users/modocache/.pip/pip.log
```
|
9,917,628
|
I'm attempting to follow Heroku's python quickstart guide but am running into repeated problems. At the moment, "git push heroku master" is failing because it cannot install Bonjour. Does anyone know if this is a truly necessary requirement, and whether I can change the version required, or somehow otherwise fix this? Full text of push follows below.
```
(venv)172-26-12-64:helloflask Spike$ git push heroku master
Counting objects: 488, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (444/444), done.
Writing objects: 100% (488/488), 1.43 MiB, done.
Total 488 (delta 33), reused 0 (delta 0)
-----> Heroku receiving push
-----> Python app detected
-----> Preparing virtualenv version 1.7
New python executable in ./bin/python
Installing distribute.............................................................................................................................................................................................done.
Installing pip...............done.
-----> Activating virtualenv
-----> Installing dependencies using pip version 1.0.2
Downloading/unpacking Flask==0.8 (from -r requirements.txt (line 1))
Creating supposed download cache at /app/tmp/repo.git/.cache/pip_downloads
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FF%2FFlask%2FFlask-0.8.tar.gz
Running setup.py egg_info for package Flask
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
warning: no previously-included files matching '*.pyo' found under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'examples'
warning: no previously-included files matching '*.pyo' found under directory 'examples'
no previously-included directories found matching 'docs/_build'
no previously-included directories found matching 'docs/_themes/.git'
Downloading/unpacking IMAPClient==0.8 (from -r requirements.txt (line 2))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Ffreshfoo.com%2Fprojects%2FIMAPClient%2FIMAPClient-0.8.zip
Running setup.py egg_info for package IMAPClient
no previously-included directories found matching 'doc/doctrees/'
Downloading/unpacking Jinja2==2.6 (from -r requirements.txt (line 3))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FJ%2FJinja2%2FJinja2-2.6.tar.gz
Running setup.py egg_info for package Jinja2
warning: no previously-included files matching '*' found under directory 'docs/_build'
warning: no previously-included files matching '*.pyc' found under directory 'jinja2'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'jinja2'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
Downloading/unpacking PIL==1.1.7 (from -r requirements.txt (line 4))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Feffbot.org%2Fmedia%2Fdownloads%2FPIL-1.1.7.tar.gz
Running setup.py egg_info for package PIL
WARNING: '' not a valid package name; please use only.-separated package names in setup.py
Downloading/unpacking PyRSS2Gen==1.0.0 (from -r requirements.txt (line 5))
Downloading PyRSS2Gen-1.0.0.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FP%2FPyRSS2Gen%2FPyRSS2Gen-1.0.0.tar.gz
Running setup.py egg_info for package PyRSS2Gen
Downloading/unpacking PyYAML==3.10 (from -r requirements.txt (line 6))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FP%2FPyYAML%2FPyYAML-3.10.tar.gz
Running setup.py egg_info for package PyYAML
Downloading/unpacking Twisted==11.0.0 (from -r requirements.txt (line 7))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FT%2FTwisted%2FTwisted-11.0.0.tar.bz2
Running setup.py egg_info for package Twisted
Downloading/unpacking WebOb==1.1.1 (from -r requirements.txt (line 8))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FW%2FWebOb%2FWebOb-1.1.1.zip
Running setup.py egg_info for package WebOb
no previously-included directories found matching '*.pyc'
no previously-included directories found matching '*.pyo'
Downloading/unpacking Werkzeug==0.8.3 (from -r requirements.txt (line 9))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FW%2FWerkzeug%2FWerkzeug-0.8.3.tar.gz
Running setup.py egg_info for package Werkzeug
warning: no files found matching '*' under directory 'werkzeug/debug/templates'
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
warning: no previously-included files matching '*.pyo' found under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'examples'
warning: no previously-included files matching '*.pyo' found under directory 'examples'
no previously-included directories found matching 'docs/_build'
Downloading/unpacking altgraph==0.7.2 (from -r requirements.txt (line 10))
Downloading altgraph-0.7.2.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fa%2Faltgraph%2Faltgraph-0.7.2.tar.gz
Running setup.py egg_info for package altgraph
warning: no files found matching '*.txt'
warning: no previously-included files matching '.DS_Store' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
Downloading/unpacking apipkg==1.0 (from -r requirements.txt (line 11))
Downloading apipkg-1.0.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fa%2Fapipkg%2Fapipkg-1.0.tar.gz
Running setup.py egg_info for package apipkg
no previously-included directories found matching '.svn'
no previously-included directories found matching '.hg'
Downloading/unpacking bdist-mpkg==0.4.4 (from -r requirements.txt (line 12))
Downloading bdist_mpkg-0.4.4.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fa.pypi.python.org%2Fpackages%2Fsource%2Fb%2Fbdist_mpkg%2Fbdist_mpkg-0.4.4.tar.gz
Running setup.py egg_info for package bdist-mpkg
Downloading/unpacking beautifulsoup4==4.0.1 (from -r requirements.txt (line 13))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fb%2Fbeautifulsoup4%2Fbeautifulsoup4-4.0.1.tar.gz
Running setup.py egg_info for package beautifulsoup4
Downloading/unpacking bonjour-py==0.3 (from -r requirements.txt (line 14))
Could not find any downloads that satisfy the requirement bonjour-py==0.3 (from -r requirements.txt (line 14))
No distributions at all found for bonjour-py==0.3 (from -r requirements.txt (line 14))
Storing complete log in /app/.pip/pip.log
! Heroku push rejected, failed to compile Python app
To git@heroku.com:radiant-night-5176.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'git@heroku.com:radiant-night-5176.git'
```
|
2012/03/29
|
[
"https://Stackoverflow.com/questions/9917628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/723212/"
] |
Why is `bonjour-py` in your requirements.txt file? Removing it should fix your problem.
Furthermore, I can't seem to install it either, so it's no wonder Heroku fails there.
```
(so)modocache $ pip install bonjour-py
Downloading/unpacking bonjour-py
Could not find any downloads that satisfy the requirement bonjour-py
No distributions at all found for bonjour-py
Storing complete log in /Users/modocache/.pip/pip.log
```
|
I had the same issue with a project referencing `bonjour-py` in its `requirements.txt`, and at the time I didn't know which package pulled it in or how to track it down.
Then someone pointed me to **[pip-tools](https://github.com/nvie/pip-tools)**. It's a great alternative for identifying which packages you actually have and deciding whether you want to update them. And as a bonus, it handled the `bonjour-py` error gracefully.
|
9,917,628
|
I'm attempting to follow Heroku's python quickstart guide but am running into repeated problems. At the moment, "git push heroku master" is failing because it cannot install Bonjour. Does anyone know if this is a truly necessary requirement, and whether I can change the version required, or somehow otherwise fix this? Full text of push follows below.
```
(venv)172-26-12-64:helloflask Spike$ git push heroku master
Counting objects: 488, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (444/444), done.
Writing objects: 100% (488/488), 1.43 MiB, done.
Total 488 (delta 33), reused 0 (delta 0)
-----> Heroku receiving push
-----> Python app detected
-----> Preparing virtualenv version 1.7
New python executable in ./bin/python
Installing distribute.............................................................................................................................................................................................done.
Installing pip...............done.
-----> Activating virtualenv
-----> Installing dependencies using pip version 1.0.2
Downloading/unpacking Flask==0.8 (from -r requirements.txt (line 1))
Creating supposed download cache at /app/tmp/repo.git/.cache/pip_downloads
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FF%2FFlask%2FFlask-0.8.tar.gz
Running setup.py egg_info for package Flask
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
warning: no previously-included files matching '*.pyo' found under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'examples'
warning: no previously-included files matching '*.pyo' found under directory 'examples'
no previously-included directories found matching 'docs/_build'
no previously-included directories found matching 'docs/_themes/.git'
Downloading/unpacking IMAPClient==0.8 (from -r requirements.txt (line 2))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Ffreshfoo.com%2Fprojects%2FIMAPClient%2FIMAPClient-0.8.zip
Running setup.py egg_info for package IMAPClient
no previously-included directories found matching 'doc/doctrees/'
Downloading/unpacking Jinja2==2.6 (from -r requirements.txt (line 3))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FJ%2FJinja2%2FJinja2-2.6.tar.gz
Running setup.py egg_info for package Jinja2
warning: no previously-included files matching '*' found under directory 'docs/_build'
warning: no previously-included files matching '*.pyc' found under directory 'jinja2'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'jinja2'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
Downloading/unpacking PIL==1.1.7 (from -r requirements.txt (line 4))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Feffbot.org%2Fmedia%2Fdownloads%2FPIL-1.1.7.tar.gz
Running setup.py egg_info for package PIL
WARNING: '' not a valid package name; please use only.-separated package names in setup.py
Downloading/unpacking PyRSS2Gen==1.0.0 (from -r requirements.txt (line 5))
Downloading PyRSS2Gen-1.0.0.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FP%2FPyRSS2Gen%2FPyRSS2Gen-1.0.0.tar.gz
Running setup.py egg_info for package PyRSS2Gen
Downloading/unpacking PyYAML==3.10 (from -r requirements.txt (line 6))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FP%2FPyYAML%2FPyYAML-3.10.tar.gz
Running setup.py egg_info for package PyYAML
Downloading/unpacking Twisted==11.0.0 (from -r requirements.txt (line 7))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FT%2FTwisted%2FTwisted-11.0.0.tar.bz2
Running setup.py egg_info for package Twisted
Downloading/unpacking WebOb==1.1.1 (from -r requirements.txt (line 8))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FW%2FWebOb%2FWebOb-1.1.1.zip
Running setup.py egg_info for package WebOb
no previously-included directories found matching '*.pyc'
no previously-included directories found matching '*.pyo'
Downloading/unpacking Werkzeug==0.8.3 (from -r requirements.txt (line 9))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FW%2FWerkzeug%2FWerkzeug-0.8.3.tar.gz
Running setup.py egg_info for package Werkzeug
warning: no files found matching '*' under directory 'werkzeug/debug/templates'
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
warning: no previously-included files matching '*.pyo' found under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'examples'
warning: no previously-included files matching '*.pyo' found under directory 'examples'
no previously-included directories found matching 'docs/_build'
Downloading/unpacking altgraph==0.7.2 (from -r requirements.txt (line 10))
Downloading altgraph-0.7.2.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fa%2Faltgraph%2Faltgraph-0.7.2.tar.gz
Running setup.py egg_info for package altgraph
warning: no files found matching '*.txt'
warning: no previously-included files matching '.DS_Store' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
Downloading/unpacking apipkg==1.0 (from -r requirements.txt (line 11))
Downloading apipkg-1.0.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fa%2Fapipkg%2Fapipkg-1.0.tar.gz
Running setup.py egg_info for package apipkg
no previously-included directories found matching '.svn'
no previously-included directories found matching '.hg'
Downloading/unpacking bdist-mpkg==0.4.4 (from -r requirements.txt (line 12))
Downloading bdist_mpkg-0.4.4.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fa.pypi.python.org%2Fpackages%2Fsource%2Fb%2Fbdist_mpkg%2Fbdist_mpkg-0.4.4.tar.gz
Running setup.py egg_info for package bdist-mpkg
Downloading/unpacking beautifulsoup4==4.0.1 (from -r requirements.txt (line 13))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fb%2Fbeautifulsoup4%2Fbeautifulsoup4-4.0.1.tar.gz
Running setup.py egg_info for package beautifulsoup4
Downloading/unpacking bonjour-py==0.3 (from -r requirements.txt (line 14))
Could not find any downloads that satisfy the requirement bonjour-py==0.3 (from -r requirements.txt (line 14))
No distributions at all found for bonjour-py==0.3 (from -r requirements.txt (line 14))
Storing complete log in /app/.pip/pip.log
! Heroku push rejected, failed to compile Python app
To git@heroku.com:radiant-night-5176.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'git@heroku.com:radiant-night-5176.git'
```
|
2012/03/29
|
[
"https://Stackoverflow.com/questions/9917628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/723212/"
] |
Are you trying to push the example app from the quickstart? Many of the requirements you're trying to install aren't required at all.
I suspect that you created your requirements file outside of the recommended virtualenv, and that twisted and bonjour-py are packages installed in your system Python installation.
|
I had the same issue with a project referencing `bonjour-py` in its `requirements.txt`, and at the time I didn't know which package pulled it in or how to track it down.
Then someone pointed me to **[pip-tools](https://github.com/nvie/pip-tools)**. It's a great alternative for identifying which packages you actually have and deciding whether you want to update them. And as a bonus, it handled the `bonjour-py` error gracefully.
|
9,917,628
|
I'm attempting to follow Heroku's python quickstart guide but am running into repeated problems. At the moment, "git push heroku master" is failing because it cannot install Bonjour. Does anyone know if this is a truly necessary requirement, and whether I can change the version required, or somehow otherwise fix this? Full text of push follows below.
```
(venv)172-26-12-64:helloflask Spike$ git push heroku master
Counting objects: 488, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (444/444), done.
Writing objects: 100% (488/488), 1.43 MiB, done.
Total 488 (delta 33), reused 0 (delta 0)
-----> Heroku receiving push
-----> Python app detected
-----> Preparing virtualenv version 1.7
New python executable in ./bin/python
Installing distribute.............................................................................................................................................................................................done.
Installing pip...............done.
-----> Activating virtualenv
-----> Installing dependencies using pip version 1.0.2
Downloading/unpacking Flask==0.8 (from -r requirements.txt (line 1))
Creating supposed download cache at /app/tmp/repo.git/.cache/pip_downloads
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FF%2FFlask%2FFlask-0.8.tar.gz
Running setup.py egg_info for package Flask
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
warning: no previously-included files matching '*.pyo' found under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'examples'
warning: no previously-included files matching '*.pyo' found under directory 'examples'
no previously-included directories found matching 'docs/_build'
no previously-included directories found matching 'docs/_themes/.git'
Downloading/unpacking IMAPClient==0.8 (from -r requirements.txt (line 2))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Ffreshfoo.com%2Fprojects%2FIMAPClient%2FIMAPClient-0.8.zip
Running setup.py egg_info for package IMAPClient
no previously-included directories found matching 'doc/doctrees/'
Downloading/unpacking Jinja2==2.6 (from -r requirements.txt (line 3))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FJ%2FJinja2%2FJinja2-2.6.tar.gz
Running setup.py egg_info for package Jinja2
warning: no previously-included files matching '*' found under directory 'docs/_build'
warning: no previously-included files matching '*.pyc' found under directory 'jinja2'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'jinja2'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
Downloading/unpacking PIL==1.1.7 (from -r requirements.txt (line 4))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Feffbot.org%2Fmedia%2Fdownloads%2FPIL-1.1.7.tar.gz
Running setup.py egg_info for package PIL
WARNING: '' not a valid package name; please use only.-separated package names in setup.py
Downloading/unpacking PyRSS2Gen==1.0.0 (from -r requirements.txt (line 5))
Downloading PyRSS2Gen-1.0.0.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FP%2FPyRSS2Gen%2FPyRSS2Gen-1.0.0.tar.gz
Running setup.py egg_info for package PyRSS2Gen
Downloading/unpacking PyYAML==3.10 (from -r requirements.txt (line 6))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FP%2FPyYAML%2FPyYAML-3.10.tar.gz
Running setup.py egg_info for package PyYAML
Downloading/unpacking Twisted==11.0.0 (from -r requirements.txt (line 7))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FT%2FTwisted%2FTwisted-11.0.0.tar.bz2
Running setup.py egg_info for package Twisted
Downloading/unpacking WebOb==1.1.1 (from -r requirements.txt (line 8))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FW%2FWebOb%2FWebOb-1.1.1.zip
Running setup.py egg_info for package WebOb
no previously-included directories found matching '*.pyc'
no previously-included directories found matching '*.pyo'
Downloading/unpacking Werkzeug==0.8.3 (from -r requirements.txt (line 9))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FW%2FWerkzeug%2FWerkzeug-0.8.3.tar.gz
Running setup.py egg_info for package Werkzeug
warning: no files found matching '*' under directory 'werkzeug/debug/templates'
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
warning: no previously-included files matching '*.pyo' found under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'examples'
warning: no previously-included files matching '*.pyo' found under directory 'examples'
no previously-included directories found matching 'docs/_build'
Downloading/unpacking altgraph==0.7.2 (from -r requirements.txt (line 10))
Downloading altgraph-0.7.2.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fa%2Faltgraph%2Faltgraph-0.7.2.tar.gz
Running setup.py egg_info for package altgraph
warning: no files found matching '*.txt'
warning: no previously-included files matching '.DS_Store' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
Downloading/unpacking apipkg==1.0 (from -r requirements.txt (line 11))
Downloading apipkg-1.0.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fa%2Fapipkg%2Fapipkg-1.0.tar.gz
Running setup.py egg_info for package apipkg
no previously-included directories found matching '.svn'
no previously-included directories found matching '.hg'
Downloading/unpacking bdist-mpkg==0.4.4 (from -r requirements.txt (line 12))
Downloading bdist_mpkg-0.4.4.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fa.pypi.python.org%2Fpackages%2Fsource%2Fb%2Fbdist_mpkg%2Fbdist_mpkg-0.4.4.tar.gz
Running setup.py egg_info for package bdist-mpkg
Downloading/unpacking beautifulsoup4==4.0.1 (from -r requirements.txt (line 13))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fb%2Fbeautifulsoup4%2Fbeautifulsoup4-4.0.1.tar.gz
Running setup.py egg_info for package beautifulsoup4
Downloading/unpacking bonjour-py==0.3 (from -r requirements.txt (line 14))
Could not find any downloads that satisfy the requirement bonjour-py==0.3 (from -r requirements.txt (line 14))
No distributions at all found for bonjour-py==0.3 (from -r requirements.txt (line 14))
Storing complete log in /app/.pip/pip.log
! Heroku push rejected, failed to compile Python app
To git@heroku.com:radiant-night-5176.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'git@heroku.com:radiant-night-5176.git'
```
|
2012/03/29
|
[
"https://Stackoverflow.com/questions/9917628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/723212/"
] |
Maybe you are using a virtual environment and you just forgot to activate it.
In this case, depending on how you stored the virtual environment in your project, you will probably first need to
```
source venv/bin/activate
```
and after that freeze your requirements
```
pip freeze > requirements.txt
```
finally
```
git push heroku master
```
|
I had the same issue with a project referencing `bonjour-py` in its `requirements.txt`, and at the time I didn't know which package pulled it in or how to track it down.
Then someone pointed me to **[pip-tools](https://github.com/nvie/pip-tools)**. It's a great alternative for identifying which packages you actually have and deciding whether you want to update them. And as a bonus, it handled the `bonjour-py` error gracefully.
|
6,632,954
|
[Why is else clause needed for try statement in python ?](https://stackoverflow.com/questions/4836139/why-is-else-clause-needed-for-try-statement-in-python)
Taking it forward:
```
try:
f = open('foo', 'r')
except IOError as e:
error_log.write('Unable to open foo : %s\n' % e)
else:
data = f.read()
f.close()
```
It occurs to me that the corner case solved by `else clause` still can be avoided by a `nested try...except` avoiding the need of `else`? :
```
try:
f = open('foo', 'r')
try:
data = f.read()
f.close()
except:
pass
except IOError as e:
error_log.write('Unable to open foo : %s\n' % e)
```
|
2011/07/09
|
[
"https://Stackoverflow.com/questions/6632954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1140524/"
] |
`try..except..else` may not be *needed*, but it can be nice. In this case, the `try..except..else` form is distinctly nicer, in my opinion.
Just because you can do without an element of syntax, doesn't make it useless. Decorator syntax is purely syntax sugar (the most obvious example, I think), `for` loops are just glorified `while` loops, et cetera. There's a good place for `try..except..else` and I would say this is one such place.
Besides, those two blocks of code are far from equivalent. If `f.read()` raises an exception (disk read error, data corruption inside the file or some other such problem), the first will raise an exception as it should but the second will lose it and think that everything has worked. For myself, I would prefer something more along these lines, which is shorter and still easier to understand:
```
try:
with open('foo', 'r') as f:
data = f.read()
except IOError as e:
error_log.write('Unable to open foo : %s\n' % e)
```
(This assumes that you want to catch errors in `file.read` and `file.close`. And I can't really see why you wouldn't.)
|
Actually, it is not always needed; you can simply do:
```
f = None
try:
f = open('foo', 'r')
except IOError:
error_log.write('Unable to open foo\n')
if f:
data = f.read()
f.close()
```
|
70,108,696
|
So I was solving a question from my lab practical syllabus. Below is the question:
>
> Write a python class to reverse a sentence (initialized via
> constructor) word by word. Example: “I am here” should be reversed as
> “here am I”. Create instances of this class for each of the three
> strings input by the user and display the reversed string for each, in
> descending order of number of vowels in the string.
>
>
>
Below is the code implementing the above question:
```
class sentenceReverser:
vowels = ['a','e','i','o','u']
vowelCount =0
sentence=""
reversed_string = ""
def __init__(self,sentence):
self.sentence = sentence
self.reverser()
def reverser(self):
self.reversed_string = " ".join(reversed(self.sentence.split()))
return self.reversed_string
def getVowelCount(self):
for i in self.sentence:
if i.lower() in self.vowels:
self.vowelCount += 1
return self.vowelCount
inp = []
for i in range(2):
temp = input("Enter string:- ")
ob = sentenceReverser(temp)
inp.append(ob)
sorted_item = sorted(inp,key = lambda inp:inp.getVowelCount(),reverse=True)
for i in range (len(sorted_item)):
print('Reversed String: ',sorted_item[i].reverser(),'Vowel count: ',sorted_item[i].getVowelCount())
```
Below is the output I am getting for the above code:
[](https://i.stack.imgur.com/Ksjli.png)
**Issue:**
Could someone tell me why I am getting double the vowel count?
Any help would be appreciated!
|
2021/11/25
|
[
"https://Stackoverflow.com/questions/70108696",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16369432/"
] |
You are calling `getVowelCount()` twice. Instead, you can use the attribute directly in the print statement rather than calling the method again:
```
for i in range (len(sorted_item)):
print('Reversed String: ',sorted_item[i].reverser(),'Vowel count: ',sorted_item[i].vowelCount)
```
|
This is because you don't reset the vowel count in the method. So if you execute the method once (here, in the sort), you'll get the correct count. If you execute it twice (in printing), you will get twice the amount. If you execute it once more, you'll get 3x the correct amount. And so on.
The solution is to reset the number:
```py
def getVowelCount(self):
self.vowelCount = 0
for i in self.sentence:
if i.lower() in self.vowels:
self.vowelCount += 1
return self.vowelCount
```
Or calculate it only once: set it to `None`, compute it only `if self.vowelCount is None`, and otherwise return the existing value.
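A rough sketch of that caching variant (assuming `vowelCount` is initialized to `None` in the class rather than `0`; this is my illustration, not the asker's original code):
```py
def getVowelCount(self):
    # Compute the count only on the first call, then reuse the cached value.
    if self.vowelCount is None:  # assumes the class sets vowelCount = None
        self.vowelCount = sum(1 for c in self.sentence if c.lower() in self.vowels)
    return self.vowelCount
```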
|
2,971,198
|
In one of my Django projects that uses MySQL as the database, I need to have *date* fields that also accept "partial" dates, like only a year (YYYY) or a year and month (YYYY-MM), in addition to normal dates (YYYY-MM-DD).
The *date* field in MySQL can deal with that by accepting *00* for the month and the day. So *2010-00-00* is valid in MySQL and represents 2010. Same thing for *2010-05-00*, which represents May 2010.
So I started to create a `PartialDateField` to support this feature. But I hit a wall because, by default (and Django uses the default), MySQLdb, the Python driver for MySQL, returns a `datetime.date` object for a *date* field, and `datetime.date()` supports only real dates. It is possible to modify the converter for the *date* field used by MySQLdb so it returns only a string in the format *'YYYY-MM-DD'*. Unfortunately, the converter used by MySQLdb is set at the connection level, so it is used for all MySQL *date* fields. And Django's `DateField` relies on the database returning a `datetime.date` object, so if I change the converter to return a string, Django is not happy at all.
Does someone have an idea or advice to solve this problem? How do I create a `PartialDateField` in Django?
EDIT
----
Also, I should add that I already thought of 2 solutions: create 3 integer fields for year, month and day (as mentioned by *Alison R.*), or use a *varchar* field to keep the date as a string in the format *YYYY-MM-DD*.
But in both solutions, if I'm not wrong, I will lose the *special* properties of a *date* field, like doing queries of this kind on them: *Get all entries after this date*. I can probably re-implement this functionality on the client side, but that will not be a valid solution in my case because the database can be queried from other systems (mysql client, MS Access, etc.)
|
2010/06/04
|
[
"https://Stackoverflow.com/questions/2971198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146481/"
] |
You could store the partial date as an integer (preferably in a field named for the portion of the date you are storing, such as `year`, `month` or `day`) and do validation and conversion to a date object in the model.
**EDIT**
If you need real date functionality, you probably need real, not partial, dates. For instance, does "get everything after 2010-0-0" return dates inclusive of 2010 or only dates in 2011 and beyond? The same goes for your other example of May 2010. The ways in which different languages/clients deal with partial dates (if they support them at all) are likely to be highly idiosyncratic, and they are unlikely to match MySQL's implementation.
On the other hand, if you store a `year` integer such as 2010, it is easy to ask the database for "all records with year > 2010" and understand exactly what the result should be, from any client, on any platform. You can even combine this approach for more complicated dates/queries, such as "all records with year > 2010 AND month > 5".
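As a quick sketch of what such queries could look like in Django (the model and field names here are hypothetical, not from the question):
```
# Hypothetical model with integer date-part fields.
from django.db import models

class Entry(models.Model):
    year = models.IntegerField()
    month = models.IntegerField(null=True)
    day = models.IntegerField(null=True)

# "All records with year > 2010 AND month > 5":
recent = Entry.objects.filter(year__gt=2010, month__gt=5)
```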
**SECOND EDIT**
Your only other (and perhaps best) option is to store truly valid dates and come up with a convention in your application for what they mean. A DATETIME field named like `date_month` could have a value of 2010-05-01, but you would treat that as representing all dates in May, 2010. You would need to accommodate this when programming. If you had `date_month` in Python as a datetime object, you would need to call a function like `date_month.end_of_month()` to query dates following that month. (That is pseudocode, but could be easily implemented with something like the [calendar](http://docs.python.org/library/calendar.html) module.)
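For instance, the `end_of_month` pseudocode above could be sketched with the calendar module like this (my own illustration, not part of the original answer):
```
import calendar
from datetime import date

def end_of_month(d):
    # monthrange returns (weekday of the first day, number of days in the month).
    return date(d.year, d.month, calendar.monthrange(d.year, d.month)[1])

print(end_of_month(date(2010, 5, 1)))  # 2010-05-31
```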
|
Can you store the date together with a flag that tells how much of the date is valid?
Something like this:
```
YEAR_VALID = 0x04
MONTH_VALID = 0x02
DAY_VALID = 0x01
Y_VALID = YEAR_VALID
YM_VALID = YEAR_VALID | MONTH_VALID
YMD_VALID = YEAR_VALID | MONTH_VALID | DAY_VALID
```
Then, if you have a date like 2010-00-00, convert that to 2010-01-01 and set the flag to Y\_VALID. If you have a date like 2010-06-00, convert that to 2010-06-01 and set the flag to YM\_VALID.
So, then, `PartialDateField` would be a class that bundles together a date and the date-valid flag described above.
P.S. You don't actually need to use the flags the way I showed it; that's the old C programmer in me coming to the surface. You could use Y\_VALID, YM\_VALID, YMD\_VALID = range(3) and it would work about as well. The key is to have some kind of flag that tells you how much of the date to trust.
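As a minimal sketch of this idea (the names are assumptions; this is an illustration, not a full `PartialDateField`):
```
from datetime import date

Y_VALID, YM_VALID, YMD_VALID = range(3)

class PartialDate:
    """Bundle a real date with a flag saying how much of it to trust."""
    def __init__(self, year, month=None, day=None):
        if day is not None:
            self.valid = YMD_VALID
        elif month is not None:
            self.valid = YM_VALID
        else:
            self.valid = Y_VALID
        # Store a real date, defaulting the unknown parts to 1.
        self.date = date(year, month or 1, day or 1)

print(PartialDate(2010, 6).date)   # 2010-06-01
print(PartialDate(2010).valid)     # 0, i.e. Y_VALID
```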
|
2,971,198
|
In one of my Django projects that uses MySQL as the database, I need to have *date* fields that also accept "partial" dates, like only a year (YYYY) or a year and month (YYYY-MM), in addition to normal dates (YYYY-MM-DD).
The *date* field in MySQL can deal with that by accepting *00* for the month and the day. So *2010-00-00* is valid in MySQL and represents 2010. Same thing for *2010-05-00*, which represents May 2010.
So I started to create a `PartialDateField` to support this feature. But I hit a wall because, by default (and Django uses the default), MySQLdb, the Python driver for MySQL, returns a `datetime.date` object for a *date* field, and `datetime.date()` supports only real dates. It is possible to modify the converter for the *date* field used by MySQLdb so it returns only a string in the format *'YYYY-MM-DD'*. Unfortunately, the converter used by MySQLdb is set at the connection level, so it is used for all MySQL *date* fields. And Django's `DateField` relies on the database returning a `datetime.date` object, so if I change the converter to return a string, Django is not happy at all.
Does someone have an idea or advice to solve this problem? How do I create a `PartialDateField` in Django?
EDIT
----
Also, I should add that I already thought of 2 solutions: create 3 integer fields for year, month and day (as mentioned by *Alison R.*), or use a *varchar* field to keep the date as a string in the format *YYYY-MM-DD*.
But in both solutions, if I'm not wrong, I will lose the *special* properties of a *date* field, like doing queries of this kind on them: *Get all entries after this date*. I can probably re-implement this functionality on the client side, but that will not be a valid solution in my case because the database can be queried from other systems (mysql client, MS Access, etc.)
|
2010/06/04
|
[
"https://Stackoverflow.com/questions/2971198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146481/"
] |
First, thanks for all your answers. None of them, as-is, was a good solution for my problem, but, in your defense, I should add that I didn't give all the requirements. Still, each one helped me think about my problem, and some of your ideas are part of my final solution.
So my final solution, on the DB side, is to use a *varchar* field (limited to 10 chars) and store the date in it, as a string, in the ISO format (YYYY-MM-DD), with *00* for the month and day when there's no month and/or day (like a *date* field in MySQL). This way, the field can work with any database, and the data can be read, understood and edited directly and easily by a human using a simple client (like the mysql client, phpMyAdmin, etc.). That was a requirement. It can also be exported to Excel/CSV without any conversion, etc. The disadvantage is that the format is not enforced (except in Django). Someone could write *'not a date'* or make a mistake in the format and the DB would accept it (if you have an idea about this problem...).
This way it's also possible to do all of the *special* queries of a *date* field relatively easily. For queries with WHERE: <, >, <=, >= and = work directly. The IN and BETWEEN queries work directly also. For querying by day or month you just have to use EXTRACT (DAY|MONTH ...). Ordering also works directly. So I think it covers all the query needs with mostly no complication.
On the Django side, I did 2 things. First, I created a `PartialDate` object that looks mostly like `datetime.date` but supports dates without a month and/or day. Inside this object I use a `datetime.datetime` object to keep the date, using the hours and minutes as flags that tell whether the month and day are valid (they are set to 1 when valid). It's the same idea that *steveha* proposed, but with a different implementation (and only on the client side). Using a `datetime.datetime` object gives me a lot of nice features for working with dates (validation, comparison, etc.).
Secondly, I created a `PartialDateField` that mostly deals with the conversion between the `PartialDate` object and the database.
So far, it works pretty well (I have mostly finished my extensive unit tests).
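To make the idea concrete, here is a minimal sketch of how such a `PartialDate` could use the hours and minutes as validity flags (my own illustration of the description above, not the author's actual implementation):
```
from datetime import datetime

class PartialDate:
    # hour == 1 means the month is valid; minute == 1 means the day is valid.
    def __init__(self, year, month=None, day=None):
        self._dt = datetime(year, month or 1, day or 1,
                            hour=1 if month else 0,
                            minute=1 if day else 0)

    def __str__(self):
        # Render the DB string, with 00 for the unknown parts.
        m = self._dt.month if self._dt.hour else 0
        d = self._dt.day if self._dt.minute else 0
        return '%04d-%02d-%02d' % (self._dt.year, m, d)

print(PartialDate(2010))         # 2010-00-00
print(PartialDate(2010, 5))      # 2010-05-00
print(PartialDate(2010, 5, 17))  # 2010-05-17
```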
|
You could store the partial date as an integer (preferably in a field named for the portion of the date you are storing, such as `year`, `month` or `day`) and do validation and conversion to a date object in the model.
**EDIT**
If you need real date functionality, you probably need real, not partial, dates. For instance, does "get everything after 2010-0-0" return dates inclusive of 2010 or only dates in 2011 and beyond? The same goes for your other example of May 2010. The ways in which different languages/clients deal with partial dates (if they support them at all) are likely to be highly idiosyncratic, and they are unlikely to match MySQL's implementation.
On the other hand, if you store a `year` integer such as 2010, it is easy to ask the database for "all records with year > 2010" and understand exactly what the result should be, from any client, on any platform. You can even combine this approach for more complicated dates/queries, such as "all records with year > 2010 AND month > 5".
**SECOND EDIT**
Your only other (and perhaps best) option is to store truly valid dates and come up with a convention in your application for what they mean. A DATETIME field named like `date_month` could have a value of 2010-05-01, but you would treat that as representing all dates in May, 2010. You would need to accommodate this when programming. If you had `date_month` in Python as a datetime object, you would need to call a function like `date_month.end_of_month()` to query dates following that month. (That is pseudocode, but could be easily implemented with something like the [calendar](http://docs.python.org/library/calendar.html) module.)
|
2,971,198
|
In one of my Django projects that uses MySQL as the database, I need to have *date* fields that also accept "partial" dates, like only a year (YYYY) or a year and month (YYYY-MM), in addition to normal dates (YYYY-MM-DD).
The *date* field in MySQL can deal with that by accepting *00* for the month and the day. So *2010-00-00* is valid in MySQL and represents 2010. Same thing for *2010-05-00*, which represents May 2010.
So I started to create a `PartialDateField` to support this feature. But I hit a wall because, by default (and Django uses the default), MySQLdb, the Python driver for MySQL, returns a `datetime.date` object for a *date* field, and `datetime.date()` supports only real dates. It is possible to modify the converter for the *date* field used by MySQLdb so it returns only a string in the format *'YYYY-MM-DD'*. Unfortunately, the converter used by MySQLdb is set at the connection level, so it is used for all MySQL *date* fields. And Django's `DateField` relies on the database returning a `datetime.date` object, so if I change the converter to return a string, Django is not happy at all.
Does someone have an idea or advice to solve this problem? How do I create a `PartialDateField` in Django?
EDIT
----
Also, I should add that I already thought of 2 solutions: create 3 integer fields for year, month and day (as mentioned by *Alison R.*), or use a *varchar* field to keep the date as a string in the format *YYYY-MM-DD*.
But in both solutions, if I'm not wrong, I will lose the *special* properties of a *date* field, like doing queries of this kind on them: *Get all entries after this date*. I can probably re-implement this functionality on the client side, but that will not be a valid solution in my case because the database can be queried from other systems (mysql client, MS Access, etc.)
|
2010/06/04
|
[
"https://Stackoverflow.com/questions/2971198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146481/"
] |
You could store the partial date as an integer (preferably in a field named for the portion of the date you are storing, such as `year`, `month` or `day`) and do validation and conversion to a date object in the model.
**EDIT**
If you need real date functionality, you probably need real, not partial, dates. For instance, does "get everything after 2010-0-0" return dates inclusive of 2010 or only dates in 2011 and beyond? The same goes for your other example of May 2010. The ways in which different languages/clients deal with partial dates (if they support them at all) are likely to be highly idiosyncratic, and they are unlikely to match MySQL's implementation.
On the other hand, if you store a `year` integer such as 2010, it is easy to ask the database for "all records with year > 2010" and understand exactly what the result should be, from any client, on any platform. You can even combine this approach for more complicated dates/queries, such as "all records with year > 2010 AND month > 5".
**SECOND EDIT**
Your only other (and perhaps best) option is to store truly valid dates and come up with a convention in your application for what they mean. A DATETIME field named like `date_month` could have a value of 2010-05-01, but you would treat that as representing all dates in May, 2010. You would need to accommodate this when programming. If you had `date_month` in Python as a datetime object, you would need to call a function like `date_month.end_of_month()` to query dates following that month. (That is pseudocode, but could be easily implemented with something like the [calendar](http://docs.python.org/library/calendar.html) module.)
|
Although not in Python, here's an example of how the same problem was solved in Ruby, using a single Integer value and bitwise operators to store the year, month and day, with the month and day optional.
<https://github.com/58bits/partial-date>
Look at the source in lib for date.rb and bits.rb.
I'm sure a similar solution could be written in Python.
To persist the date (sortable) you just save the Integer to the database.
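A similar sketch in Python might look like this (the bit layout is an assumption for illustration: 4 bits for the month, 5 for the day, 0 meaning unset):
```
def pack_date(year, month=0, day=0):
    # Year in the high bits, then month (4 bits), then day (5 bits).
    return (year << 9) | (month << 5) | day

def unpack_date(value):
    return value >> 9, (value >> 5) & 0xF, value & 0x1F

v = pack_date(2010, 5)                          # May 2010, day unknown
print(unpack_date(v))                           # (2010, 5, 0)
print(pack_date(2010, 5) < pack_date(2010, 6))  # True: the integers sort by date
```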
|
2,971,198
|
In one of my Django projects that uses MySQL as the database, I need to have *date* fields that also accept "partial" dates, like only a year (YYYY) or a year and month (YYYY-MM), in addition to normal dates (YYYY-MM-DD).
The *date* field in MySQL can deal with that by accepting *00* for the month and the day. So *2010-00-00* is valid in MySQL and represents 2010. Same thing for *2010-05-00*, which represents May 2010.
So I started to create a `PartialDateField` to support this feature. But I hit a wall because, by default (and Django uses the default), MySQLdb, the Python driver for MySQL, returns a `datetime.date` object for a *date* field, and `datetime.date()` supports only real dates. It is possible to modify the converter for the *date* field used by MySQLdb so it returns only a string in the format *'YYYY-MM-DD'*. Unfortunately, the converter used by MySQLdb is set at the connection level, so it is used for all MySQL *date* fields. And Django's `DateField` relies on the database returning a `datetime.date` object, so if I change the converter to return a string, Django is not happy at all.
Does someone have an idea or advice to solve this problem? How do I create a `PartialDateField` in Django?
EDIT
----
Also, I should add that I already thought of 2 solutions: create 3 integer fields for year, month and day (as mentioned by *Alison R.*), or use a *varchar* field to keep the date as a string in the format *YYYY-MM-DD*.
But in both solutions, if I'm not wrong, I will lose the *special* properties of a *date* field, like doing queries of this kind on them: *Get all entries after this date*. I can probably re-implement this functionality on the client side, but that will not be a valid solution in my case because the database can be queried from other systems (mysql client, MS Access, etc.)
|
2010/06/04
|
[
"https://Stackoverflow.com/questions/2971198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146481/"
] |
It sounds like you want to store a date interval. In Python this would (to my still-somewhat-noob understanding) most readily be implemented by storing two datetime.datetime objects, one specifying the start of the date range and the other specifying the end. In a manner similar to that used to specify list slices, the endpoint would not itself be included in the date range.
For example, this code would implement a date range as a named tuple:
```
>>> from datetime import datetime
>>> from collections import namedtuple
>>> DateRange = namedtuple('DateRange', 'start end')
>>> the_year_2010 = DateRange(datetime(2010, 1, 1), datetime(2011, 1, 1))
>>> the_year_2010.start <= datetime(2010, 4, 20) < the_year_2010.end
True
>>> the_year_2010.start <= datetime(2009, 12, 31) < the_year_2010.end
False
>>> the_year_2010.start <= datetime(2011, 1, 1) < the_year_2010.end
False
```
Or even add some magic:
```
>>> DateRange.__contains__ = lambda self, x: self.start <= x < self.end
>>> datetime(2010, 4, 20) in the_year_2010
True
>>> datetime(2011, 4, 20) in the_year_2010
False
```
This is such a useful concept that I'm pretty sure that somebody has already made an implementation available. For example, a quick glance suggests that the `relativedelta` class from the [dateutil](http://pypi.python.org/pypi/python-dateutil/1.5) package will do this, and more expressively, by allowing a 'years' keyword argument to be passed to the constructor.
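For instance (a quick sketch, assuming the dateutil package is installed), the exclusive end of a year-long range can be built like this:
```
>>> from datetime import datetime
>>> from dateutil.relativedelta import relativedelta
>>> datetime(2010, 1, 1) + relativedelta(years=1)
datetime.datetime(2011, 1, 1, 0, 0)
```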
However, mapping such an object into database fields is somewhat more complicated, so you might be better off implementing it simply by just pulling both fields separately and then combining them. I guess this depends on the DB framework; I'm not very familiar with that aspect of Python yet.
In any case, I think the key is to think of a "partial date" as a range rather than as a simple value.
### edit
It's tempting, but I think inappropriate, to add more magic methods that will handle uses of the `>` and `<` operators. There's a bit of ambiguity there: does a date that's "greater than" a given range occur after the range's end, or after its beginning? It initially seems appropriate to use `<=` to indicate that the date on the right-hand side of the equation is after the start of the range, and `<` to indicate that it's after the end.
However, this implies equality between the range and a date within the range, which is incorrect, since it implies that the month of May, 2010 is equal to the year 2010, because May the 4th, 2010 equates to the both of them. IE you would end up with falsisms like `2010-04-20 == 2010 == 2010-05-04` being true.
So probably it would be better to implement a method like `isafterstart` to explicitly check if a date is after the beginning of the range. But again, somebody's probably already done it, so it's probably worth a look on [pypi](http://pypi.python.org) to see what's considered production-ready. This is indicated by the presence of "Development Status :: 5 - Production/Stable" in the "Categories" section of a given module's pypi page. Note that not all modules have been given a development status.
Or you could just keep it simple, and using the basic namedtuple implementation, explicitly check
```
>>> datetime(2012, 12, 21) >= the_year_2010.start
True
```
|
Can you store the date together with a flag that tells how much of the date is valid?
Something like this:
```
YEAR_VALID = 0x04
MONTH_VALID = 0x02
DAY_VALID = 0x01
Y_VALID = YEAR_VALID
YM_VALID = YEAR_VALID | MONTH_VALID
YMD_VALID = YEAR_VALID | MONTH_VALID | DAY_VALID
```
Then, if you have a date like 2010-00-00, convert that to 2010-01-01 and set the flag to Y\_VALID. If you have a date like 2010-06-00, convert that to 2010-06-01 and set the flag to YM\_VALID.
So, then, `PartialDateField` would be a class that bundles together a date and the date-valid flag described above.
P.S. You don't actually need to use the flags the way I showed it; that's the old C programmer in me coming to the surface. You could use Y\_VALID, YM\_VALID, YMD\_VALID = range(3) and it would work about as well. The key is to have some kind of flag that tells you how much of the date to trust.
|
2,971,198
|
In one of my Django projects that uses MySQL as the database, I need to have *date* fields that also accept "partial" dates, like only a year (YYYY) or a year and month (YYYY-MM), in addition to normal dates (YYYY-MM-DD).
The *date* field in MySQL can deal with that by accepting *00* for the month and the day. So *2010-00-00* is valid in MySQL and represents 2010. Same thing for *2010-05-00*, which represents May 2010.
So I started to create a `PartialDateField` to support this feature. But I hit a wall because, by default (and Django uses the default), MySQLdb, the Python driver for MySQL, returns a `datetime.date` object for a *date* field, and `datetime.date()` supports only real dates. It is possible to modify the converter for the *date* field used by MySQLdb so it returns only a string in the format *'YYYY-MM-DD'*. Unfortunately, the converter used by MySQLdb is set at the connection level, so it is used for all MySQL *date* fields. And Django's `DateField` relies on the database returning a `datetime.date` object, so if I change the converter to return a string, Django is not happy at all.
Does someone have an idea or advice to solve this problem? How do I create a `PartialDateField` in Django?
EDIT
----
Also, I should add that I already thought of 2 solutions: create 3 integer fields for year, month and day (as mentioned by *Alison R.*), or use a *varchar* field to keep the date as a string in the format *YYYY-MM-DD*.
But in both solutions, if I'm not wrong, I will lose the *special* properties of a *date* field, like doing queries of this kind on them: *Get all entries after this date*. I can probably re-implement this functionality on the client side, but that will not be a valid solution in my case because the database can be queried from other systems (mysql client, MS Access, etc.)
|
2010/06/04
|
[
"https://Stackoverflow.com/questions/2971198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146481/"
] |
First, thanks for all your answers. None of them, as-is, was a good solution for my problem, but, in your defense, I should add that I didn't give all the requirements. Still, each one helped me think about my problem, and some of your ideas are part of my final solution.
So my final solution, on the DB side, is to use a *varchar* field (limited to 10 chars) and store the date in it, as a string, in the ISO format (YYYY-MM-DD), with *00* for the month and day when there's no month and/or day (like a *date* field in MySQL). This way, the field can work with any database, and the data can be read, understood and edited directly and easily by a human using a simple client (like the mysql client, phpMyAdmin, etc.). That was a requirement. It can also be exported to Excel/CSV without any conversion, etc. The disadvantage is that the format is not enforced (except in Django). Someone could write *'not a date'* or make a mistake in the format and the DB would accept it (if you have an idea about this problem...).
This way it's also possible to do all of the *special* queries of a *date* field relatively easily. For queries with WHERE: <, >, <=, >= and = work directly. The IN and BETWEEN queries work directly also. For querying by day or month you just have to use EXTRACT (DAY|MONTH ...). Ordering also works directly. So I think it covers all the query needs with mostly no complication.
On the Django side, I did 2 things. First, I created a `PartialDate` object that looks mostly like `datetime.date` but supports dates without a month and/or day. Inside this object I use a `datetime.datetime` object to keep the date, using the hours and minutes as flags that tell whether the month and day are valid (they are set to 1 when valid). It's the same idea that *steveha* proposed, but with a different implementation (and only on the client side). Using a `datetime.datetime` object gives me a lot of nice features for working with dates (validation, comparison, etc.).
Secondly, I created a `PartialDateField` that mostly deals with the conversion between the `PartialDate` object and the database.
So far, it works pretty well (I have mostly finished my extensive unit tests).
|
Can you store the date together with a flag that tells how much of the date is valid?
Something like this:
```
YEAR_VALID = 0x04
MONTH_VALID = 0x02
DAY_VALID = 0x01
Y_VALID = YEAR_VALID
YM_VALID = YEAR_VALID | MONTH_VALID
YMD_VALID = YEAR_VALID | MONTH_VALID | DAY_VALID
```
Then, if you have a date like 2010-00-00, convert that to 2010-01-01 and set the flag to Y\_VALID. If you have a date like 2010-06-00, convert that to 2010-06-01 and set the flag to YM\_VALID.
So, then, `PartialDateField` would be a class that bundles together a date and the date-valid flag described above.
P.S. You don't actually need to use the flags the way I showed it; that's the old C programmer in me coming to the surface. You could use Y\_VALID, YM\_VALID, YMD\_VALID = range(3) and it would work about as well. The key is to have some kind of flag that tells you how much of the date to trust.
|
2,971,198
|
In one of my Django projects that uses MySQL as the database, I need to have *date* fields that also accept "partial" dates, like only a year (YYYY) or a year and month (YYYY-MM), in addition to normal dates (YYYY-MM-DD).
The *date* field in MySQL can deal with that by accepting *00* for the month and the day. So *2010-00-00* is valid in MySQL and represents 2010. Same thing for *2010-05-00*, which represents May 2010.
So I started to create a `PartialDateField` to support this feature. But I hit a wall because, by default (and Django uses the default), MySQLdb, the Python driver for MySQL, returns a `datetime.date` object for a *date* field, and `datetime.date()` supports only real dates. It is possible to modify the converter for the *date* field used by MySQLdb so it returns only a string in the format *'YYYY-MM-DD'*. Unfortunately, the converter used by MySQLdb is set at the connection level, so it is used for all MySQL *date* fields. And Django's `DateField` relies on the database returning a `datetime.date` object, so if I change the converter to return a string, Django is not happy at all.
Does someone have an idea or advice to solve this problem? How do I create a `PartialDateField` in Django?
EDIT
----
Also, I should add that I already thought of 2 solutions: create 3 integer fields for year, month and day (as mentioned by *Alison R.*), or use a *varchar* field to keep the date as a string in the format *YYYY-MM-DD*.
But in both solutions, if I'm not wrong, I will lose the *special* properties of a *date* field, like doing queries of this kind on them: *Get all entries after this date*. I can probably re-implement this functionality on the client side, but that will not be a valid solution in my case because the database can be queried from other systems (mysql client, MS Access, etc.)
|
2010/06/04
|
[
"https://Stackoverflow.com/questions/2971198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146481/"
] |
First, thanks for all your answers. None of them, as-is, was a good solution for my problem, but, in your defense, I should add that I didn't give all the requirements. Still, each one helped me think about my problem, and some of your ideas are part of my final solution.
So my final solution, on the DB side, is to use a *varchar* field (limited to 10 chars) and store the date in it, as a string, in the ISO format (YYYY-MM-DD), with *00* for the month and day when there's no month and/or day (like a *date* field in MySQL). This way, the field can work with any database, and the data can be read, understood and edited directly and easily by a human using a simple client (like the mysql client, phpMyAdmin, etc.). That was a requirement. It can also be exported to Excel/CSV without any conversion, etc. The disadvantage is that the format is not enforced (except in Django). Someone could write *'not a date'* or make a mistake in the format and the DB would accept it (if you have an idea about this problem...).
This way it's also possible to do all of the *special* queries of a *date* field relatively easily. For queries with WHERE: <, >, <=, >= and = work directly. The IN and BETWEEN queries work directly also. For querying by day or month you just have to use EXTRACT (DAY|MONTH ...). Ordering also works directly. So I think it covers all the query needs with mostly no complication.
On the Django side, I did 2 things. First, I created a `PartialDate` object that looks mostly like `datetime.date` but supports dates without a month and/or day. Inside this object I use a `datetime.datetime` object to keep the date, using the hours and minutes as flags that tell whether the month and day are valid (they are set to 1 when valid). It's the same idea that *steveha* proposed, but with a different implementation (and only on the client side). Using a `datetime.datetime` object gives me a lot of nice features for working with dates (validation, comparison, etc.).
Secondly, I created a `PartialDateField` that mostly deals with the conversion between the `PartialDate` object and the database.
So far, it works pretty well (I have mostly finished my extensive unit tests).
|
It sounds like you want to store a date interval. In Python this would (to my still-somewhat-noob understanding) most readily be implemented by storing two datetime.datetime objects, one specifying the start of the date range and the other specifying the end. In a manner similar to that used to specify list slices, the endpoint would not itself be included in the date range.
For example, this code would implement a date range as a named tuple:
```
>>> from datetime import datetime
>>> from collections import namedtuple
>>> DateRange = namedtuple('DateRange', 'start end')
>>> the_year_2010 = DateRange(datetime(2010, 1, 1), datetime(2011, 1, 1))
>>> the_year_2010.start <= datetime(2010, 4, 20) < the_year_2010.end
True
>>> the_year_2010.start <= datetime(2009, 12, 31) < the_year_2010.end
False
>>> the_year_2010.start <= datetime(2011, 1, 1) < the_year_2010.end
False
```
Or even add some magic:
```
>>> DateRange.__contains__ = lambda self, x: self.start <= x < self.end
>>> datetime(2010, 4, 20) in the_year_2010
True
>>> datetime(2011, 4, 20) in the_year_2010
False
```
This is such a useful concept that I'm pretty sure that somebody has already made an implementation available. For example, a quick glance suggests that the `relativedelta` class from the [dateutil](http://pypi.python.org/pypi/python-dateutil/1.5) package will do this, and more expressively, by allowing a 'years' keyword argument to be passed to the constructor.
However, mapping such an object into database fields is somewhat more complicated, so you might be better off implementing it simply by just pulling both fields separately and then combining them. I guess this depends on the DB framework; I'm not very familiar with that aspect of Python yet.
In any case, I think the key is to think of a "partial date" as a range rather than as a simple value.
### edit
It's tempting, but I think inappropriate, to add more magic methods that will handle uses of the `>` and `<` operators. There's a bit of ambiguity there: does a date that's "greater than" a given range occur after the range's end, or after its beginning? It initially seems appropriate to use `<=` to indicate that the date on the right-hand side of the equation is after the start of the range, and `<` to indicate that it's after the end.
However, this implies equality between the range and a date within the range, which is incorrect, since it implies that the month of May, 2010 is equal to the year 2010, because May the 4th, 2010 equates to the both of them. IE you would end up with falsisms like `2010-04-20 == 2010 == 2010-05-04` being true.
So probably it would be better to implement a method like `isafterstart` to explicitly check if a date is after the beginning of the range. But again, somebody's probably already done it, so it's probably worth a look on [pypi](http://pypi.python.org) to see what's considered production-ready. This is indicated by the presence of "Development Status :: 5 - Production/Stable" in the "Categories" section of a given module's pypi page. Note that not all modules have been given a development status.
Or you could just keep it simple, and using the basic namedtuple implementation, explicitly check
```
>>> datetime(2012, 12, 21) >= the_year_2010.start
True
```
|
2,971,198
|
In one of my Django projects that uses MySQL as the database, I need to have *date* fields that also accept "partial" dates, like only a year (YYYY) or a year and month (YYYY-MM), in addition to normal dates (YYYY-MM-DD).
The *date* field in MySQL can deal with that by accepting *00* for the month and the day. So *2010-00-00* is valid in MySQL and represents 2010. Same thing for *2010-05-00*, which represents May 2010.
So I started to create a `PartialDateField` to support this feature. But I hit a wall because, by default (and Django uses the default), MySQLdb, the Python driver for MySQL, returns a `datetime.date` object for a *date* field, and `datetime.date()` supports only real dates. It is possible to modify the converter for the *date* field used by MySQLdb so it returns only a string in the format *'YYYY-MM-DD'*. Unfortunately, the converter used by MySQLdb is set at the connection level, so it is used for all MySQL *date* fields. And Django's `DateField` relies on the database returning a `datetime.date` object, so if I change the converter to return a string, Django is not happy at all.
Does someone have an idea or advice to solve this problem? How do I create a `PartialDateField` in Django?
EDIT
----
Also, I should add that I already thought of 2 solutions: create 3 integer fields for year, month and day (as mentioned by *Alison R.*), or use a *varchar* field to keep the date as a string in the format *YYYY-MM-DD*.
But in both solutions, if I'm not wrong, I will lose the *special* properties of a *date* field, like doing queries of this kind on them: *Get all entries after this date*. I can probably re-implement this functionality on the client side, but that will not be a valid solution in my case because the database can be queried from other systems (mysql client, MS Access, etc.)
|
2010/06/04
|
[
"https://Stackoverflow.com/questions/2971198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146481/"
] |
It sounds like you want to store a date interval. In Python this would (to my still-somewhat-noob understanding) most readily be implemented by storing two datetime.datetime objects, one specifying the start of the date range and the other specifying the end. In a manner similar to that used to specify list slices, the endpoint would not itself be included in the date range.
For example, this code would implement a date range as a named tuple:
```
>>> from datetime import datetime
>>> from collections import namedtuple
>>> DateRange = namedtuple('DateRange', 'start end')
>>> the_year_2010 = DateRange(datetime(2010, 1, 1), datetime(2011, 1, 1))
>>> the_year_2010.start <= datetime(2010, 4, 20) < the_year_2010.end
True
>>> the_year_2010.start <= datetime(2009, 12, 31) < the_year_2010.end
False
>>> the_year_2010.start <= datetime(2011, 1, 1) < the_year_2010.end
False
```
Or even add some magic:
```
>>> DateRange.__contains__ = lambda self, x: self.start <= x < self.end
>>> datetime(2010, 4, 20) in the_year_2010
True
>>> datetime(2011, 4, 20) in the_year_2010
False
```
This is such a useful concept that I'm pretty sure that somebody has already made an implementation available. For example, a quick glance suggests that the `relativedelta` class from the [dateutil](http://pypi.python.org/pypi/python-dateutil/1.5) package will do this, and more expressively, by allowing a 'years' keyword argument to be passed to the constructor.
However, mapping such an object into database fields is somewhat more complicated, so you might be better off implementing it simply by just pulling both fields separately and then combining them. I guess this depends on the DB framework; I'm not very familiar with that aspect of Python yet.
In any case, I think the key is to think of a "partial date" as a range rather than as a simple value.
### edit
It's tempting, but I think inappropriate, to add more magic methods that will handle uses of the `>` and `<` operators. There's a bit of ambiguity there: does a date that's "greater than" a given range occur after the range's end, or after its beginning? It initially seems appropriate to use `<=` to indicate that the date on the right-hand side of the equation is after the start of the range, and `<` to indicate that it's after the end.
However, this implies equality between the range and a date within the range, which is incorrect, since it implies that the month of May, 2010 is equal to the year 2010, because May the 4th, 2010 equates to the both of them. IE you would end up with falsisms like `2010-04-20 == 2010 == 2010-05-04` being true.
So probably it would be better to implement a method like `isafterstart` to explicitly check if a date is after the beginning of the range. But again, somebody's probably already done it, so it's probably worth a look on [pypi](http://pypi.python.org) to see what's considered production-ready. This is indicated by the presence of "Development Status :: 5 - Production/Stable" in the "Categories" section of a given module's pypi page. Note that not all modules have been given a development status.
Or you could just keep it simple, and using the basic namedtuple implementation, explicitly check
```
>>> datetime(2012, 12, 21) >= the_year_2010.start
True
```
|
Although not in Python, here's an example of how the same problem was solved in Ruby, using a single Integer value and bitwise operators to store the year, month and day, with the month and day optional.
<https://github.com/58bits/partial-date>
Look at the source in lib for date.rb and bits.rb.
I'm sure a similar solution could be written in Python.
To persist the date (sortable) you just save the Integer to the database.
|
2,971,198
|
In one of my Django projects that uses MySQL as the database, I need to have *date* fields that also accept "partial" dates, like only a year (YYYY) or a year and month (YYYY-MM), in addition to normal dates (YYYY-MM-DD).
The *date* field in MySQL can deal with that by accepting *00* for the month and the day. So *2010-00-00* is valid in MySQL and represents 2010. Same thing for *2010-05-00*, which represents May 2010.
So I started to create a `PartialDateField` to support this feature. But I hit a wall because, by default (and Django uses the default), MySQLdb, the Python driver for MySQL, returns a `datetime.date` object for a *date* field, and `datetime.date()` supports only real dates. It is possible to modify the converter for the *date* field used by MySQLdb so it returns only a string in the format *'YYYY-MM-DD'*. Unfortunately, the converter used by MySQLdb is set at the connection level, so it is used for all MySQL *date* fields. And Django's `DateField` relies on the database returning a `datetime.date` object, so if I change the converter to return a string, Django is not happy at all.
Does someone have an idea or advice to solve this problem? How do I create a `PartialDateField` in Django?
EDIT
----
Also, I should add that I already thought of 2 solutions: create 3 integer fields for year, month and day (as mentioned by *Alison R.*), or use a *varchar* field to keep the date as a string in the format *YYYY-MM-DD*.
But in both solutions, if I'm not wrong, I will lose the *special* properties of a *date* field, like doing queries of this kind on them: *Get all entries after this date*. I can probably re-implement this functionality on the client side, but that will not be a valid solution in my case because the database can be queried from other systems (mysql client, MS Access, etc.)
|
2010/06/04
|
[
"https://Stackoverflow.com/questions/2971198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146481/"
] |
First, thanks for all your answers. None of them, as-is, was a good solution for my problem, but, in your defense, I should add that I didn't give all the requirements. Still, each one helped me think about my problem, and some of your ideas are part of my final solution.
So my final solution, on the DB side, is to use a *varchar* field (limited to 10 chars) and store the date in it, as a string, in the ISO format (YYYY-MM-DD), with *00* for the month and day when there's no month and/or day (like a *date* field in MySQL). This way, the field can work with any database, and the data can be read, understood and edited directly and easily by a human using a simple client (like the mysql client, phpMyAdmin, etc.). That was a requirement. It can also be exported to Excel/CSV without any conversion, etc. The disadvantage is that the format is not enforced (except in Django). Someone could write *'not a date'* or make a mistake in the format and the DB would accept it (if you have an idea about this problem...).
This way it's also possible to do all of the *special* queries of a *date* field relatively easily. For queries with WHERE: <, >, <=, >= and = work directly. The IN and BETWEEN queries work directly also. For querying by day or month you just have to use EXTRACT (DAY|MONTH ...). Ordering also works directly. So I think it covers all the query needs with mostly no complication.
On the Django side, I did 2 things. First, I created a `PartialDate` object that looks mostly like `datetime.date` but supports dates without a month and/or day. Inside this object I use a `datetime.datetime` object to keep the date, using the hours and minutes as flags that tell whether the month and day are valid (they are set to 1 when valid). It's the same idea that *steveha* proposed, but with a different implementation (and only on the client side). Using a `datetime.datetime` object gives me a lot of nice features for working with dates (validation, comparison, etc.).
Secondly, I created a `PartialDateField` that mostly deals with the conversion between the `PartialDate` object and the database.
So far, it works pretty well (I have mostly finished my extensive unit tests).
|
Although not in Python - here's an example of how the same problem was solved in Ruby - using a single Integer value - and bitwise operators to store year, month and day - with month and day optional.
<https://github.com/58bits/partial-date>
Look at the source in lib for date.rb and bits.rb.
I'm sure a similar solution could be written in Python.
To persist the date (sortable) you just save the Integer to the database.
|
50,389,852
|
My Visual Studio Code's Intellisense is not working properly. Every time I try to use it with `Ctrl + Shift`, it only displays a loading message. I'm using Python (with Django) and have installed `ms-python.python`. I also have `Djaneiro`. It is still not working.
[](https://i.stack.imgur.com/2aejV.png)
What seems to be the problem here?
|
2018/05/17
|
[
"https://Stackoverflow.com/questions/50389852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6463232/"
] |
In my case, what worked was reinstalling the **Python extension in VS Code**, which auto-installs **Pylance**.
Then I just hit **Ctrl + ,** on my keyboard to open the VS Code settings,
typed **pylance**,
clicked on "Edit in settings.json", and changed "python.languageServer" to Default:
```
"terminal.integrated.sendKeybindingsToShell": true,
"python.defaultInterpreterPath": "C:source\\env\\Scripts\\python.exe",
"python.disableInstallationCheck": true,
"terminal.integrated.defaultProfile.windows": "Command Prompt",
"python.languageServer": "Default",
"python.analysis.completeFunctionParens": true,
"[jsonc]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
},
"python.autoComplete.addBrackets": true,
"diffEditor.ignoreTrimWhitespace": false
```
I hope this works for you too; otherwise, you can try reinstalling the IntelliCode and Python extensions and then restarting VS Code.
I hope you find this useful :)
|
Assuming you're using `Pylance` as your language server, what worked for me (based on [this](https://github.com/microsoft/pylance-release/issues/275) conversation) was adding
```
"python.analysis.extraPaths": [
"./src/lib"
],
```
(where `./src/lib/` contains your modules and an `__init__.py` file) to your `settings.json`.
**Note:** the setting is called `python.analysis.extraPaths` and not `python.autoComplete.extraPaths`!
|
50,389,852
|
My Visual Studio Code's Intellisense is not working properly. Every time I try to use it with `Ctrl + Shift`, it only displays a loading message. I'm using Python (with Django) and have installed `ms-python.python`. I also have `Djaneiro`. It is still not working.
[](https://i.stack.imgur.com/2aejV.png)
What seems to be the problem here?
|
2018/05/17
|
[
"https://Stackoverflow.com/questions/50389852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6463232/"
] |
"If you find IntelliSense has stopped working, the language service may not be running. Try restarting VS Code and this should solve the issue."
From <https://code.visualstudio.com/docs/editor/intellisense>, and it helped me.
|
In my case there was a problem with the Python language server after a VS Code update: the download of the language server got stuck every time. See [this GitHub issue](https://github.com/microsoft/vscode-python/issues/10854). What I had to do was open the settings and type "language server". There are then a few options like CSS, HTML, etc. (whatever you might have installed).
Select Python and search for "language server". Even though the default is said to be Pylance, I had to explicitly switch it to Pylance and restart the IDE (reload the window).
After this, my IntelliSense worked again.
|
50,389,852
|
My Visual Studio Code's Intellisense is not working properly. Every time I try to use it with `Ctrl + Shift`, it only displays a loading message. I'm using Python (with Django) and have installed `ms-python.python`. I also have `Djaneiro`. It is still not working.
[](https://i.stack.imgur.com/2aejV.png)
What seems to be the problem here?
|
2018/05/17
|
[
"https://Stackoverflow.com/questions/50389852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6463232/"
] |
In my case, what worked was reinstalling the **Python extension in VS Code**, which auto-installs **Pylance**.
Then I just hit **Ctrl + ,** on my keyboard to open the VS Code settings,
typed **pylance**,
clicked on "Edit in settings.json", and changed "python.languageServer" to Default:
```
"terminal.integrated.sendKeybindingsToShell": true,
"python.defaultInterpreterPath": "C:source\\env\\Scripts\\python.exe",
"python.disableInstallationCheck": true,
"terminal.integrated.defaultProfile.windows": "Command Prompt",
"python.languageServer": "Default",
"python.analysis.completeFunctionParens": true,
"[jsonc]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
},
"python.autoComplete.addBrackets": true,
"diffEditor.ignoreTrimWhitespace": false
```
I hope this works for you too; otherwise, you can try reinstalling the IntelliCode and Python extensions and then restarting VS Code.
I hope you find this useful :)
|
What worked for me was to uninstall the Python extension & restart VS Code. IntelliSense
sprang to life! This may not make sense as I left the extension uninstalled. Your comments are welcome!
|
50,389,852
|
My Visual Studio Code's Intellisense is not working properly. Every time I try to use it with `Ctrl + Shift`, it only displays a loading message. I'm using Python (with Django) and have installed `ms-python.python`. I also have `Djaneiro`. It is still not working.
[](https://i.stack.imgur.com/2aejV.png)
What seems to be the problem here?
|
2018/05/17
|
[
"https://Stackoverflow.com/questions/50389852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6463232/"
] |
Installing the `Pylance` add-on caused VS Code IntelliSense to stop working.
Disabling Pylance, enabling the default Microsoft Python extension along with Visual Studio IntelliCode (Microsoft), and reverting to the Jedi language server (as prompted by VS Code) restarted IntelliSense detection.
|
My settings were all in order, but it was still not auto-completing inline.
I disabled the Kite extension and a couple of other ones that I didn't really need, and IntelliSense started auto-completing.
Not 100% sure, but I reckon it might have been my Kite extension settings that were getting in the way.
|
50,389,852
|
My Visual Studio Code's Intellisense is not working properly. Every time I try to use it with `Ctrl + Shift`, it only displays a loading message. I'm using Python (with Django) and have installed `ms-python.python`. I also have `Djaneiro`. It is still not working.
[](https://i.stack.imgur.com/2aejV.png)
What seems to be the problem here?
|
2018/05/17
|
[
"https://Stackoverflow.com/questions/50389852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6463232/"
] |
Installing the `Pylance` add-on caused VS Code IntelliSense to stop working.
Disabling Pylance, enabling the default Microsoft Python extension along with Visual Studio IntelliCode (Microsoft), and reverting to the Jedi language server (as prompted by VS Code) restarted IntelliSense detection.
|
In my case there was a problem with the Python language server after a VS Code update: the download of the language server got stuck every time. See [this GitHub issue](https://github.com/microsoft/vscode-python/issues/10854). What I had to do was open the settings and type "language server". There are then a few options like CSS, HTML, etc. (whatever you might have installed).
Select Python and search for "language server". Even though the default is said to be Pylance, I had to explicitly switch it to Pylance and restart the IDE (reload the window).
After this, my IntelliSense worked again.
|
50,389,852
|
My Visual Studio Code's Intellisense is not working properly. Every time I try to use it with `Ctrl + Shift`, it only displays a loading message. I'm using Python (with Django) and have installed `ms-python.python`. I also have `Djaneiro`. It is still not working.
[](https://i.stack.imgur.com/2aejV.png)
What seems to be the problem here?
|
2018/05/17
|
[
"https://Stackoverflow.com/questions/50389852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6463232/"
] |
In my case, Pylance was stuck at `Loading...` since the workspace contained too many files (a large dataset and many log files). Upon inspecting the Python language server logs:
```
Enumeration of workspace source files is taking longer than 10 seconds.
```
Solution
--------
Add a `pyrightconfig.json` to your workspace root, and create a list of the directories to exclude from the Pylance search (<https://github.com/microsoft/pyright/blob/main/docs/configuration.md>):
```json
{
"exclude": [
"**/data",
"**/.crash_reports",
"**/runs",
"**/wandb",
"**/notebooks",
"**/models",
".git"
]
}
```
Restart the language server
|
I just disabled Pylance in the extensions and suggestions worked again instantly.
|
50,389,852
|
My Visual Studio Code's Intellisense is not working properly. Every time I try to use it with `Ctrl + Shift`, it only displays a loading message. I'm using Python (with Django) and have installed `ms-python.python`. I also have `Djaneiro`. It is still not working.
[](https://i.stack.imgur.com/2aejV.png)
What seems to be the problem here?
|
2018/05/17
|
[
"https://Stackoverflow.com/questions/50389852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6463232/"
] |
"If you find IntelliSense has stopped working, the language service may not be running. Try restarting VS Code and this should solve the issue."
From <https://code.visualstudio.com/docs/editor/intellisense> and helped to me
|
What worked for me was to uninstall the Python extension and restart VS Code. IntelliSense sprang to life! This may not make sense, as I left the extension uninstalled. Your comments are welcome!
|
50,389,852
|
My Visual Studio Code's Intellisense is not working properly. Every time I try to use it with `Ctrl + Shift`, it only displays a loading message. I'm using Python (with Django) and have installed `ms-python.python`. I also have `Djaneiro`. It is still not working.
[](https://i.stack.imgur.com/2aejV.png)
What seems to be the problem here?
|
2018/05/17
|
[
"https://Stackoverflow.com/questions/50389852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6463232/"
] |
If you've tried everything and it still doesn't work: in my case, installing the [Visual Studio IntelliCode](https://marketplace.visualstudio.com/items?itemName=VisualStudioExptTeam.vscodeintellicode) extension together with the [Python](https://marketplace.visualstudio.com/items?itemName=ms-python.python) extension worked.
|
Go to Extensions (Ctrl+Shift+X).
You should have the following extensions installed:
[](https://i.stack.imgur.com/9789s.png)
*Note: some of them may need to be restarted; when that is the case, it is shown on the right side of the extension.*
|
50,389,852
|
My Visual Studio Code's Intellisense is not working properly. Every time I try to use it with `Ctrl + Shift`, it only displays a loading message. I'm using Python (with Django) and have installed `ms-python.python`. I also have `Djaneiro`. It is still not working.
[](https://i.stack.imgur.com/2aejV.png)
What seems to be the problem here?
|
2018/05/17
|
[
"https://Stackoverflow.com/questions/50389852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6463232/"
] |
Installing `Pylance` addon caused Vscode Intellisense to stop.
On disabling Pylance and enabling the `Default Microsoft Python extension` along with `Visual Studio IntelliCode(Microsoft)` and reverting to the `Jedi server`(prompted by Vscode) ,restarted intellisense detection.
|
For me, what fixed it was reinstalling the default Python extension and setting the python language server in the seetings to `Default`.
Steps
=====
1. Open the commands window by simultaneously pressing `CTRL/CMD + Shift + P`. Now search for `install extensions`.
2. Select `Python` and install it (uninstall first if already installed).
* Python is the default Python extension in VSCode and it installs Pylance by default.
3. Open the commands window again and search this time for `Settings`. Select `Prefernces: Open settings (JSON)`.
4. In the JSON file search (`CTRL/CMD + F`) for `python.languageServer` and set the value to `Default` or `Pylance` as the default is currently Pylance (mine was somehow set to `None`!).
5. Once you save the file, a prompt will ask you if you want to reload vscode for the changes to take effect, click `Yes`!
That's it, intelliSense should be working now using Pylance!
|
50,389,852
|
My Visual Studio Code's Intellisense is not working properly. Every time I try to use it with `Ctrl + Shift`, it only displays a loading message. I'm using Python (with Django) and have installed `ms-python.python`. I also have `Djaneiro`. It is still not working.
[](https://i.stack.imgur.com/2aejV.png)
What seems to be the problem here?
|
2018/05/17
|
[
"https://Stackoverflow.com/questions/50389852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6463232/"
] |
First of all if you've installed virtualenv on your project run vscode from there. Then in your vscode settings, i mean settings.json, you can follow my configuration or trace in which one you have a problem. Most of time this problem stem from putting incorrect path in pythonPath setting
```
{
"python.pythonPath": "${workspaceFolder}/env/bin/python3",
"editor.formatOnSave": true,
"python.linting.pep8Enabled": true,
"python.linting.pylintPath": "pylint",
"python.linting.pylintArgs": ["--load-plugins", "pylint_django"],
"python.linting.pylintEnabled": true,
"python.linting.pep8Args": ["--ignore=E501"],
"files.exclude": {
"**/*.pyc": true
}
}
```
**Update:**
If you want to have a well-configuration of Vscode That i personally use for developing my Django and React web-applications you can glance [this](https://github.com/rainyman2012/vscode_django_reactjs_configuration).
|
"If you find IntelliSense has stopped working, the language service may not be running. Try restarting VS Code and this should solve the issue."
From <https://code.visualstudio.com/docs/editor/intellisense> and helped to me
|
54,493,369
|
I am running a DB job using Maven liquibase plugin on circle ci. I need to read the parameters like username,password, dburl etc from AWS Parameter store.But when I try to set the value returned by aws cli to a custom variable, its always blank/empty. I know the value exists because the same command on mac terminal returns a value.
I am using a Bash script to install AWS CLI with circle ci job.When I echo the password with in the .sh file I see the value but when I echo it on my config.yml I see blank empty value. I also try to get the value using aws ssm with the config.yml file but even there the value is blank empty.
My Config.yml
```
version: 2
references:
defaults: &defaults
working_directory: ~/tmp
environment:
PROJECT_NAME: DB Job
build-filters: &filters
filters:
tags:
only: /^v[0-9]{1,2}.[0-9]{1,2}.[0-9]{1,2}-(dev)/
branches:
ignore: /.*/
jobs:
checkout-code:
<<: *defaults
docker:
- image: circleci/openjdk:8-jdk-node
steps:
- attach_workspace:
at: ~/tmp
- checkout
- restore_cache:
key: cache-{{ checksum "pom.xml" }}
- save_cache:
paths:
- ~/.m2
key: cache-{{ checksum "pom.xml" }}
- persist_to_workspace:
root: ~/tmp
paths: ./
build-app:
<<: *defaults
docker:
- image: circleci/openjdk:8-jdk-node
steps:
- attach_workspace:
at: ~/tmp
- restore_cache:
key: cache-{{ checksum "pom.xml" }}
- run: chmod 700 resources/circleci/*.sh
- run:
name: Getting DB password
command: resources/circleci/env-setup.sh
- run: echo 'export ENV="$(echo $CIRCLE_TAG | cut -d '-' -f 2)"' >> $BASH_ENV
- run: echo $ENV
- run: echo $dbPasswordDev
- run: export PASS=$(aws ssm get-parameters --names "/enterprise/org/dev/spring.datasource.password" --with-decryption --query "Parameters[0].Value" | tr -d '"') >> $BASH_ENV
- run: echo $PASS
- run: mvn resources:resources liquibase:update -P$ENV,pre-release
workflows:
version: 2
build-deploy:
jobs:
- checkout-code:
<<: *filters
- build-app:
requires:
- checkout-code
<<: *filters
```
env-setup.sh
```
sudo apt-get update
sudo apt-get install -y python-pip python-dev
sudo pip install awscli
aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
aws configure set aws_region $AWS_DEFAULT_REGION
dbPassword=`aws ssm get-parameters --names "/enterprise/org/dev/spring.datasource.password" --with-decryption --query "Parameters[0].Value" | tr -d '"' `
echo "dbPassword = ${dbPassword}"
export dbPasswordDev=$dbPassword >> $BASH_ENV
echo $"Custom = $dbPasswordDev"
```
When I echo $dbPasswordDev with in env-set.sh I see the value however in config.yml I don't see the value and I see blank/empty string. Also when I try to echo $PASS with in config.yml I expect to see the value however I see blank empty string
|
2019/02/02
|
[
"https://Stackoverflow.com/questions/54493369",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3553141/"
] |
According to
[their official documentation](https://circleci.com/docs/2.0/env-vars/), you need to **echo** the "export foo=bar" into $BASH\_ENV (which is just a file that runs when the a bash session starts):
so in your env-setup.sh file:
```sh
echo "export dbPasswordDev=$dbPassword" >> $BASH_ENV
```
|
Here is a more advanced case where variables can be loaded from `.env` file.
```
version: 2
aliases:
- &step_process_dotenv
run:
name: Process .env file variables
command: echo "export $(grep -v '^#' .env | xargs)" >> $BASH_ENV
jobs:
build:
working_directory: ~/project
docker:
- image: php
steps:
- checkout
- *step_process_dotenv
```
Source repository with tests: <https://github.com/integratedexperts/circleci-sandbox/tree/feature/dotenv>
CI run result: <https://circleci.com/gh/integratedexperts/circleci-sandbox/14>
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.