| qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string) |
|---|---|---|---|---|---|
20,383,924
|
What I am trying to do is access the traffic meter data on my local netgear router. It's easy enough to log in to it and click on the link, but ideally I would like a little app that sits down in the system tray (windows) that I can check whenever I want to see what my network traffic is.
I'm using python to try to access the router's web page, but I've run into some snags. I originally tried modifying a script that would reboot the router (found here <https://github.com/ncw/router-rebooter/blob/master/router_rebooter.py>), but it just serves up the raw html and I need it after the onload javascript functions have run. This type of thing is described in many posts about web scraping, and people suggested using selenium.
I tried selenium and have run into two problems. First, it actually opens the browser window, which is not what I want. Second, it skips the stuff I put in to pass the HTTP authentication and pops up the login window anyway. Here is the code:
```
from selenium import webdriver
baseAddress = '192.168.1.1'
baseURL = 'http://%(user)s:%(pwd)s@%(host)s/traffic_meter.htm'
username = 'admin'
pwd = 'thisisnotmyrealpassword'
url = baseURL % {
    'user': username,
    'pwd': pwd,
    'host': baseAddress
}
profile = webdriver.FirefoxProfile()
profile.set_preference('network.http.phishy-userpass-length', 255)
driver = webdriver.Firefox(firefox_profile=profile)
driver.get(url)
```
So, my question is, what is the best way to accomplish what I want without having it launch a visible web browser window?
**Update:**
Okay, I tried sircapsalot's suggestion and modified the script to this:
```
from selenium import webdriver
from contextlib import closing
url = 'http://admin:notmyrealpassword@192.168.1.1/start.htm'
with closing(webdriver.Remote(desired_capabilities = webdriver.DesiredCapabilities.HTMLUNIT)) as driver:
driver.get(url)
print(driver.page_source)
```
This fixes the web browser being loaded, but it failed the authentication. Any suggestions?
|
2013/12/04
|
[
"https://Stackoverflow.com/questions/20383924",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1777330/"
] |
Okay, I found the solution and it was way easier than I thought. I did try John1024's suggestion and was able to download the proper webpage from the router using wget. However I didn't like the fact that wget saved the result to a file, which I would then have to open and parse.
I ended up going back to the original reboot_router.py script I had attempted to modify unsuccessfully the first time. My problem was I was trying to make it too complicated. This is the final script I ended up using:
```
import urllib2
user = 'admin'
pwd = 'notmyrealpassword'
host = '192.168.1.1'
url = 'http://' + host + '/traffic_meter_2nd.htm'
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, host, user, pwd)
authhandler = urllib2.HTTPBasicAuthHandler(passman)
opener = urllib2.build_opener(authhandler)
response = opener.open(url)
stuff = response.read()
response.close()
print stuff
```
This prints out the entire traffic meter webpage from my router, with its proper values loaded. I can then take this and parse the values out of it. The nice thing about this is it has no external dependencies like selenium, wget or other libraries that need to be installed. Clean is good.
Thank you, everyone, for your suggestions. I wouldn't have gotten to this answer without them.
|
The web interface for my Netgear router (WNDR3700) is also filled with javascript. Yours may differ but I have found that my scripts can get all the info they need without javascript.
The first step is finding the correct URL. Using FireFox, I went to the traffic page and then used "This Frame -> Show only this frame" to discover that the URL for the traffic page on my router is:
```
http://my_router_address/traffic.htm
```
After finding this URL, no web browser and no javascript are needed. I can, for example, capture this page with `wget`:
```
wget http://my_router_address/traffic.htm
```
Using a text editor on the resulting traffic.htm file, I see that the traffic data is available in a lengthy block that starts:
```
var traffic_today_time="1486:37";
var traffic_today_up="1,959";
var traffic_today_down="1,945";
var traffic_today_total="3,904";
. . . .
```
Thus, the `traffic.htm` file can be easily captured and parsed with the scripting language of your choice. No javascript ever needs to be executed.
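For example, here is a minimal parsing sketch (my own illustration, not from the original answer; it assumes the `var name="value";` block looks exactly like the snippet above and that the page was saved as `traffic.htm`):
```
import re

with open('traffic.htm') as f:
    page = f.read()

# Collect every  var some_name="some_value";  pair into a dict
traffic = dict(re.findall(r'var (\w+)="([^"]*)";', page))
print(traffic['traffic_today_total'])  # e.g. 3,904
```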
UPDATE: I have a `~/.netrc` file with a line in it like:
```
machine my_router_address login someloginname password somepassword
```
Before `wget` downloads from the router, it retrieves the login info from this file. This has security advantages. If one runs `wget http://user:password@...`, then the password is visible to everyone on your machine via the process list (`ps a`). Using `.netrc`, this never happens. Restrictive permissions can be set on `.netrc`, e.g. readable only by the user (`chmod 400 ~/.netrc`).
|
29,516,084
|
Getting the following error when trying to install Pandas (0.16.0), which is in my requirements.txt file, on an AWS Elastic Beanstalk EC2 instance:
```
building 'pandas.msgpack' extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -D__LITTLE_ENDIAN__=1 -Ipandas/src/klib -Ipandas/src -I/opt/python/run/venv/local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c pandas/msgpack.cpp -o build/temp.linux-x86_64-2.7/pandas/msgpack.o
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
error: command 'gcc' failed with exit status 1
```
I'm running on `64bit Amazon Linux 2015.03 v1.3.0 running Python 2.7`. I previously ran into this same error on a t1.micro instance, which was resolved when I changed to an m3.medium, but I'm now running an m3.xlarge so it can't be a memory issue.
I have also ensured that gcc is installed as a package in `.ebextensions/00_gcc.config`:
```
packages:
yum:
gcc: []
gcc-c++: []
```
|
2015/04/08
|
[
"https://Stackoverflow.com/questions/29516084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1177562/"
] |
To get pandas to compile on Elastic Beanstalk, make sure you have both packages: `gcc-c++` *and* `python-devel`
```
packages:
yum:
gcc-c++: []
python-devel: []
```
|
Install `python-dev`
```
sudo apt-get install python-dev
```
For `python3`
```
sudo apt-get install python3-dev
```
|
29,516,084
|
Getting the following error when trying to install Pandas (0.16.0), which is in my requirements.txt file, on an AWS Elastic Beanstalk EC2 instance:
```
building 'pandas.msgpack' extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -D__LITTLE_ENDIAN__=1 -Ipandas/src/klib -Ipandas/src -I/opt/python/run/venv/local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c pandas/msgpack.cpp -o build/temp.linux-x86_64-2.7/pandas/msgpack.o
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
error: command 'gcc' failed with exit status 1
```
I'm running on `64bit Amazon Linux 2015.03 v1.3.0 running Python 2.7`. I previously ran into this same error on a t1.micro instance, which was resolved when I changed to an m3.medium, but I'm now running an m3.xlarge so it can't be a memory issue.
I have also ensured that gcc is installed as a package in `.ebextensions/00_gcc.config`:
```
packages:
yum:
gcc: []
gcc-c++: []
```
|
2015/04/08
|
[
"https://Stackoverflow.com/questions/29516084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1177562/"
] |
To get pandas to compile on Elastic Beanstalk, make sure you have both packages: `gcc-c++` *and* `python-devel`
```
packages:
yum:
gcc-c++: []
python-devel: []
```
|
On EC2 instances, if you run into a gcc error, try this:
1. `sudo yum install gcc python-setuptools python-devel postgresql-devel`
2. `sudo su -`
3. `sudo pip install`
|
29,516,084
|
Getting the following error when trying to install Pandas (0.16.0), which is in my requirements.txt file, on an AWS Elastic Beanstalk EC2 instance:
```
building 'pandas.msgpack' extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -D__LITTLE_ENDIAN__=1 -Ipandas/src/klib -Ipandas/src -I/opt/python/run/venv/local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c pandas/msgpack.cpp -o build/temp.linux-x86_64-2.7/pandas/msgpack.o
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
error: command 'gcc' failed with exit status 1
```
I'm running on `64bit Amazon Linux 2015.03 v1.3.0 running Python 2.7`. I previously ran into this same error on a t1.micro instance, which was resolved when I changed to an m3.medium, but I'm now running an m3.xlarge so it can't be a memory issue.
I have also ensured that gcc is installed as a package in `.ebextensions/00_gcc.config`:
```
packages:
yum:
gcc: []
gcc-c++: []
```
|
2015/04/08
|
[
"https://Stackoverflow.com/questions/29516084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1177562/"
] |
To get pandas to compile on Elastic Beanstalk, make sure you have both packages: `gcc-c++` *and* `python-devel`
```
packages:
yum:
gcc-c++: []
python-devel: []
```
|
I had to upgrade amazon's EC2 pip. You can do this by editing the .config file in .ebextensions:
```
commands:
  00_update_pip:
    command: "/opt/python/run/venv/bin/pip install --upgrade pip"
```
|
29,516,084
|
Getting the following error when trying to install Pandas (0.16.0), which is in my requirements.txt file, on an AWS Elastic Beanstalk EC2 instance:
```
building 'pandas.msgpack' extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -D__LITTLE_ENDIAN__=1 -Ipandas/src/klib -Ipandas/src -I/opt/python/run/venv/local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c pandas/msgpack.cpp -o build/temp.linux-x86_64-2.7/pandas/msgpack.o
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
error: command 'gcc' failed with exit status 1
```
I'm running on `64bit Amazon Linux 2015.03 v1.3.0 running Python 2.7`. I previously ran into this same error on a t1.micro instance, which was resolved when I changed to an m3.medium, but I'm now running an m3.xlarge so it can't be a memory issue.
I have also ensured that gcc is installed as a package in `.ebextensions/00_gcc.config`:
```
packages:
yum:
gcc: []
gcc-c++: []
```
|
2015/04/08
|
[
"https://Stackoverflow.com/questions/29516084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1177562/"
] |
To get pandas to compile on Elastic Beanstalk, make sure you have both packages: `gcc-c++` *and* `python-devel`
```
packages:
yum:
gcc-c++: []
python-devel: []
```
|
I solved this issue by ssh'ing into the Elastic Beanstalk machine and updating pip:
```
pip install -U pip
```
|
68,341,995
|
I have a python file that basically runs a main() function which scrapes data for a given keyword.
I am trying to do multiprocessing, so my main section in my Jupyter Notebook looks like this:
```
p1 = multiprocessing.Process(target=main, args=['cosmetics'])
p2 = multiprocessing.Process(target=main, args=['airpod pro case'])
if __name__ == '__main__':
    p1.start()
    p2.start()
```
Now this runs perfectly: it scrapes data given a keyword, saves it to a csv file (in this case, 'cosmetics.csv' and 'airpod pro case.csv'), then calls another function to go through each column that contains a url from the saved csv file.
However, I wanted to run this on my terminal/cmd and I changed the above code to:
```
def multi_process(item_1, item_2):
    p1 = multiprocessing.Process(target=main, args=[item_1])
    p2 = multiprocessing.Process(target=main, args=[item_2])
    #p3 = multiprocessing.Process(target=main, args=['hair dryer'])
    if __name__ == '__main__':
        p1.start()
        p2.start()
        #p3.start()

item_1 = sys.argv[1]
item_2 = sys.argv[2]
multi_process(item_1, item_2)
```
then I saved the file as a .py file and ran this line on my terminal/cmd
```
> python3 /Users/Name/Desktop/DE/Scrape.py "cosmetics" "airpod pro case"
FileNotFoundError: [Errno 2] File b'/Users/Name/Desktop/DE/cosmetics 1.csv' does not exist: b'/Users/Name/Desktop/DE/cosmetics 1.csv'
FileNotFoundError: [Errno 2] File b'/Users/Name/Desktop/DE/airpod pro case 1.csv' does not exist: b'/Users/Name/Desktop/DE/airpod pro case 1.csv'
```
and I get this error saying that it can't find the csv files, which led me to check the folder and find out that the csv files are not being saved as they should be.
Does running a python file on terminal/cmd not allow saving csv files?
I can't seem to figure out what the problem is.
|
2021/07/12
|
[
"https://Stackoverflow.com/questions/68341995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16404222/"
] |
No, it's not that running the module from a terminal won't save files; that's actually the way you run python modules. Check whether your main() is actually writing the CSV file or not.
|
The only question I saw in here was `Does running a python file on terminal/cmd not allow saving csv files?` and the answer is no... it's fine to save csv files from the command line:
```
open("output.csv","w") as f:
f.write("this,is\na,csv")
```
|
68,341,995
|
I have a python file that basically runs a main() function which scrapes data for a given keyword.
I am trying to do multiprocessing, so my main section in my Jupyter Notebook looks like this:
```
p1 = multiprocessing.Process(target=main, args=['cosmetics'])
p2 = multiprocessing.Process(target=main, args=['airpod pro case'])
if __name__ == '__main__':
    p1.start()
    p2.start()
```
Now this runs perfectly: it scrapes data given a keyword, saves it to a csv file (in this case, 'cosmetics.csv' and 'airpod pro case.csv'), then calls another function to go through each column that contains a url from the saved csv file.
However, I wanted to run this on my terminal/cmd and I changed the above code to:
```
def multi_process(item_1, item_2):
    p1 = multiprocessing.Process(target=main, args=[item_1])
    p2 = multiprocessing.Process(target=main, args=[item_2])
    #p3 = multiprocessing.Process(target=main, args=['hair dryer'])
    if __name__ == '__main__':
        p1.start()
        p2.start()
        #p3.start()

item_1 = sys.argv[1]
item_2 = sys.argv[2]
multi_process(item_1, item_2)
```
then I saved the file as a .py file and ran this line on my terminal/cmd
```
> python3 /Users/Name/Desktop/DE/Scrape.py "cosmetics" "airpod pro case"
FileNotFoundError: [Errno 2] File b'/Users/Name/Desktop/DE/cosmetics 1.csv' does not exist: b'/Users/Name/Desktop/DE/cosmetics 1.csv'
FileNotFoundError: [Errno 2] File b'/Users/Name/Desktop/DE/airpod pro case 1.csv' does not exist: b'/Users/Name/Desktop/DE/airpod pro case 1.csv'
```
and I get this error saying that it can't find the csv files, which led me to check the folder and find out that the csv files are not being saved as they should be.
Does running a python file on terminal/cmd not allow saving csv files?
I can't seem to figure out what the problem is.
|
2021/07/12
|
[
"https://Stackoverflow.com/questions/68341995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16404222/"
] |
My bad guys, it was a stupid mistake on my part. I didn't actually do cd Desktop/DE to go to the DE folder in my terminal/cmd; I ran the code from a freshly opened terminal/cmd, so the files were automatically saved somewhere else.
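For anyone hitting the same thing, a small sketch of the usual guard (my addition; the `' 1.csv'` suffix is inferred from the error messages above): build output paths from the script's own location instead of the current working directory:
```
import os

# Directory containing this script, regardless of where it was launched from
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

def save_path(keyword):
    # e.g. /Users/Name/Desktop/DE/cosmetics 1.csv, independent of the cwd
    return os.path.join(BASE_DIR, keyword + ' 1.csv')
```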
|
The only question I saw in here was `Does running a python file on terminal/cmd not allow saving csv files?` and the answer is no... it's fine to save csv files from the command line:
```
open("output.csv","w") as f:
f.write("this,is\na,csv")
```
|
5,436,227
|
I have a PayPal sandbox area.
My language is python - Django, and I use django-paypal.
The IPN test on my server works, but when someone tries to buy something, after the PayPal sandbox process I don't receive the signal and I don't see the transaction in my paypal_ipn table.
So the problem is that I don't receive the signal.
This is my signal code in models.py:
This is my signal code in models.py
```
from paypal.standard.ipn.signals import payment_was_successful
def show_me_the_money(sender, **kwargs):
    code = sender.item_number
    type, number_product, pagamento_corso_id = code.split('_')
    obj = get_object_or_404(PagamentoCorso, int(pagamento_corso_id))
    obj.pagamento = True
    obj.save()
payment_was_successful.connect(show_me_the_money)
```
Please help me because it has been 7 days... and I'm very frustrated! :-)
|
2011/03/25
|
[
"https://Stackoverflow.com/questions/5436227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/677220/"
] |
I think you can use the PayPal test tools to send fake notifications to your notify URL. That might make it easier to debug.
<http://www.friendly-stranger.com/pictures/paypal.jpg>
What you can also do is run the tests that come with django-paypal. It should be something like
```
python manage.py test paypal
```
You can also take a look at the test code and create your own tests to debug your problem.
If you still can't figure it out post your url configuration and the view that passes the IPN form to the template.
|
I had a similar problem. In my case, I could see from access logs that PayPal *is* accessing my payment notification URL, and the requests are 200 OK, but no signals triggered on Django side.
Turned out that payments had "Pending" status instead of "Completed" (and I wasn't listening on `paypal.standard.ipn.signals.payment_was_flagged` signal).
The reason my payments were flagged, was incorrect `settings.PAYPAL_RECEIVER_EMAIL` and incorrect `paypal_dict["business"]` email addresses. Exact same issue [as for this guy](http://jayngo.blogspot.com/2011/04/using-django-paypal.html).
|
5,436,227
|
I have a PayPal sandbox area.
My language is python - Django, and I use django-paypal.
The IPN test on my server works, but when someone tries to buy something, after the PayPal sandbox process I don't receive the signal and I don't see the transaction in my paypal_ipn table.
So the problem is that I don't receive the signal.
This is my signal code in models.py:
This is my signal code in models.py
```
from paypal.standard.ipn.signals import payment_was_successful
def show_me_the_money(sender, **kwargs):
    code = sender.item_number
    type, number_product, pagamento_corso_id = code.split('_')
    obj = get_object_or_404(PagamentoCorso, int(pagamento_corso_id))
    obj.pagamento = True
    obj.save()
payment_was_successful.connect(show_me_the_money)
```
Please help me because it has been 7 days... and I'm very frustrated! :-)
|
2011/03/25
|
[
"https://Stackoverflow.com/questions/5436227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/677220/"
] |
I think you can use the PayPal test tools to send fake notifications to your notify URL. That might make it easier to debug.
<http://www.friendly-stranger.com/pictures/paypal.jpg>
What you can also do is run the tests that come with django-paypal. It should be something like
```
python manage.py test paypal
```
You can also take a look at the test code and create your own tests to debug your problem.
If you still can't figure it out post your url configuration and the view that passes the IPN form to the template.
|
In my case the reason was trivial: I had added the signal handler code to `views.py`, which is not loaded automatically, so the handler was never actually connected to the signal. The simplest but ugly solution is to move it to `models.py`; the recommended approach is to use Django's [ready()](https://docs.djangoproject.com/en/dev/ref/applications/#django.apps.AppConfig.ready) method.
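A minimal sketch of the recommended approach (my illustration; the app name `payments` and the `handlers` module are placeholders):
```
# payments/apps.py
from django.apps import AppConfig

class PaymentsConfig(AppConfig):
    name = 'payments'

    def ready(self):
        # Connect the handler once the app registry is fully loaded
        from paypal.standard.ipn.signals import payment_was_successful
        from .handlers import show_me_the_money
        payment_was_successful.connect(show_me_the_money)
```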
|
5,436,227
|
I have a PayPal sandbox area.
My language is python - Django, and I use django-paypal.
The IPN test on my server works, but when someone tries to buy something, after the PayPal sandbox process I don't receive the signal and I don't see the transaction in my paypal_ipn table.
So the problem is that I don't receive the signal.
This is my signal code in models.py:
This is my signal code in models.py
```
from paypal.standard.ipn.signals import payment_was_successful
def show_me_the_money(sender, **kwargs):
    code = sender.item_number
    type, number_product, pagamento_corso_id = code.split('_')
    obj = get_object_or_404(PagamentoCorso, int(pagamento_corso_id))
    obj.pagamento = True
    obj.save()
payment_was_successful.connect(show_me_the_money)
```
Please help me because it has been 7 days... and I'm very frustrated! :-)
|
2011/03/25
|
[
"https://Stackoverflow.com/questions/5436227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/677220/"
] |
I had a similar problem. In my case, I could see from access logs that PayPal *is* accessing my payment notification URL, and the requests are 200 OK, but no signals triggered on Django side.
Turned out that payments had "Pending" status instead of "Completed" (and I wasn't listening on `paypal.standard.ipn.signals.payment_was_flagged` signal).
The reason my payments were flagged, was incorrect `settings.PAYPAL_RECEIVER_EMAIL` and incorrect `paypal_dict["business"]` email addresses. Exact same issue [as for this guy](http://jayngo.blogspot.com/2011/04/using-django-paypal.html).
|
In my case the reason was trivial: I had added the signal handler code to `views.py`, which is not loaded automatically, so the handler was never actually connected to the signal. The simplest but ugly solution is to move it to `models.py`; the recommended approach is to use Django's [ready()](https://docs.djangoproject.com/en/dev/ref/applications/#django.apps.AppConfig.ready) method.
|
12,138,298
|
I am trying to use `with open()` with Python 2.6 and it gives a syntax error, while it works fine with Python 2.7.3.
Am I missing something, or some import, to make my program work?
Any help would be appreciated.
Br
My code is here:
```
def compare_some_text_of_a_file(self, exportfileTransferFolder, exportfileCheckFilesFolder):
    flag = 0
    error = ""
    with open("check_files/"+exportfileCheckFilesFolder+".txt") as f1, open("transfer-out/"+exportfileTransferFolder) as f2:
        if f1.read().strip() in f2.read():
            print ""
        else:
            flag = 1
            error = exportfileCheckFilesFolder
            error = "Data of file " + error + " do not match with exported data\n"
    if flag == 1:
        raise AssertionError(error)
```
|
2012/08/27
|
[
"https://Stackoverflow.com/questions/12138298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/864924/"
] |
The `with open()` statement is supported in Python 2.6; you must have a different error.
See [PEP 343](http://www.python.org/dev/peps/pep-0343/) and the python [File Objects documentation](http://docs.python.org/library/stdtypes.html#file-objects) for the details.
Quick demo:
```
Python 2.6.8 (unknown, Apr 19 2012, 01:24:00)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> with open('/tmp/test/a.txt') as f:
... print f.readline()
...
foo
>>>
```
You are trying to use the `with` statement with multiple context managers though, which was only [added in Python 2.7](http://docs.python.org/reference/compound_stmts.html#grammar-token-with_stmt):
>
> Changed in version 2.7: Support for multiple context expressions.
>
>
>
Use nested statements instead in 2.6:
```
with open("check_files/"+exportfileCheckFilesFolder+".txt") as f1:
with open("transfer-out/"+exportfileTransferFolder) as f2:
# f1 and f2 are now both open.
```
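As an aside (my addition, not part of the original answer): Python 2.6 also ships `contextlib.nested`, which emulates multiple context managers in one `with`. It was deprecated in 2.7 partly because the inner managers are constructed before `nested()` can enter them, so a failure in the second `open()` would leak the first file; the plain nesting above avoids that:
```
from contextlib import nested  # available in 2.6, deprecated in 2.7

with nested(open("check_files/a.txt"), open("transfer-out/b")) as (f1, f2):
    print f1.read().strip() in f2.read()
```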
|
The `with open()` syntax is supported by Python 2.6. On Python 2.4 it is not supported and gives a syntax error. If you need to support Python 2.4, I would suggest something like:
```
def readfile(filename, mode='r'):
    f = open(filename, mode)
    try:
        for line in f:
            yield line
    finally:
        f.close()

for line in readfile(myfile):
    print line
```
|
12,138,298
|
I am trying to use `with open()` with Python 2.6 and it gives a syntax error, while it works fine with Python 2.7.3.
Am I missing something, or some import, to make my program work?
Any help would be appreciated.
Br
My code is here:
```
def compare_some_text_of_a_file(self, exportfileTransferFolder, exportfileCheckFilesFolder):
    flag = 0
    error = ""
    with open("check_files/"+exportfileCheckFilesFolder+".txt") as f1, open("transfer-out/"+exportfileTransferFolder) as f2:
        if f1.read().strip() in f2.read():
            print ""
        else:
            flag = 1
            error = exportfileCheckFilesFolder
            error = "Data of file " + error + " do not match with exported data\n"
    if flag == 1:
        raise AssertionError(error)
```
|
2012/08/27
|
[
"https://Stackoverflow.com/questions/12138298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/864924/"
] |
The `with open()` statement is supported in Python 2.6; you must have a different error.
See [PEP 343](http://www.python.org/dev/peps/pep-0343/) and the python [File Objects documentation](http://docs.python.org/library/stdtypes.html#file-objects) for the details.
Quick demo:
```
Python 2.6.8 (unknown, Apr 19 2012, 01:24:00)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> with open('/tmp/test/a.txt') as f:
... print f.readline()
...
foo
>>>
```
You are trying to use the `with` statement with multiple context managers though, which was only [added in Python 2.7](http://docs.python.org/reference/compound_stmts.html#grammar-token-with_stmt):
>
> Changed in version 2.7: Support for multiple context expressions.
>
>
>
Use nested statements instead in 2.6:
```
with open("check_files/"+exportfileCheckFilesFolder+".txt") as f1:
with open("transfer-out/"+exportfileTransferFolder) as f2:
# f1 and f2 are now both open.
```
|
It is the "extended" `with` statement with multiple context expressions which causes your trouble.
In 2.6, instead of
```
with open(...) as f1, open(...) as f2:
    do_stuff()
```
you should add a nesting level and write
```
with open(...) as f1:
    with open(...) as f2:
        do_stuff()
```
[The docs](http://docs.python.org/reference/compound_stmts.html#the-with-statement) say
>
> *Changed in version 2.7:* Support for multiple context expressions.
>
>
>
|
12,138,298
|
I am trying to use `with open()` with Python 2.6 and it gives a syntax error, while it works fine with Python 2.7.3.
Am I missing something, or some import, to make my program work?
Any help would be appreciated.
Br
My code is here:
```
def compare_some_text_of_a_file(self, exportfileTransferFolder, exportfileCheckFilesFolder):
    flag = 0
    error = ""
    with open("check_files/"+exportfileCheckFilesFolder+".txt") as f1, open("transfer-out/"+exportfileTransferFolder) as f2:
        if f1.read().strip() in f2.read():
            print ""
        else:
            flag = 1
            error = exportfileCheckFilesFolder
            error = "Data of file " + error + " do not match with exported data\n"
    if flag == 1:
        raise AssertionError(error)
```
|
2012/08/27
|
[
"https://Stackoverflow.com/questions/12138298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/864924/"
] |
It is the "extended" `with` statement with multiple context expressions which causes your trouble.
In 2.6, instead of
```
with open(...) as f1, open(...) as f2:
    do_stuff()
```
you should add a nesting level and write
```
with open(...) as f1:
    with open(...) as f2:
        do_stuff()
```
[The docs](http://docs.python.org/reference/compound_stmts.html#the-with-statement) say
>
> *Changed in version 2.7:* Support for multiple context expressions.
>
>
>
|
The `with open()` syntax is supported by Python 2.6. On Python 2.4 it is not supported and gives a syntax error. If you need to support Python 2.4, I would suggest something like:
```
def readfile(filename, mode='r'):
    f = open(filename, mode)
    try:
        for line in f:
            yield line
    finally:
        f.close()

for line in readfile(myfile):
    print line
```
|
63,805,737
|
I am trying to implement the following code but it is throwing a TypeError:
```
Traceback (most recent call last)
<ipython-input-29-6ed67c712ed4> in <module>()
28 return model
29
---> 30 model = nvidia_model()
31 print(model.summary())
32 # dead relu problem: when a node in the network essentially dies and only feeds value of 0 to the following nodes.
5 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in validate_kwargs(kwargs, allowed_kwargs, error_message)
776 for kwarg in kwargs:
777 if kwarg not in allowed_kwargs:
--> 778 raise TypeError(error_message, kwarg)
779
780
TypeError: ('Keyword argument not understood:', 'subsample')
```
Defining Nvidia Model
=====================
```
def nvidia_model():
    model = Sequential()
    model.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
    model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(50, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(10, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1))  # output the predicted steering angle
    optimizer = Adam(lr=1e-4)
    model.compile(loss='mse', optimizer=optimizer)
    return model

model = nvidia_model()
print(model.summary())
```
|
2020/09/09
|
[
"https://Stackoverflow.com/questions/63805737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14206313/"
] |
I was stuck on the same error. Here's what I found:
`Convolution2D` is a Keras v1 API (docs: <https://faroit.com/keras-docs/1.2.2/layers/convolutional/>)
`Conv2D` is the newer v2 API (docs: <https://faroit.com/keras-docs/2.0.5/layers/convolutional/>).
I got it working with the following changes:
1. Changed `import Convolution2D` to `import Conv2D`
2. Changed
```
model.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
model.add(Convolution2D(64, 3, 3, activation='relu'))
```
to
```
model.add(Conv2D(24, 5, 2, input_shape = (66, 200, 3), activation = 'relu'))
model.add(Conv2D(36, 5, 2, activation = 'relu'))
model.add(Conv2D(48, 5, 2, activation = 'relu'))
model.add(Conv2D(64, 3, activation = 'relu'))
model.add(Conv2D(64, 3, activation = 'relu'))
```
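In the positional form above, the second argument is the kernel size and the third the stride. A keyword-argument version of the first layer (my sketch, assuming `tensorflow.keras`) makes the mapping explicit:
```
from tensorflow.keras.layers import Conv2D

# Equivalent to Conv2D(24, 5, 2, ...): 24 filters, 5x5 kernel, 2x2 stride
layer = Conv2D(filters=24, kernel_size=(5, 5), strides=(2, 2),
               activation='relu', input_shape=(66, 200, 3))
```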
|
Try using pooling instead of subsample. Usually that will work.
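A rough sketch of that idea (my own, assuming `tensorflow.keras`; not from the original answer): keep the convolution unstrided and downsample with a pooling layer instead:
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D

model = Sequential()
model.add(Conv2D(24, (5, 5), activation='relu', input_shape=(66, 200, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))  # 2x2 downsampling replaces subsample=(2, 2)
```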
|
63,805,737
|
I am trying to implement the following code but it is throwing a TypeError:
```
Traceback (most recent call last)
<ipython-input-29-6ed67c712ed4> in <module>()
28 return model
29
---> 30 model = nvidia_model()
31 print(model.summary())
32 # dead relu problem: when a node in the network essentially dies and only feeds value of 0 to the following nodes.
5 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in validate_kwargs(kwargs, allowed_kwargs, error_message)
776 for kwarg in kwargs:
777 if kwarg not in allowed_kwargs:
--> 778 raise TypeError(error_message, kwarg)
779
780
TypeError: ('Keyword argument not understood:', 'subsample')
```
Defining Nvidia Model
=====================
```
def nvidia_model():
    model = Sequential()
    model.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
    model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(50, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(10, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1))  # output the predicted steering angle
    optimizer = Adam(lr=1e-4)
    model.compile(loss='mse', optimizer=optimizer)
    return model

model = nvidia_model()
print(model.summary())
```
|
2020/09/09
|
[
"https://Stackoverflow.com/questions/63805737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14206313/"
] |
```py
def nvidia_model():
    model = Sequential()
    model.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
    model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(50, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(10, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1))  # output the predicted steering angle
    optimizer = Adam(lr=1e-4)
    model.compile(loss='mse', optimizer=optimizer)
    return model

model = nvidia_model()
print(model.summary())
```
Instead of this, replace the code with the following:
```py
def nvidia_model():
    model = Sequential()
    model.add(Convolution2D(24, (5, 5), subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
    model.add(Convolution2D(36, (5, 5), subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(48, (5, 5), subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(64, (3, 3), activation='relu'))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(50, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(10, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1))  # output the predicted steering angle
    optimizer = Adam(lr=1e-4)
    model.compile(loss='mse', optimizer=optimizer)
    return model

model = nvidia_model()
print(model.summary())
```
(Notice I added parentheses.) If the code still doesn't work, replace `subsample` with `strides` everywhere.
|
Try using pooling instead of subsample. Usually that will work.
|
63,805,737
|
I am trying to implement the following code but it is throwing a TypeError:
```
Traceback (most recent call last)
<ipython-input-29-6ed67c712ed4> in <module>()
28 return model
29
---> 30 model = nvidia_model()
31 print(model.summary())
32 # dead relu problem: when a node in the network essentially dies and only feeds value of 0 to the following nodes.
5 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in validate_kwargs(kwargs, allowed_kwargs, error_message)
776 for kwarg in kwargs:
777 if kwarg not in allowed_kwargs:
--> 778 raise TypeError(error_message, kwarg)
779
780
TypeError: ('Keyword argument not understood:', 'subsample')
```
Defining Nvidia Model
=====================
```
def nvidia_model():
    model = Sequential()
    model.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
    model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(50, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(10, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1))  # output the predicted steering angle
    optimizer = Adam(lr=1e-4)
    model.compile(loss='mse', optimizer=optimizer)
    return model

model = nvidia_model()
print(model.summary())
```
|
2020/09/09
|
[
"https://Stackoverflow.com/questions/63805737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14206313/"
] |
I was stuck on the same error. Here's what I found:
`Convolution2D` is a Keras v1 API (docs: <https://faroit.com/keras-docs/1.2.2/layers/convolutional/>)
`Conv2D` is the newer v2 API (docs: <https://faroit.com/keras-docs/2.0.5/layers/convolutional/>).
I got it working with the following changes:
1. Changed `import Convolution2D` to `import Conv2D`
2. Changed
```
model.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
model.add(Convolution2D(64, 3, 3, activation='relu'))
```
to
```
model.add(Conv2D(24, 5, 2, input_shape = (66, 200, 3), activation = 'relu'))
model.add(Conv2D(36, 5, 2, activation = 'relu'))
model.add(Conv2D(48, 5, 2, activation = 'relu'))
model.add(Conv2D(64, 3, activation = 'relu'))
model.add(Conv2D(64, 3, activation = 'relu'))
```
|
Found the solution!!
You need to pass the kernel size argument as a tuple, so using (5, 5) worked for me.
p.s. this marks the first answer I ever gave, yay
|
63,805,737
|
I am trying to implement the following code but it is throwing a TypeError:
```
Traceback (most recent call last)
<ipython-input-29-6ed67c712ed4> in <module>()
28 return model
29
---> 30 model = nvidia_model()
31 print(model.summary())
32 # dead relu problem: when a node in the network essentially dies and only feeds value of 0 to the following nodes.
5 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in validate_kwargs(kwargs, allowed_kwargs, error_message)
776 for kwarg in kwargs:
777 if kwarg not in allowed_kwargs:
--> 778 raise TypeError(error_message, kwarg)
779
780
TypeError: ('Keyword argument not understood:', 'subsample')
```
Defining Nvidia Model
=====================
```
def nvidia_model():
    model = Sequential()
    model.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
    model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(50, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(10, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1))  # output the predicted steering angle
    optimizer = Adam(lr=1e-4)
    model.compile(loss='mse', optimizer=optimizer)
    return model

model = nvidia_model()
print(model.summary())
```
|
2020/09/09
|
[
"https://Stackoverflow.com/questions/63805737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14206313/"
] |
```py
def nvidia_model():
    model = Sequential()
    model.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
    model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(50, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(10, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1))  # output the predicted steering angle
    optimizer = Adam(lr=1e-4)
    model.compile(loss='mse', optimizer=optimizer)
    return model

model = nvidia_model()
print(model.summary())
```
Instead of this, replace the code with the following:
```py
def nvidia_model():
    model = Sequential()
    model.add(Convolution2D(24, (5, 5), subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
    model.add(Convolution2D(36, (5, 5), subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(48, (5, 5), subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(64, (3, 3), activation='relu'))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(50, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(10, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1))  # output the predicted steering angle
    optimizer = Adam(lr=1e-4)
    model.compile(loss='mse', optimizer=optimizer)
    return model

model = nvidia_model()
print(model.summary())
```
(Notice I added parentheses.) If the code still doesn't work, replace `subsample` with `strides` everywhere.
|
I was stuck on the same error. Here's what I found:
`Convolution2D` is a Keras v1 API (docs: <https://faroit.com/keras-docs/1.2.2/layers/convolutional/>)
`Conv2D` is the newer v2 API (docs: <https://faroit.com/keras-docs/2.0.5/layers/convolutional/>).
I got it working with the following changes:
1. Changed `import Convolution2D` to `import Conv2D`
2. Changed
```
model.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
model.add(Convolution2D(64, 3, 3, activation='relu'))
```
to
```
model.add(Conv2D(24, 5, 2, input_shape = (66, 200, 3), activation = 'relu'))
model.add(Conv2D(36, 5, 2, activation = 'relu'))
model.add(Conv2D(48, 5, 2, activation = 'relu'))
model.add(Conv2D(64, 3, activation = 'relu'))
model.add(Conv2D(64, 3, activation = 'relu'))
```
|
63,805,737
|
I am trying to implement the following code but it is throwing a TypeError:
```
Traceback (most recent call last)
<ipython-input-29-6ed67c712ed4> in <module>()
28 return model
29
---> 30 model = nvidia_model()
31 print(model.summary())
32 # dead relu problem: when a node in the network essentially dies and only feeds value of 0 to the following nodes.
5 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in validate_kwargs(kwargs, allowed_kwargs, error_message)
776 for kwarg in kwargs:
777 if kwarg not in allowed_kwargs:
--> 778 raise TypeError(error_message, kwarg)
779
780
TypeError: ('Keyword argument not understood:', 'subsample')
```
Defining Nvidia Model
=====================
```
def nvidia_model():
    model = Sequential()
    model.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
    model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(50, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(10, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1))  # output the predicted steering angle
    optimizer = Adam(lr=1e-4)
    model.compile(loss='mse', optimizer=optimizer)
    return model

model = nvidia_model()
print(model.summary())
```
|
2020/09/09
|
[
"https://Stackoverflow.com/questions/63805737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14206313/"
] |
```py
def nvidia_model():
    model = Sequential()
    model.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
    model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(50, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(10, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1))  # output the predicted steering angle
    optimizer = Adam(lr=1e-4)
    model.compile(loss='mse', optimizer=optimizer)
    return model

model = nvidia_model()
print(model.summary())
```
Instead of this, replace the code with the following:
```py
def nvidia_model():
    model = Sequential()
    model.add(Convolution2D(24, (5, 5), subsample=(2, 2), input_shape=(66, 200, 3), activation='relu'))
    model.add(Convolution2D(36, (5, 5), subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(48, (5, 5), subsample=(2, 2), activation='relu'))
    model.add(Convolution2D(64, (3, 3), activation='relu'))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(50, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(10, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1))  # output the predicted steering angle
    optimizer = Adam(lr=1e-4)
    model.compile(loss='mse', optimizer=optimizer)
    return model

model = nvidia_model()
print(model.summary())
```
(Notice I added parentheses.) If the code still doesn't work, replace `subsample` with `strides` everywhere.
|
Found the solution!!
You need to pass the kernel size argument as a tuple, so using (5, 5) worked for me.
p.s. this marks the first answer I ever gave, yay
|
12,521,189
|
I'm familiar with programming but new to python:
>
mem = [0] * memloadsize
>
>
>
what does the '[0]' represent?
Is it a built-in array?
|
2012/09/20
|
[
"https://Stackoverflow.com/questions/12521189",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1493578/"
] |
The `[` and `]` characters, in this context, are used to construct array literals:
```
>>> []
[]
>>> [0]
[0]
>>> ['a', 'b', 'c']
['a', 'b', 'c']
```
Multiplying arrays is idiomatic, and generates a new array obtained by repeating the elements of the array the given number of times:
```
>>> ['a'] * 4
['a', 'a', 'a', 'a']
>>> [0] * 9
[0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> [0, 1, 2] * 2
[0, 1, 2, 0, 1, 2]
```
Note that `[` and `]` are also used to index into an existing array. In that case, `[0]` accesses the first element in the array:
```
>>> a = ['first', 'second', 'third']
>>> a[0]
'first'
>>> a[2]
'third'
>>>
```
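One subtlety worth adding (my note, not in the original answer): multiplication repeats *references* to the elements. That is harmless for immutable values like `0`, but surprising for nested lists:
```
>>> grid = [[0] * 3] * 2   # two references to the SAME inner list
>>> grid[0][0] = 9
>>> grid
[[9, 0, 0], [9, 0, 0]]
```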
|
It just means a one element `list` containing just a 0. Multiplying by `memloadsize` gives you a `list` of `memloadsize` zeros.
|
12,521,189
|
I'm familiar with programming but new to python:
>
mem = [0] * memloadsize
>
>
>
what does the '[0]' represent?
Is it a built-in array?
|
2012/09/20
|
[
"https://Stackoverflow.com/questions/12521189",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1493578/"
] |
It just means a one element `list` containing just a 0. Multiplying by `memloadsize` gives you a `list` of `memloadsize` zeros.
|
This command is conceptually equivalent to this:
```
mem = []
for i in xrange(memloadsize):
mem.append(0)
```
|
12,521,189
|
I'm familiar with programming but new to python:
>
mem = [0] * memloadsize
>
>
>
what does the '[0]' represent?
Is it a built-in array?
|
2012/09/20
|
[
"https://Stackoverflow.com/questions/12521189",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1493578/"
] |
The `[` and `]` characters, in this context, are used to construct array literals:
```
>>> []
[]
>>> [0]
[0]
>>> ['a', 'b', 'c']
['a', 'b', 'c']
```
Multiplying arrays is idiomatic, and generates a new array obtained by repeating the elements of the array the given number of times:
```
>>> ['a'] * 4
['a', 'a', 'a', 'a']
>>> [0] * 9
[0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> [0, 1, 2] * 2
[0, 1, 2, 0, 1, 2]
```
Note that `[` and `]` are also used to index into an existing array. In that case, `[0]` accesses the first element in the array:
```
>>> a = ['first', 'second', 'third']
>>> a[0]
'first'
>>> a[2]
'third'
>>>
```
|
This command is conceptually equivalent to this:
```
mem = []
for i in xrange(memloadsize):
mem.append(0)
```
|
6,522,281
|
I've got a program that downloads part01, then part02, etc. of a rar file split across the internet.
My program downloads part01 first, then part02 and so on.
After some tests, I found out that using, for example, UnRAR2 for python, I can extract the first part of the file (an .avi file) contained in the archive and I'm able to play it for the first minutes. When I add another part it extracts a bit more, and so on. What I wonder is: is it possible to make it extract single files WHILE downloading them?
I'd need it to start extracting part01 without having to wait for it to finish downloading... is that possible?
Thank you very much!
Matteo
|
2011/06/29
|
[
"https://Stackoverflow.com/questions/6522281",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/774236/"
] |
You are talking about an .avi file inside the rar archives. Are you sure the archives are actually compressed? [Video files released by the warez scene do not use compression:](http://en.wikipedia.org/wiki/Standard_%28warez%29#Packaging)
>
> Ripped movies are still packaged due to the large filesize, but compression is disallowed and the RAR format is used only as a container. Because of this, modern playback software can easily play a release directly from the packaged files, and even stream it as the release is downloaded (if the network is fast enough).
>
>
>
(I'm thinking VLC, BSPlayer, KMPlayer, Dziobas Rar Player, rarfilesource, rarfs,...)
You can check for the compression as follows:
* Open the first .rar archive in WinRAR. (name.part01.rar or name.rar for *old style volume names*)
* Click the info button.
If *Version to extract* indicates 2.0, then the archive uses no compression. (unless you have decade old rars) You can see *Total size* and *Packed size* will be equal.
>
> is it possible to make it extract
> single files WHILE downloading them?
>
>
>
**Yes.** When no compression is used, you can write your own program to extract the files. (I know of someone who wrote a script to directly download the movie from external rar files; but it's not public and I don't have it.) Because you mentioned Python I suggest you take a look at [rarfile 2.2](http://pypi.python.org/pypi/rarfile/2.2) by Marko Kreen like the author of [pyarrfs](http://pypi.python.org/pypi/pyarrfs/0.5.0) did. The archive is just the file chopped up with headers (rar blocks) added. It will be a copy operation that you need to pause until the next archive is downloaded.
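A minimal sketch of what that copy-style extraction could look like (my own illustration, not from the original answer; it assumes the archives are stored uncompressed as discussed and uses rarfile's zipfile-like API):
```
import rarfile

rf = rarfile.RarFile('name.part01.rar')
for info in rf.infolist():            # zipfile-style listing of entries
    print info.filename, info.file_size
avi = rf.open(rf.namelist()[0])       # file-like stream over the first entry
chunk = avi.read(1024 * 1024)         # read the already-downloaded portion
```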
I strongly believe it is also possible for compressed files. Your approach here will be different because you must use *unrar* to extract the compressed files. I have to add that there is also [a free RARv3 implementation](http://news.slashdot.org/story/11/05/11/0324246/Unarchiver-Provides-LGPL-RARv3-Extraction-Tool) to extract rars implemented in The Unarchiver.
I think this parameter for (un)rar will make it possible:
>
>
> ```
> -vp Pause before each volume
>
> By default RAR asks for confirmation before creating
> or unpacking next volume only for removable disks.
> This switch forces RAR to ask such confirmation always.
> It can be useful if disk space is limited and you wish
> to copy each volume to another media immediately after
> creation.
>
> ```
>
>
It will give you the possibility to pause the extraction until the next archive is downloaded.
>
> I believe that this won't work if the rar was created with the 'solid' option enabled.
>
>
>
When the solid option is used for rars, all packed files are treated as [one big file stream](http://www.win-rar.com/solidarchive.html). This should not cause any problems if you always start from the first file even if it doesn't contain the file you want to extract.
I also think it will work with passworded archives.
|
I highly doubt it. By nature of compression (from my understanding), every bit is needed to uncompress it. It seems that the source you are downloading from has intentionally broken the avi into pieces before compression, but once compression is applied, whatever was compressed becomes one atomic unit. So they kindly broke the whole avi into Parts, but each Part is still an atomic unit.
But I'm not an expert in compression.
The only test I can currently think of is something like: `curl http://example.com/Part01 | unrar`.
|
6,522,281
|
I've got a program that downloads part01, then part02, etc. of a rar file split across the internet.
My program downloads part01 first, then part02 and so on.
After some tests, I found out that using, for example, UnRAR2 for python, I can extract the first part of the file (an .avi file) contained in the archive and I'm able to play it for the first minutes. When I add another part it extracts a bit more, and so on. What I wonder is: is it possible to make it extract single files WHILE downloading them?
I'd need it to start extracting part01 without having to wait for it to finish downloading... is that possible?
Thank you very much!
Matteo
|
2011/06/29
|
[
"https://Stackoverflow.com/questions/6522281",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/774236/"
] |
I highly doubt it. By nature of compression (from my understanding), every bit is needed to uncompress it. It seems that the source you are downloading from has intentionally broken the avi into pieces before compression, but once compression is applied, whatever was compressed becomes one atomic unit. So they kindly broke the whole avi into Parts, but each Part is still an atomic unit.
But I'm not an expert in compression.
The only test I can currently think of is something like: `curl http://example.com/Part01 | unrar`.
|
I don't know if this was asked with a specific language in mind, but it is possible to stream a compressed RAR directly from the internet and have it decompressed on the fly. I can do this with my C# library <http://sharpcompress.codeplex.com/>
The RAR format is actually kind of nice. It has headers preceding each entry and the compressed data itself does not require random access on the stream of bytes.
For multi-part files, you'd have to fully extract part 1 first, then continue writing when part 2 is available.
All of this is possible with my RarReader API. Solid archives are also streamable (in fact, they're only streamable: you can't randomly access files in a solid archive; you pretty much have to extract them all at once).
|
6,522,281
|
I've got a program that downloads part01, then part02, etc. of a rar file split across the internet.
My program downloads part01 first, then part02 and so on.
After some tests, I found out that using, for example, UnRAR2 for python, I can extract the first part of the file (an .avi file) contained in the archive and I'm able to play it for the first minutes. When I add another part it extracts a bit more, and so on. What I wonder is: is it possible to make it extract single files WHILE downloading them?
I'd need it to start extracting part01 without having to wait for it to finish downloading... is that possible?
Thank you very much!
Matteo
|
2011/06/29
|
[
"https://Stackoverflow.com/questions/6522281",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/774236/"
] |
You are talking about an .avi file inside the rar archives. Are you sure the archives are actually compressed? [Video files released by the warez scene do not use compression:](http://en.wikipedia.org/wiki/Standard_%28warez%29#Packaging)
>
> Ripped movies are still packaged due to the large filesize, but compression is disallowed and the RAR format is used only as a container. Because of this, modern playback software can easily play a release directly from the packaged files, and even stream it as the release is downloaded (if the network is fast enough).
>
>
>
(I'm thinking VLC, BSPlayer, KMPlayer, Dziobas Rar Player, rarfilesource, rarfs,...)
You can check for the compression as follows:
* Open the first .rar archive in WinRAR. (name.part01.rar or name.rar for *old style volume names*)
* Click the info button.
If *Version to extract* indicates 2.0, then the archive uses no compression. (unless you have decade old rars) You can see *Total size* and *Packed size* will be equal.
>
> is it possible to make it extract
> single files WHILE downloading them?
>
>
>
**Yes.** When no compression is used, you can write your own program to extract the files. (I know of someone who wrote a script to directly download the movie from external rar files; but it's not public and I don't have it.) Because you mentioned Python I suggest you take a look at [rarfile 2.2](http://pypi.python.org/pypi/rarfile/2.2) by Marko Kreen like the author of [pyarrfs](http://pypi.python.org/pypi/pyarrfs/0.5.0) did. The archive is just the file chopped up with headers (rar blocks) added. It will be a copy operation that you need to pause until the next archive is downloaded.
I strongly believe it is also possible for compressed files. Your approach here will be different because you must use *unrar* to extract the compressed files. I have to add that there is also [a free RARv3 implementation](http://news.slashdot.org/story/11/05/11/0324246/Unarchiver-Provides-LGPL-RARv3-Extraction-Tool) to extract rars implemented in The Unarchiver.
I think this parameter for (un)rar will make it possible:
>
>
> ```
> -vp Pause before each volume
>
> By default RAR asks for confirmation before creating
> or unpacking next volume only for removable disks.
> This switch forces RAR to ask such confirmation always.
> It can be useful if disk space is limited and you wish
> to copy each volume to another media immediately after
> creation.
>
> ```
>
>
It will give you the possibility to pause the extraction until the next archive is downloaded.
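A hypothetical sketch of driving that switch from Python (the exact prompt behaviour varies between unrar versions, so treat the newline confirmation as an assumption to verify):

```python
# Hedged sketch: run `unrar x -vp` and confirm its volume prompt only
# once the next downloaded part exists on disk.
import os
import subprocess
import time

proc = subprocess.Popen(["unrar", "x", "-vp", "name.part01.rar"],
                        stdin=subprocess.PIPE)
part = 2
while proc.poll() is None:
    nxt = "name.part%02d.rar" % part
    while not os.path.exists(nxt):   # wait for the downloader; bound this in real code
        time.sleep(5)
    proc.stdin.write(b"\n")          # confirm: continue with the next volume
    proc.stdin.flush()
    part += 1
```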
>
> I believe that this won't work if the rar was created with the 'solid' option enabled.
>
>
>
When the solid option is used for rars, all packed files are treated as [one big file stream](http://www.win-rar.com/solidarchive.html). This should not cause any problems if you always start from the first file even if it doesn't contain the file you want to extract.
I also think it will work with passworded archives.
|
I don't know if this was asked with a specific language in mind, but it is possible to stream a compressed RAR directly from the internet and have it decompressed on the fly. I can do this with my C# library <http://sharpcompress.codeplex.com/>
The RAR format is actually kind of nice. It has headers preceding each entry and the compressed data itself does not require random access on the stream of bytes.
For multi-part files, you'd have to fully extract part 1 first, then continue writing when part 2 is available.
All of this is possible with my RarReader API. Solid archives are also streamable (in fact, they're only streamable. You can't randomly access files in a solid archive. You pretty much have to extract them all at once.)
|
57,604,719
|
I was trying to install thrift (0.11.0) on my system (macOS 10.14.5), for which I downloaded and extracted the tar file. Then I ran the following commands:
```
./bootstrap.sh
./configure
make
make install
```
But **make install** threw the following error:
```
error: could not create '/usr/lib/python2.7/site-packages': Operation not permitted
```
Then I also tried manually creating site-packages inside /usr/lib/python2.7, but the error message stayed the same.
I have also tried **sudo** while running **make install**, but it didn't help much.
|
2019/08/22
|
[
"https://Stackoverflow.com/questions/57604719",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6729549/"
] |
1. Open thrift's subfolder `lib/py/` and modify the Makefile: change `PY_PREFIX = /usr` to `PY_PREFIX = /Users/amy/python`.
2. `sudo make install`
|
I faced the same problem trying to install thrift on Mac OS.
I found a separate guide for installing thrift on Mac OS, I tried it and it finally worked successfully:
1- Download the boost library from [boost.org](http://www.boost.org/), untar and compile with
```
./bootstrap.sh
sudo ./b2 threading=multi address-model=64 variant=release stage install
```
2- Download [libevent](http://monkey.org/~provos/libevent), untar and compile with
```
./configure --prefix=/usr/local
make
sudo make install
```
3- Download the latest version of [Apache Thrift](https://thrift.apache.org/download), untar and compile with
```
./configure --prefix=/usr/local/ --with-boost=/usr/local --with-libevent=/usr/local
```
Try it and let me know your results.
*Reference: [Apache Thrift - OS X Install](https://thrift.apache.org/docs/install/os_x)*
|
17,577,403
|
I am trying to write an HTML parser in Python that takes as its input a URL or list of URLs and outputs specific data about each of those URLs in the format:
URL: data1: data2
The data points can be found at the exact same HTML node in each of the URLs. They are consistently between the same starting tags and ending tags. If anyone out there would like to help an amateur python programmer get the job done, it would be greatly appreciated. Extra points if you can come up with a way to output the information that can be easily copied and pasted into an excel document for subsequent data analysis!
For example, lets say I would like to output the view count for a particular YouTube video. For the URL <http://www.youtube.com/watch?v=QOdW1OuZ1U0>, the view count is around 3.6 million. For all YouTube videos, this number is found in the following format within the page's source:
```
<span class="watch-view-count ">
3,595,057
</span>
```
Fortunately, these exact tags are found only once on a particular YouTube video's page. These starting and ending tags can be inputted into the program or built-in and modified when necessary. The output of the program would be:
<http://www.youtube.com/watch?v=QOdW1OuZ1U0>: 3,595,057 (or 3595057).
|
2013/07/10
|
[
"https://Stackoverflow.com/questions/17577403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2479631/"
] |
```
import urllib2
from bs4 import BeautifulSoup
url = 'http://www.youtube.com/watch?v=QOdW1OuZ1U0'
f = urllib2.urlopen(url)
data = f.read()
soup = BeautifulSoup(data)
span = soup.find('span', attrs={'class':'watch-view-count'})
print '{}:{}'.format(url, span.text)
```
If you do not want to use `BeautifulSoup`, you can use `re`:
```
import urllib2
import re
url = 'http://www.youtube.com/watch?v=QOdW1OuZ1U0'
f = urllib2.urlopen(url)
data = f.read()
pattern = re.compile('<span class="watch-view-count.*?([\d,]+).*?</span>', re.DOTALL)
r = pattern.search(data)
print '{}:{}'.format(url, r.group(1))
```
As for the outputs, I think you can store them in a csv file.
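For example, a minimal sketch (reusing `url` and `span` from the snippet above; `'ab'` is Python 2's append-binary mode for `csv`):

```
import csv

with open('results.csv', 'ab') as f:   # append one row per scraped URL
    csv.writer(f).writerow([url, span.text.strip()])
```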
|
I prefer `HTMLParser` over `re` for this type of task. However, `HTMLParser` can be a bit tricky. I use module-level lists to store data... I'm sure this is the wrong way of doing it, but it's worked for me on several projects in the past.
```
import urllib2
from HTMLParser import HTMLParser
import csv
position = []
results = [""]
class hp(HTMLParser):
def handle_starttag(self, tag, attrs):
if tag == 'span' and ('class', 'watch-view-count ') in attrs:
position.append('bingo')
def handle_endtag(self, tag):
if tag == 'span' and 'bingo' in position:
position.remove('bingo')
def handle_data(self, data):
if 'bingo' in position:
results[0] += " " + data.strip() + " "
my_pages = ["http://www.youtube.com/watch?v=QOdW1OuZ1U0"]
data = []
for url in my_pages:
response = urllib2.urlopen(url)
page = str(response.read())
parser = hp()
parser.feed(page)
data.append(results[0])
    # reinitialize the shared state lists
position = []
results = [""]
index = 0
with open('/path/to/test.csv', 'wb') as f:
writer = csv.writer(f)
header = ['url', 'output']
writer.writerow(header)
for d in data:
row = [my_pages[index], data[index]]
writer.writerow(row)
index += 1
```
Then just open /path/to/test.csv in Excel
|
55,700,995
|
I have a directory with ~250 .txt files in it. Each of these files has a title like this:
`Abraham Lincoln [December 01, 1862].txt`
`George Washington [October 25, 1790].txt`
etc...
However, these are terrible file names for reading into python and I want to iterate over all of them to change them to a more suitable format.
I've tried similar things for changing single variables that are shared across many files. But I can't wrap my head around how I should iterate over these files and change the formatting of their names while still keeping the same information.
The ideal output would be something like
`1861_12_01_abraham_lincoln.txt`
`1790_10_25_george_washington.txt`
etc...
|
2019/04/16
|
[
"https://Stackoverflow.com/questions/55700995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10587373/"
] |
Please try the straightforward (tedious) bash script:
```
#!/bin/bash
declare -A map=(["January"]="01" ["February"]="02" ["March"]="03" ["April"]="04" ["May"]="05" ["June"]="06" ["July"]="07" ["August"]="08" ["September"]="09" ["October"]="10" ["November"]="11" ["December"]="12")
pat='^([^[]+) \[([A-Za-z]+) ([0-9]+), ([0-9]+)]\.txt$'
for i in *.txt; do
if [[ $i =~ $pat ]]; then
newname="$(printf "%s_%s_%s_%s.txt" "${BASH_REMATCH[4]}" "${map["${BASH_REMATCH[2]}"]}" "${BASH_REMATCH[3]}" "$(tr 'A-Z ' 'a-z_' <<< "${BASH_REMATCH[1]}")")"
mv -- "$i" "$newname"
fi
done
```
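Since the files are destined for Python anyway, here is a hedged Python sketch of the same renaming (it assumes every name matches the `Name [Month DD, YYYY].txt` pattern and that the date parses with `%B %d, %Y`):

```python
import os
import re
from datetime import datetime

pat = re.compile(r'^(.+) \[(.+)\]\.txt$')
for fn in os.listdir('.'):
    m = pat.match(fn)
    if not m:
        continue                          # skip files that don't fit the pattern
    name, datestr = m.groups()
    stamp = datetime.strptime(datestr, '%B %d, %Y').strftime('%Y_%m_%d')
    new = '%s_%s.txt' % (stamp, name.lower().replace(' ', '_'))
    os.rename(fn, new)
```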
|
I like to pull the filename apart, then put it back together.
Also, GNU date can parse out the date, which is simpler than using `sed` or a big `case` statement to convert "October" to "10".
```
#! /usr/bin/bash
if [ "$1" == "" ] || [ "$1" == "--help" ]; then
echo "Give a filename like \"Abraham Lincoln [December 01, 1862].txt\" as an argument"
exit 2
fi
filename="$1"
# remove the brackets
filename=`echo "$filename" | sed -e 's/[\[]//g;s/\]//g'`
# cut out the name
namepart=`echo "$filename" | awk '{ print $1" "$2 }'`
# cut out the date
datepart=`echo "$filename" | awk '{ print $3" "$4" "$5 }' | sed -e 's/\.txt//'`
# format up the date (relies on GNU date)
datepart=`date --date="$datepart" +"%Y_%m_%d"`
# put it back together with underscores, in lower case
final=`echo "$namepart $datepart.txt" | tr '[A-Z]' '[a-z]' | sed -e 's/ /_/g'`
echo mv \"$1\" \"$final\"
```
EDIT: converted to BASH, from Bourne shell.
|
55,700,995
|
I have a directory with ~250 .txt files in it. Each of these files has a title like this:
`Abraham Lincoln [December 01, 1862].txt`
`George Washington [October 25, 1790].txt`
etc...
However, these are terrible file names for reading into python and I want to iterate over all of them to change them to a more suitable format.
I've tried similar things for changing single variables that are shared across many files. But I can't wrap my head around how I should iterate over these files and change the formatting of their names while still keeping the same information.
The ideal output would be something like
`1861_12_01_abraham_lincoln.txt`
`1790_10_25_george_washington.txt`
etc...
|
2019/04/16
|
[
"https://Stackoverflow.com/questions/55700995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10587373/"
] |
```
for file in *.txt; do
# extract parts of the filename to be differently formatted with a regex match
[[ $file =~ (.*)\[(.*)\] ]] || { echo "invalid file $file"; exit; }
# format extracted strings and generate the new filename
formatted_date=$(date -d "${BASH_REMATCH[2]}" +"%Y_%m_%d")
name="${BASH_REMATCH[1]// /_}" # replace spaces in the name with underscores
f="${formatted_date}_${name,,}" # convert name to lower-case and append it to date string
new_filename="${f::-1}.txt" # remove trailing underscore and add `.txt` extension
# do what you need here
echo $new_filename
# mv $file $new_filename
done
```
|
I like to pull the filename apart, then put it back together.
Also, GNU date can parse out the date, which is simpler than using `sed` or a big `case` statement to convert "October" to "10".
```
#! /usr/bin/bash
if [ "$1" == "" ] || [ "$1" == "--help" ]; then
echo "Give a filename like \"Abraham Lincoln [December 01, 1862].txt\" as an argument"
exit 2
fi
filename="$1"
# remove the brackets
filename=`echo "$filename" | sed -e 's/[\[]//g;s/\]//g'`
# cut out the name
namepart=`echo "$filename" | awk '{ print $1" "$2 }'`
# cut out the date
datepart=`echo "$filename" | awk '{ print $3" "$4" "$5 }' | sed -e 's/\.txt//'`
# format up the date (relies on GNU date)
datepart=`date --date="$datepart" +"%Y_%m_%d"`
# put it back together with underscores, in lower case
final=`echo "$namepart $datepart.txt" | tr '[A-Z]' '[a-z]' | sed -e 's/ /_/g'`
echo mv \"$1\" \"$final\"
```
EDIT: converted to BASH, from Bourne shell.
|
70,627,862
|
How can I build a nested python dictionary, based on the **number** of levels, like this?
How can a recursive function build it, given the number of levels as an int?
```
{
"aggs": {
"cat_level_1": {
"terms": {
"field": "cat_level_1"
},
"aggs": {
"cat_level_2": {
"terms": {
"field": "cat_level_2"
},
"aggs": {
"cat_level_3": {
"terms": {
"field": "cat_level_3"
}
}
}
}
}
}
}
}
```
|
2022/01/07
|
[
"https://Stackoverflow.com/questions/70627862",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8881622/"
] |
We can build the dictionary from the deepest level, wrapping the previously built dictionary on each level.
```py
def dict_with_depth_of(n):
return {
"aggs": {
f"cat_level_{n}": {
"terms": {
"field": f"cat_level_{n}"
}
}
}
}
def nested_dict(n):
d = None
for i in reversed(range(1, n + 1)):
new_d = dict_with_depth_of(i)
if d is not None:
            new_d["aggs"][f"cat_level_{i}"].update(d)  # keep the "terms" key while nesting deeper levels
d = new_d
return d
print(nested_dict(3))
```
Output:
```
{'aggs': {'cat_level_1': {'terms': {'field': 'cat_level_1'}, 'aggs': {'cat_level_2': {'terms': {'field': 'cat_level_2'}, 'aggs': {'cat_level_3': {'terms': {'field': 'cat_level_3'}}}}}}}}
```
|
This is the right answer
```
def dict_with_depth_of(n):
return {
"aggs": {
f"cat_level_{n}": {
"terms": {
"field": f"cat_level_{n}"
}
}
}
}
def nested_dict(n):
d = None
for i in reversed(range(1, n + 1)):
new_d = dict_with_depth_of(i)
if d is not None:
new_d["aggs"][f"cat_level_{i}"].update(d)
        d = new_d
    return d
```
|
32,194,643
|
So I am trying to use jinja2 for a simple html template but I keep getting this error when I call `render()`:
```
Warning: IronPythonEvaluator.EvaluateIronPythonScript operation failed.
Traceback (most recent call last):
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\loaders.py", line 125, in load
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 551, in compile
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 470, in _parse
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\parser.py", line 31, in __init__
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 501, in _tokenize
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 494, in preprocess
File "<string>", line 21, in <module>
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 812, in get_template
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 786, in _load_template
UnicodeEncodeError: ('unknown', '\x00', 0, 1, '')
```
Now I understand that jinja2 only works with utf-8 so I am forcing my python code to that formatting:
```
#-*- coding: utf-8 -*-
import clr
import sys
pyt_path = r'C:\Program Files (x86)\IronPython 2.7\Lib'
sys.path.append(pyt_path)
pyt_path1 = r'C:\Program Files (x86)\IronPython 2.7\Lib\MarkupSafe'
sys.path.append(pyt_path1)
pyt_path2 = r'C:\Program Files (x86)\IronPython 2.7\Lib\jinja2'
sys.path.append(pyt_path2)
#The inputs to this node will be stored as a list in the IN variable.
dataEnteringNode = IN
from jinja2 import Environment, FileSystemLoader
j2_env = Environment(loader=FileSystemLoader(r'C:\Users\ksobon\Documents\Visual Studio 2015\Projects\WebSite2'), trim_blocks=True)
temp = j2_env.get_template('HTMLPage2.html')
OUT = temp.render(svgWidth = r'500')
```
Here's the template html file that I am trying to use:
```
<!DOCTYPE html>
<meta charset="utf-8">
<style>
.chart rect {
fill: steelblue;
stroke: white;
}
</style>
<svg class="chart"></svg>
<script src="http://d3js.org/d3.v3.min.js"></script>
<script>
var data = [
{ name: "Locke", value: 42 },
{ name: "Reyes", value: 8 },
{ name: "Ford", value: 15 },
{ name: "Jarrah", value: 16 },
{ name: "Shephard", value: 23 },
{ name: "Kwon", value: 42 }
];
var w = {{svgWidth}},
h = 400;
var x = d3.scale.linear()
.domain([0, 1])
.range([0, w]);
var y = d3.scale.linear()
.domain([0, 100])
.rangeRound([0, h]);
var chart = d3.select("body")
.append("svg:svg")
.attr("class", "chart")
.attr("width", w * data.length - 1)
.attr("height", h);
chart.selectAll("rect")
.data(data)
.enter().append("svg:rect")
.attr("x", function (d, i) { return x(i) - .5; })
.attr("y", function (d) { return h - y(d.value) - .5; })
.attr("width", w)
.attr("height", function (d) { return y(d.value); });
</script>
```
Any ideas what I am doing wrong? I am in IronPython2.7
|
2015/08/25
|
[
"https://Stackoverflow.com/questions/32194643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3263488/"
] |
The button should be of custom type to set an image. Verify your code once again, and set the button to custom type in the Storyboard.
|
I can see you are using an IBOutlet for the button, which means the button comes from a nib; in that case you should change the background image from the Attributes Inspector on the right side of Xcode.
Otherwise, if you still need it to be changed programmatically,
then try this
```
[self.color1 setBackgroundImage:[UIImage imageNamed:@"57_bg_selected.png"] forState:UIControlStateHighlighted];
```
Please note that you should include the extension of your image (.jpg or .png) in the file name.
|
32,194,643
|
So I am trying to use jinja2 for a simple html template but I keep getting this error when I call `render()`:
```
Warning: IronPythonEvaluator.EvaluateIronPythonScript operation failed.
Traceback (most recent call last):
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\loaders.py", line 125, in load
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 551, in compile
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 470, in _parse
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\parser.py", line 31, in __init__
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 501, in _tokenize
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 494, in preprocess
File "<string>", line 21, in <module>
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 812, in get_template
File "C:\Program Files (x86)\IronPython 2.7\Lib\jinja2\jinja2\environment.py", line 786, in _load_template
UnicodeEncodeError: ('unknown', '\x00', 0, 1, '')
```
Now I understand that jinja2 only works with utf-8 so I am forcing my python code to that formatting:
```
#-*- coding: utf-8 -*-
import clr
import sys
pyt_path = r'C:\Program Files (x86)\IronPython 2.7\Lib'
sys.path.append(pyt_path)
pyt_path1 = r'C:\Program Files (x86)\IronPython 2.7\Lib\MarkupSafe'
sys.path.append(pyt_path1)
pyt_path2 = r'C:\Program Files (x86)\IronPython 2.7\Lib\jinja2'
sys.path.append(pyt_path2)
#The inputs to this node will be stored as a list in the IN variable.
dataEnteringNode = IN
from jinja2 import Environment, FileSystemLoader
j2_env = Environment(loader=FileSystemLoader(r'C:\Users\ksobon\Documents\Visual Studio 2015\Projects\WebSite2'), trim_blocks=True)
temp = j2_env.get_template('HTMLPage2.html')
OUT = temp.render(svgWidth = r'500')
```
Here's the template html file that I am trying to use:
```
<!DOCTYPE html>
<meta charset="utf-8">
<style>
.chart rect {
fill: steelblue;
stroke: white;
}
</style>
<svg class="chart"></svg>
<script src="http://d3js.org/d3.v3.min.js"></script>
<script>
var data = [
{ name: "Locke", value: 42 },
{ name: "Reyes", value: 8 },
{ name: "Ford", value: 15 },
{ name: "Jarrah", value: 16 },
{ name: "Shephard", value: 23 },
{ name: "Kwon", value: 42 }
];
var w = {{svgWidth}},
h = 400;
var x = d3.scale.linear()
.domain([0, 1])
.range([0, w]);
var y = d3.scale.linear()
.domain([0, 100])
.rangeRound([0, h]);
var chart = d3.select("body")
.append("svg:svg")
.attr("class", "chart")
.attr("width", w * data.length - 1)
.attr("height", h);
chart.selectAll("rect")
.data(data)
.enter().append("svg:rect")
.attr("x", function (d, i) { return x(i) - .5; })
.attr("y", function (d) { return h - y(d.value) - .5; })
.attr("width", w)
.attr("height", function (d) { return y(d.value); });
</script>
```
Any ideas what I am doing wrong? I am in IronPython2.7
|
2015/08/25
|
[
"https://Stackoverflow.com/questions/32194643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3263488/"
] |
The button should be of custom type to set an image. Verify your code once again, and set the button to custom type in the Storyboard.
|
>
> I have a button and this button has a back color.
>
>
>
Are you trying to set the background color as well as the background image of the button? You can only apply one at a time.
1. Verify if your button's type is System or Custom.
[](https://i.stack.imgur.com/oVVpF.png)
2. As you are having image in interface builder, try to set the background image property in xib/storyboard.
[](https://i.stack.imgur.com/s2jAv.png)
|
72,596,436
|
I've read about and understand [floating point round-off issues](https://docs.python.org/3/tutorial/floatingpoint.html) such as:
```
>>> sum([0.1] * 10) == 1.0
False
>>> 1.1 + 2.2 == 3.3
False
>>> sin(radians(45)) == sqrt(2) / 2
False
```
I also know how to work around these issues with [math.isclose()](https://docs.python.org/3/library/math.html#math.isclose) and [cmath.isclose()](https://docs.python.org/3/library/cmath.html#cmath.isclose).
The question is how to apply those work arounds to Python's match/case statement. I would like this to work:
```
match 1.1 + 2.2:
case 3.3:
print('hit!') # currently, this doesn't match
```
|
2022/06/12
|
[
"https://Stackoverflow.com/questions/72596436",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/424499/"
] |
The key to the solution is to build a wrapper that overrides the `__eq__` method and replaces it with an approximate match:
```
from cmath import isclose
class Approximately(complex):
def __new__(cls, x, /, **kwargs):
result = complex.__new__(cls, x)
result.kwargs = kwargs
return result
def __eq__(self, other):
try:
return isclose(self, other, **self.kwargs)
except TypeError:
return NotImplemented
```
It creates approximate equality tests for both float values and complex values:
```
>>> Approximately(1.1 + 2.2) == 3.3
True
>>> Approximately(1.1 + 2.2, abs_tol=0.2) == 3.4
True
>>> Approximately(1.1j + 2.2j) == 0.0 + 3.3j
True
```
Here is how to use it in a match/case statement:
```
from math import sin, radians

for x in [sum([0.1] * 10), 1.1 + 2.2, sin(radians(45))]:
match Approximately(x):
case 1.0:
print(x, 'sums to about 1.0')
case 3.3:
print(x, 'sums to about 3.3')
case 0.7071067811865475:
print(x, 'is close to sqrt(2) / 2')
case _:
print('Mismatch')
```
This outputs:
```
0.9999999999999999 sums to about 1.0
3.3000000000000003 sums to about 3.3
0.7071067811865475 is close to sqrt(2) / 2
```
|
Raymond's answer is very fancy and ergonomic, but seems like a lot of magic for something that could be much simpler. A more minimal version is to capture the calculated value and explicitly check whether the things are "close", e.g.:
```
import math
match 1.1 + 2.2:
case x if math.isclose(x, 3.3):
print(f"{x} is close to 3.3")
case x:
print(f"{x} wasn't close)
```
I'd also suggest only using `cmath.isclose()` where/when you actually need it; using appropriate types lets you ensure your code is doing what you expect.
The above example is just the minimum code used to demonstrate the matching and, as pointed out in the comments, could be more easily implemented using a traditional `if` statement. At the risk of derailing the original question, this is a somewhat more complete example:
```
import math
from dataclasses import dataclass
@dataclass
class Square:
size: float
@dataclass
class Rectangle:
width: float
height: float
def classify(obj: Square | Rectangle) -> str:
match obj:
case Square(size=x) if math.isclose(x, 1):
return "~unit square"
case Square(size=x):
return f"square, size={x}"
case Rectangle(width=w, height=h) if math.isclose(w, h):
return "~square rectangle"
case Rectangle(width=w, height=h):
return f"rectangle, width={w}, height={h}"
almost_one = 1 + 1e-10
print(classify(Square(almost_one)))
print(classify(Rectangle(1, almost_one)))
print(classify(Rectangle(1, 2)))
```
Not sure if I'd actually use a `match` statement here, but is hopefully more representative!
|
5,684,010
|
I want to change this string
`<p><b> hello world </b></p>. I am playing <b> python </b>`
to:
`<bold><bold>hello world </bold></bold>, I am playing <bold> python </bold>`
I used:
```
import re
pattern = re.compile(r'\<p>(.*?)\</p>|\<b>(.*?)\</b>')
print re.sub(pattern, r'<bold>\1</bold>', "<p><b>hello world</b></p>. I am playing <b> python</b>")
```
It does not output what I want; it raises the error: *unmatched group*
It works in this case:
```
re.sub(pattern, r'<bold>\1</bold>', "<p>hello world</p>. I am playing <p> python</p>")
```
`<bold> hello world </bold>`. I am playing `<bold> python</bold>`
|
2011/04/16
|
[
"https://Stackoverflow.com/questions/5684010",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/293487/"
] |
If you choose not to use regex, then it's as simple as this:
```
d = {'<p>':'<bold>','</p>':'</bold>','<b>':'<bold>','</b>':'</bold>'}
s = '<p><b> hello world </b></p>. I am playing <b> python </b>'
for k,v in d.items():
s = s.replace(k,v)
```
|
The problem is because the first group is the one within `<p></p>` and the second group is within `<b></b>` in the regexp. However, in your substitution you are referring to the first group when, if it matched to `<b></b>`, there wasn't one. I offer a couple of solutions.
First,
```
>>> pattern = re.compile(r'<(p|b)>(.*?)</\1>')
>>> print re.sub(pattern, r'<bold>\2</bold>',
"<p><b>hello world</b></p>. I am playing <b> python</b>")
<bold><b>hello world</b></bold>. I am playing <bold> python</bold>
```
will match a given pair of tags. However, as you can see, it would have to be used twice on the string because when it matched the `<p></p>` tags, it skipped over the nested `<b></b>` tags.
Here's the option that I would go with:
```
>>> pattern = re.compile(r'<(/?)[pb]>')
>>> print re.sub(pattern, r'<\1bold>',
"<p><b>hello world</b></p>. I am playing <b> python</b>")
<bold><bold>hello world</bold></bold>. I am playing <bold> python</bold>
```
|
5,684,010
|
I want to change this string
`<p><b> hello world </b></p>. I am playing <b> python </b>`
to:
`<bold><bold>hello world </bold></bold>, I am playing <bold> python </bold>`
I used:
```
import re
pattern = re.compile(r'\<p>(.*?)\</p>|\<b>(.*?)\</b>')
print re.sub(pattern, r'<bold>\1</bold>', "<p><b>hello world</b></p>. I am playing <b> python</b>")
```
It does not output what I want, it complains error: *unmatched group*
It works in this case:
```
re.sub(pattern, r'<bold>\1</bold>', "<p>hello world</p>. I am playing <p> python</p>")
```
`<bold> hello world </bold>`. I am playing `<bold> python</bold>`
|
2011/04/16
|
[
"https://Stackoverflow.com/questions/5684010",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/293487/"
] |
Although I don't recommend using Regex for parsing HTML (there are libraries for that purpose in almost every language), this should work:
```
text = "<p><b> hello world </b></p>. I am playing <b> python </b>"
import re
pattern1 = re.compile(r'\<p>(.*?)\</p>')
pattern2 = re.compile(r'\<b>(.*?)\</b>')
replaced = re.sub(pattern1, r'<bold>\1</bold>', text)
replaced = re.sub(pattern2, r'<bold>\1</bold>', replaced)
```
I think the problem you're having is because of how Python handles groups.
Test the following and you'll see what I mean:
```
text = "<p><b> hello world </b></p>. I am playing <b> python </b>"
import re
pattern = re.compile(r'\<p>(.*?)\</p>|\<b>(.*?)\</b>')
for match in pattern.finditer(text):
print match.groups()
```
You will see the following:
```
('<b> hello world </b>', None) # Here captured the 1st group
(None, ' python ') # Here the 2nd ;)
```
And anyway, take into account that it first matched what is between `<p></p>`, so it took `<b> hello world </b>` (something you would like to match too) as the first match. Maybe changing the order of the alternatives in the compiled `pattern` would solve this, but then the opposite could happen (having `<b><p> ... </p></b>`).
I wish I could provide more info, but I'm not very good with regex in Python; C# handles them differently.
**Edit:**
I understand you might want to do this using regex for learning/testing purpose, don't know, but in production code I would go for another alternative (like the one @Senthil gave you) or just use a [HTML Parser](http://www.google.com/search?q=Python+%2B+HTML+parser).
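For completeness, a hedged sketch of the parser route with BeautifulSoup (bs4 assumed installed); renaming the tags in place sidesteps the group-numbering problem entirely:

```
from bs4 import BeautifulSoup

s = '<p><b> hello world </b></p>. I am playing <b> python </b>'
soup = BeautifulSoup(s, 'html.parser')
for tag in soup.find_all(['p', 'b']):
    tag.name = 'bold'                  # rename the tag in place
print(soup)
# <bold><bold> hello world </bold></bold>. I am playing <bold> python </bold>
```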
|
The problem is because the first group is the one within `<p></p>` and the second group is within `<b></b>` in the regexp. However, in your substitution you are referring to the first group when, if it matched to `<b></b>`, there wasn't one. I offer a couple of solutions.
First,
```
>>> pattern = re.compile(r'<(p|b)>(.*?)</\1>')
>>> print re.sub(pattern, r'<bold>\2</bold>',
"<p><b>hello world</b></p>. I am playing <b> python</b>")
<bold><b>hello world</b></bold>. I am playing <bold> python</bold>
```
will match a given pair of tags. However, as you can see, it would have to be used twice on the string because when it matched the `<p></p>` tags, it skipped over the nested `<b></b>` tags.
Here's the option that I would go with:
```
>>> pattern = re.compile(r'<(/?)[pb]>')
>>> print re.sub(pattern, r'<\1bold>',
"<p><b>hello world</b></p>. I am playing <b> python</b>")
<bold><bold>hello world</bold></bold>. I am playing <bold> python</bold>
```
|
5,931,386
|
I help to maintain a package for python called nxt-python. It uses metaclasses to define the methods of a control object. Here's the method that defines the available functions:
```
class _Meta(type):
'Metaclass which adds one method for each telegram opcode'
def __init__(cls, name, bases, dict):
super(_Meta, cls).__init__(name, bases, dict)
for opcode in OPCODES:
poll_func, parse_func = OPCODES[opcode]
m = _make_poller(opcode, poll_func, parse_func)
setattr(cls, poll_func.__name__, m)
```
I want to be able to add a different docstring to each of these methods that it adds. `m` is a function returned by `_make_poller()`. Any ideas? Is there some way to work around the Python restriction on changing docstrings?
|
2011/05/09
|
[
"https://Stackoverflow.com/questions/5931386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/495504/"
] |
For plain functions:
```
def f(): # for demonstration
pass
f.__doc__ = "Docstring!"
help(f)
```
This works in both python2 and python3, on functions with and without docstrings defined. You can also do `+=`. Note that it is `__doc__` and not `__docs__`.
For methods, you need to use the `__func__` attribute of the method:
```
class MyClass(object):
def myMethod(self):
pass
MyClass.myMethod.__func__.__doc__ = "A really cool method"
```
|
You may also use [setattr](http://docs.python.org/library/functions.html#setattr) on the class/function object and set the docstring.
```
setattr(foo,'__doc__',"""My Doc string""")
```
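Applied to the metaclass in the question, a hedged sketch (the body of `_make_poller` below is invented for illustration): set `__doc__` on each generated function before `setattr` attaches it to the class:

```
def _make_poller(opcode, poll_func, parse_func):
    def m(self, *args):
        # ... build, send and parse the telegram for `opcode` ...
        pass
    m.__name__ = poll_func.__name__
    m.__doc__ = 'Telegram opcode 0x%02x (%s).' % (opcode, poll_func.__name__)
    return m
```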
|
41,252,289
|
What's the syntax for *exclusive* `min` and `max` arguments for redis `zcount` command in python (redis-py)? It's not alluded to in the [documentation](https://redis-py.readthedocs.io/en/latest/).
Would it be:
```
minimum = time.time() - 2000
maximum = time.time()
my_server.zadd(sorted_set, '('+str(minimum), maximum)
```
|
2016/12/20
|
[
"https://Stackoverflow.com/questions/41252289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4936905/"
] |
The [unit tests](https://github.com/andymccurdy/redis-py/blob/eae07e76b6524e6772be60169549766e7826419a/tests/test_commands.py) give some examples:
```
def test_zcount(self, r):
r.zadd('a', a1=1, a2=2, a3=3)
assert r.zcount('a', '-inf', '+inf') == 3
assert r.zcount('a', 1, 2) == 2
assert r.zcount('a', 10, 20) == 0
```
This can help…
|
What about using `sys.float_info.epsilon`? It is the difference between 1.0 and the smallest float greater than 1.0, i.e. the finest resolvable step near 1.0:
```
import sys
import time

minimum = time.time() - 2000
maximum = time.time()
my_server.zadd(sorted_set, minimum + sys.float_info.epsilon, maximum)
```
Or, with `-` for maximum:
```
minimum = time.time() - 2000
maximum = time.time()
my_server.zadd(sorted_set, minimum, maximum - sys.float_info.epsilon)
```
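That said, redis score ranges natively support exclusive bounds via a `(` prefix, and redis-py passes string bounds through verbatim; a hedged sketch closer to the original `zcount` question (reusing `my_server` and `sorted_set` from above):

```
import time

minimum = time.time() - 2000
maximum = time.time()
# '(' makes the lower bound exclusive; the upper bound stays inclusive
count = my_server.zcount(sorted_set, '(%f' % minimum, maximum)
```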
|
35,422,156
|
This is the code that I am working with:
```
import pandas as pd
z = pd.Series(data = [1,2,3,4,5,6,7], index = xrange(1,8))
array = []
for i in range(1,8):
array.append(z[i]*2)
print array
```
It does exactly what I tell it to, but not what I want, because I can't figure out how to carry the updated value through the iteration. This is the printed output:
```
[2]
[2, 4]
[2, 4, 6]
[2, 4, 6, 8]
[2, 4, 6, 8, 10]
[2, 4, 6, 8, 10, 12]
[2, 4, 6, 8, 10, 12, 14]
```
What I want is for python to use the updated value in array so the desired output would be:
```
[2]
[2, 4]
[2, 4, 8]
[2, 4, 8, 16]
[2, 4, 8, 16, 32]
[2, 4, 8, 16, 32, 64]
[2, 4, 8, 16, 32, 64, 128]
```
Thank you for your help.
**Edit**
The example I first used was too simple so please answer using the example code below
```
import pandas as pd
sample = pd.Series(data = [ -3.2 , 30.66, 7.71, 9.87], index = range(0,4))
testarray = []
for i in range(0,4):
testarray.append(100000*(1+sample.values[i]/100))
print testarray
```
This produces
```
[96800.0, 130660.0, 107710.0, 109870.0]
```
When the desired numbers are:
96800
126478.88
136230.4016
149676.3423
So instead of it using 100000 I want it to use 96800 for the second iteration and so on. Thank you!
|
2016/02/16
|
[
"https://Stackoverflow.com/questions/35422156",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5020060/"
] |
You want to use the last value of the accumulation list in the expression.
For example:
```
data = [ -3.2 , 30.66, 7.71, 9.87] # the input list for `sample`
In [686]: test=[100000]
In [687]: for i in range(4):
test.append(test[-1]*(1+data[i]/100))
In [688]: test
Out[688]: [100000, 96800.0, 126478.88, 136230.401648, 149676.3422906576]
```
I could start this with `test=[]`, but then I'd have to test whether the list was empty, and use `100000` instead of `test[-1]`. So putting the `100000` in the list to start with is logically simpler.
Another option is to maintain a temporary variable that is updated each iteration:
```
In [689]: mult=100000
In [690]: test=[]
In [691]: for i in range(4):
test.append(mult*(1+data[i]/100))
mult=test[-1]
```
Or
```
mult *= (1+data[i]/100)
test.append(mult)
```
But since this is `pandas` I could also do the calculation with one vectorized call. The `numpy` array equivalent is:
```
In [697]: data_arr=np.array(data)
In [698]: np.cumprod(1+data_arr/100)
Out[698]: array([ 0.968 , 1.2647888 , 1.36230402, 1.49676342])
```
`cumprod` is the cumulative product (like the more common cumulative sum).
---
Your first example can be produced with:
```
In [709]: np.cumprod([2 for _ in range(7)])
Out[709]: array([ 2, 4, 8, 16, 32, 64, 128])
In [710]: np.cumprod(np.ones(7,int)*2)
Out[710]: array([ 2, 4, 8, 16, 32, 64, 128])
```
|
I think that what you're trying to do is to compute powers of two, instead of multiplications of two:
```
array.append(2**z[i])
```
|
20,993,084
|
I am new to Python and have been stuck on an issue; please help me solve it. I have created a sqlite database, created a table, and inserted values into it, but the problem is I don't know how to display that data from the database in a table view in Python. Thanks in advance.
```
db_con = sqlite3.Connection
db_name = "./patientData.db"
createDb = sqlite3.connect(db_name)
queryCurs = createDb.cursor()
queryCurs.execute('''CREATE TABLE IF NOT EXISTS PATIENT
(NAME TEXT NOT NULL, ID INTEGER PRIMARY KEY, AGE INTEGER NOT NULL, GENDER TEXT NOT NULL , EYE_TYPE TEXT NOT NULL)''')
pName = self.patientEdit.text()
pId =self.patientidEdit.text()
#pId1 = int(pId)
pAge = self.ageEdit.text()
#pAge1 = int(pAge)
pGender = self.patientgend.text()
pEye_type = self.eyeTypeEdit.text()
queryCurs.execute('''INSERT INTO PATIENT(NAME,ID,AGE, GENDER,EYE_TYPE) VALUES(?, ?, ?, ?, ?)''',(pName, pId, pAge, pGender, pEye_type))
print ('Inserted row')
createDb.commit()
```
Now how can I display the data in a tableview/listview? Any example code would also be helpful.
|
2014/01/08
|
[
"https://Stackoverflow.com/questions/20993084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2440020/"
] |
This is a short, albeit complete, example of how to achieve the expected result.
The trick is to define a [QSqlQueryModel](http://srinikom.github.io/pyside-docs/PySide/QtSql/QSqlQueryModel.html?highlight=qsqlquerymodel#PySide.QtSql.QSqlQueryModel) and pass it to a [QTableView](http://srinikom.github.io/pyside-docs/PySide/QtGui/QTableView.html); in this way you use the PyQt4 SQL modules instead of the `sqlite3` module, and the table can loop over the query result automatically.
```
from PyQt4.QtSql import QSqlQueryModel,QSqlDatabase,QSqlQuery
from PyQt4.QtGui import QTableView,QApplication
import sys
app = QApplication(sys.argv)
db = QSqlDatabase.addDatabase("QSQLITE")
db.setDatabaseName("patientData.db")
db.open()
projectModel = QSqlQueryModel()
projectModel.setQuery("select * from patient",db)
projectView = QTableView()
projectView.setModel(projectModel)
projectView.show()
app.exec_()
```
|
I know this question was asked a long time ago and the accepted answer is for PyQt4. Because the API has changed a bit in PyQt5, I hope my answer can help someone using PyQt5.
```
from PyQt5 import QtWidgets, QtSql
# connect to postgresql
db = QtSql.QSqlDatabase.addDatabase("QPSQL")
db.setHostName(**)
db.setDatabaseName(**)
db.setPort(**) # int
db.setUserName(**)
db.setPassword(**)
# create tableview
tableView = QtWidgets.QTableView()
# create sqlquery
query = QtSql.QSqlQuery()
result = query.exec_("""select * from "table" """)
if result:
model = QtSql.QSqlTableModel(db=db)
model.setQuery(query)
tableView.setModel(model)
tableView.show()
```
|
62,713,159
|
I have a csv file with names (`last_name`, `name`, `age`) and I want to convert all age attributes into integers. This is one way, but I guess there is a more pythonic way to do it? I tried a list comprehension but it didn't quite work as I wanted.
```
import csv
with open("names.csv") as names_file:
head , *names = csv.reader(names_file)
names = [line for line in names]
for i in range(len(names)):
names[i][2] = int(names[i][2])
```
Thanks in advance.
|
2020/07/03
|
[
"https://Stackoverflow.com/questions/62713159",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13607895/"
] |
Best way to handle this:
```
import pandas as pd
df = pd.read_csv('names.csv')
df["age"] = pd.to_numeric(df["age"])
```
If you want a list just do this:
```py
list_ = df['age'].to_list()
print(list_)
```
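If you also want the converted data back on disk, a one-liner (assuming the same column layout) is:

```py
df.to_csv('names.csv', index=False)   # write the integer ages back out
```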
|
I would do:
```
names = [['A', 'B', '20'], ['C', 'D', '30'], ['E', 'F', '40']] # sample data
for i in names:
i[2] = int(i[2])
print(names)
```
Which is more succinct than:
```
for i in range(len(names)):
names[i][2] = int(names[i][2])
```
If you have to use list-comprehension then you might do:
```
names = [['A', 'B', '20'], ['C', 'D', '30'], ['E', 'F', '40']]
names = [[int(j) if inx==2 else j for inx, j in enumerate(i)] for i in names]
print(names)
```
Output:
```
[['A', 'B', 20], ['C', 'D', 30], ['E', 'F', 40]]
```
Note that this harnesses nesting and as such is less desirable than the above solution in terms of readability.
|
62,713,159
|
I have a csv file with names (`last_name`, `name`, `age`) and I want to convert all age attributes into integers. This is one way, but I guess there is a more pythonic way to do it? I tried a list comprehension but it didn't quite work as I wanted.
```
import csv
with open("names.csv") as names_file:
head , *names = csv.reader(names_file)
names = [line for line in names]
for i in range(len(names)):
names[i][2] = int(names[i][2])
```
Thanks in advance.
|
2020/07/03
|
[
"https://Stackoverflow.com/questions/62713159",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13607895/"
] |
```
import csv
with open("names.csv") as names_file:
head, *names = csv.reader(names_file)
names = [[f, l, int(a)] for f, l, a in names]
```
|
I would do:
```
names = [['A', 'B', '20'], ['C', 'D', '30'], ['E', 'F', '40']] # sample data
for i in names:
i[2] = int(i[2])
print(names)
```
Which is more succinct than:
```
for i in range(len(names)):
names[i][2] = int(names[i][2])
```
If you have to use list-comprehension then you might do:
```
names = [['A', 'B', '20'], ['C', 'D', '30'], ['E', 'F', '40']]
names = [[int(j) if inx==2 else j for inx, j in enumerate(i)] for i in names]
print(names)
```
Output:
```
[['A', 'B', 20], ['C', 'D', 30], ['E', 'F', 40]]
```
Note that this harnesses nesting and as such is less desirable than the above solution in terms of readability.
|
33,982,834
|
I have been trying to convert some code into a try statement but I can't seem to get anything working.
Here is my code in pseudo code:
```
start
run function
check for user input ('Would you like to test another variable? (y/n) ')
if: yes ('y') restart from top
elif: no ('n') exit program (loop is at end of program)
else: return an error saying that the input is invalid.
```
And here is my code (which works) in python 3.4
```
run = True
while run == True:
spuriousCorrelate(directory)
cont = True
while cont == True:
choice = input('Would you like to test another variable? (y/n) ')
if choice == 'y':
cont = False
elif choice == 'n':
run = False
cont = False
else:
print('This is not a valid answer please try again.')
run = True
cont = True
```
Now what is the proper way for me to convert this into a try statement or to neaten my code somewhat?
This isn't a copy of the referenced post, as I am trying to manage two nested statements rather than only get the correct answer.
|
2015/11/29
|
[
"https://Stackoverflow.com/questions/33982834",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5552713/"
] |
If you want to make your code neater, you should consider having
`while run:`
instead of
`while run == True:`
and also remove the last two lines, because setting `run` and `cont` to `True` again isn't necessary (their value didn't change).
Furthermore, I think that a `try - except` block would be useful in the case of an integer input, for example:
```
num = input("Please enter an integer: ")
try:
num = int(num)
except ValueError:
print("Error,", num, "is not a number.")
```
In your case though I think it's better to stick with `if - elif - else` blocks.
|
>
> Ok so as a general case I will try to avoid try...except blocks
>
>
>
Don't do this. Use the right tool for the job.
Use `raise` to signal that your code can't (or shouldn't) deal with the scenario.
Use `try-except` to process that *signal*.
>
> Now what is the proper way for me to convert this into a try statement?
>
>
>
Don't convert.
You don't have anything that `raise`s in your code, so there is no point of `try-except`.
>
> What is the proper way to neaten my code somewhat?
>
>
>
Get rid of your flag variables (`run`, `cont`). You have `break`, use it!
This is the preferred way of implementing `do-while`, as the Python docs say; unfortunately, I cannot find the link right now.
If someone finds it, feel free to edit my answer to include it.
```
def main():
while True: # while user wants to test variables
spuriousCorrelate(directory) # or whatever your program is doing
while True: # while not received valid answer
choice = input('Would you like to test another variable? (y/n) ')
if choice == 'y':
break # let's test next variable
elif choice == 'n':
return # no more testing, exit whole program
else:
print('This is not a valid answer please try again.')
```
|
14,286,480
|
I tagged python and perl in this only because that's what I've used thus far. If anyone knows a better way to go about this I'd certainly be willing to try it out. Anyway, my problem:
I need to create an input file for a gene prediction program that follows the following format:
```
seq1 5 15
seq1 20 34
seq2 50 48
seq2 45 36
seq3 17 20
```
Where seq# is the geneID and the numbers to the right are the positions of exons within an open reading frame. Now I have this information, in a .gff3 file that has a lot of other information. I can open this with excel and easily delete the columns with non-relevant data. Here's how it's arranged now:
```
PITG_00002 . gene 2 397 . + . ID=g.1;Name=ORF%
PITG_00002 . mRNA 2 397 . + . ID=m.1;
**PITG_00002** . exon **2 397** . + . ID=m.1.exon1;
PITG_00002 . CDS 2 397 . + . ID=cds.m.1;
PITG_00004 . gene 1 1275 . + . ID=g.3;Name=ORF%20g
PITG_00004 . mRNA 1 1275 . + . ID=m.3;
**PITG_00004** . exon **1 1275** . + . ID=m.3.exon1;P
PITG_00004 . CDS 1 1275 . + . ID=cds.m.3;P
PITG_00004 . gene 1397 1969 . + . ID=g.4;Name=
PITG_00004 . mRNA 1397 1969 . + . ID=m.4;
**PITG_00004** . exon **1397 1969** . + . ID=m.4.exon1;
PITG_00004 . CDS 1397 1969 . + . ID=cds.m.4;
```
---
So I need only the data that is in bold. For example,
```
PITG_0002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
```
Any help you could give would be greatly appreciated, thanks!
Edit: Well I messed up the formatting. Anything that is between the \*\*'s is what I need lol.
|
2013/01/11
|
[
"https://Stackoverflow.com/questions/14286480",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1784467/"
] |
In Unix:
```
grep " exon " <file.gff3 |
sed -E "s/^([^ ]+) +[.] +exon +([0-9]+) +([0-9]+).*$/\1 \2 \3/"
```
|
For pedestrians:
(this is Python)
```
with open(data_file) as f:
for line in f:
tokens = line.split()
        if len(tokens) > 4 and tokens[2] == 'exon':
print tokens[0], tokens[3], tokens[4]
```
which prints
```
PITG_00002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
```
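And since the goal is an input file for the gene predictor rather than stdout, a hedged variation that writes the three columns out (`exons.txt` is an assumed output name):

```
with open(data_file) as f, open('exons.txt', 'w') as out:
    for line in f:
        tokens = line.split()
        if len(tokens) > 4 and tokens[2] == 'exon':
            out.write('%s %s %s\n' % (tokens[0], tokens[3], tokens[4]))
```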
|
14,286,480
|
I tagged python and perl in this only because that's what I've used thus far. If anyone knows a better way to go about this I'd certainly be willing to try it out. Anyway, my problem:
I need to create an input file for a gene prediction program that follows the following format:
```
seq1 5 15
seq1 20 34
seq2 50 48
seq2 45 36
seq3 17 20
```
Where seq# is the geneID and the numbers to the right are the positions of exons within an open reading frame. Now I have this information, in a .gff3 file that has a lot of other information. I can open this with excel and easily delete the columns with non-relevant data. Here's how it's arranged now:
```
PITG_00002 . gene 2 397 . + . ID=g.1;Name=ORF%
PITG_00002 . mRNA 2 397 . + . ID=m.1;
**PITG_00002** . exon **2 397** . + . ID=m.1.exon1;
PITG_00002 . CDS 2 397 . + . ID=cds.m.1;
PITG_00004 . gene 1 1275 . + . ID=g.3;Name=ORF%20g
PITG_00004 . mRNA 1 1275 . + . ID=m.3;
**PITG_00004** . exon **1 1275** . + . ID=m.3.exon1;P
PITG_00004 . CDS 1 1275 . + . ID=cds.m.3;P
PITG_00004 . gene 1397 1969 . + . ID=g.4;Name=
PITG_00004 . mRNA 1397 1969 . + . ID=m.4;
**PITG_00004** . exon **1397 1969** . + . ID=m.4.exon1;
PITG_00004 . CDS 1397 1969 . + . ID=cds.m.4;
```
---
So I need only the data that is in bold. For example,
```
PITG_0002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
```
Any help you could give would be greatly appreciated, thanks!
Edit: Well I messed up the formatting. Anything that is between the \*\*'s is what I need lol.
|
2013/01/11
|
[
"https://Stackoverflow.com/questions/14286480",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1784467/"
] |
For pedestrians:
(this is Python)
```
with open(data_file) as f:
for line in f:
tokens = line.split()
        if len(tokens) > 4 and tokens[2] == 'exon':
print tokens[0], tokens[3], tokens[4]
```
which prints
```
PITG_00002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
```
|
Here's a Perl script option `perl scriptName.pl file.gff3`:
```
use strict;
use warnings;
while (<>) {
print "@{ [ (split)[ 0, 3, 4 ] ] }\n" if /exon/;
}
```
Output:
```
PITG_00002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
```
Or you could just do the following:
```
perl -n -e 'print "@{ [ (split)[ 0, 3, 4 ] ] }\n" if /exon/' file.gff3
```
To save the data to a file:
```
use strict;
use warnings;
open my $inFH, '<', 'file.gff3' or die $!;
open my $outFH, '>>', 'data.txt' or die $!;
while (<$inFH>) {
print $outFH "@{ [ (split)[ 0, 3, 4 ] ] }\n" if /exon/;
}
```
|
14,286,480
|
I tagged python and perl in this only because that's what I've used thus far. If anyone knows a better way to go about this I'd certainly be willing to try it out. Anyway, my problem:
I need to create an input file for a gene prediction program that follows the following format:
```
seq1 5 15
seq1 20 34
seq2 50 48
seq2 45 36
seq3 17 20
```
Where seq# is the geneID and the numbers to the right are the positions of exons within an open reading frame. Now I have this information, in a .gff3 file that has a lot of other information. I can open this with excel and easily delete the columns with non-relevant data. Here's how it's arranged now:
```
PITG_00002 . gene 2 397 . + . ID=g.1;Name=ORF%
PITG_00002 . mRNA 2 397 . + . ID=m.1;
**PITG_00002** . exon **2 397** . + . ID=m.1.exon1;
PITG_00002 . CDS 2 397 . + . ID=cds.m.1;
PITG_00004 . gene 1 1275 . + . ID=g.3;Name=ORF%20g
PITG_00004 . mRNA 1 1275 . + . ID=m.3;
**PITG_00004** . exon **1 1275** . + . ID=m.3.exon1;P
PITG_00004 . CDS 1 1275 . + . ID=cds.m.3;P
PITG_00004 . gene 1397 1969 . + . ID=g.4;Name=
PITG_00004 . mRNA 1397 1969 . + . ID=m.4;
**PITG_00004** . exon **1397 1969** . + . ID=m.4.exon1;
PITG_00004 . CDS 1397 1969 . + . ID=cds.m.4;
```
---
So I need only the data that is in bold. For example,
```
PITG_0002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
```
Any help you could give would be greatly appreciated, thanks!
Edit: Well I messed up the formatting. Anything that is between the \*\*'s is what I need lol.
|
2013/01/11
|
[
"https://Stackoverflow.com/questions/14286480",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1784467/"
] |
It looks like your data is tab-separated.
This Perl program will print columns 1, 4 and 5 from all records that have `exon` in the third column. You need to change the file name in the `open` statement to your actual file name.
```
use strict;
use warnings;
open my $fh, '<', 'genes.gff3' or die $!;
while (<$fh>) {
chomp;
my @fields = split /\t/;
next unless @fields >= 5 and $fields[2] eq 'exon';
print join("\t", @fields[0,3,4]), "\n";
}
```
**output**
```
PITG_00002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
```
|
For pedestrians:
(this is Python)
```
with open(data_file) as f:
for line in f:
tokens = line.split()
        if len(tokens) > 4 and tokens[2] == 'exon':
print tokens[0], tokens[3], tokens[4]
```
which prints
```
PITG_00002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
```
|
14,286,480
|
I tagged python and perl in this only because that's what I've used thus far. If anyone knows a better way to go about this I'd certainly be willing to try it out. Anyway, my problem:
I need to create an input file for a gene prediction program that follows the following format:
```
seq1 5 15
seq1 20 34
seq2 50 48
seq2 45 36
seq3 17 20
```
Where seq# is the geneID and the numbers to the right are the positions of exons within an open reading frame. Now I have this information, in a .gff3 file that has a lot of other information. I can open this with excel and easily delete the columns with non-relevant data. Here's how it's arranged now:
```
PITG_00002 . gene 2 397 . + . ID=g.1;Name=ORF%
PITG_00002 . mRNA 2 397 . + . ID=m.1;
**PITG_00002** . exon **2 397** . + . ID=m.1.exon1;
PITG_00002 . CDS 2 397 . + . ID=cds.m.1;
PITG_00004 . gene 1 1275 . + . ID=g.3;Name=ORF%20g
PITG_00004 . mRNA 1 1275 . + . ID=m.3;
**PITG_00004** . exon **1 1275** . + . ID=m.3.exon1;P
PITG_00004 . CDS 1 1275 . + . ID=cds.m.3;P
PITG_00004 . gene 1397 1969 . + . ID=g.4;Name=
PITG_00004 . mRNA 1397 1969 . + . ID=m.4;
**PITG_00004** . exon **1397 1969** . + . ID=m.4.exon1;
PITG_00004 . CDS 1397 1969 . + . ID=cds.m.4;
```
---
So I need only the data that is in bold. For example,
```
PITG_0002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
```
Any help you could give would be greatly appreciated, thanks!
Edit: Well I messed up the formatting. Anything that is between the \*\*'s is what I need lol.
|
2013/01/11
|
[
"https://Stackoverflow.com/questions/14286480",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1784467/"
] |
In Unix:
```
grep " exon " <file.gff3 |
sed -E "s/^([^ ]+) +[.] +exon +([0-9]+) +([0-9]+).*$/\1 \2 \3/"
```
|
Here's a Perl script option `perl scriptName.pl file.gff3`:
```
use strict;
use warnings;
while (<>) {
print "@{ [ (split)[ 0, 3, 4 ] ] }\n" if /exon/;
}
```
Output:
```
PITG_00002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
```
Or you could just do the following:
```
perl -n -e 'print "@{ [ (split)[ 0, 3, 4 ] ] }\n" if /exon/' file.gff3
```
To save the data to a file:
```
use strict;
use warnings;
open my $inFH, '<', 'file.gff3' or die $!;
open my $outFH, '>>', 'data.txt' or die $!;
while (<$inFH>) {
print $outFH "@{ [ (split)[ 0, 3, 4 ] ] }\n" if /exon/;
}
```
|
14,286,480
|
I tagged python and perl in this only because that's what I've used thus far. If anyone knows a better way to go about this I'd certainly be willing to try it out. Anyway, my problem:
I need to create an input file for a gene prediction program that follows the following format:
```
seq1 5 15
seq1 20 34
seq2 50 48
seq2 45 36
seq3 17 20
```
Where seq# is the geneID and the numbers to the right are the positions of exons within an open reading frame. Now I have this information, in a .gff3 file that has a lot of other information. I can open this with excel and easily delete the columns with non-relevant data. Here's how it's arranged now:
```
PITG_00002 . gene 2 397 . + . ID=g.1;Name=ORF%
PITG_00002 . mRNA 2 397 . + . ID=m.1;
**PITG_00002** . exon **2 397** . + . ID=m.1.exon1;
PITG_00002 . CDS 2 397 . + . ID=cds.m.1;
PITG_00004 . gene 1 1275 . + . ID=g.3;Name=ORF%20g
PITG_00004 . mRNA 1 1275 . + . ID=m.3;
**PITG_00004** . exon **1 1275** . + . ID=m.3.exon1;P
PITG_00004 . CDS 1 1275 . + . ID=cds.m.3;P
PITG_00004 . gene 1397 1969 . + . ID=g.4;Name=
PITG_00004 . mRNA 1397 1969 . + . ID=m.4;
**PITG_00004** . exon **1397 1969** . + . ID=m.4.exon1;
PITG_00004 . CDS 1397 1969 . + . ID=cds.m.4;
```
---
So I need only the data that is in bold. For example,
```
PITG_0002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
```
Any help you could give would be greatly appreciated, thanks!
Edit: Well I messed up the formatting. Anything that is between the \*\*'s is what I need lol.
|
2013/01/11
|
[
"https://Stackoverflow.com/questions/14286480",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1784467/"
] |
It looks like your data is tab-separated.
This Perl program will print columns 1, 4 and 5 from all records that have `exon` in the third column. You need to change the file name in the `open` statement to your actual file name.
```
use strict;
use warnings;
open my $fh, '<', 'genes.gff3' or die $!;
while (<$fh>) {
chomp;
my @fields = split /\t/;
next unless @fields >= 5 and $fields[2] eq 'exon';
print join("\t", @fields[0,3,4]), "\n";
}
```
**output**
```
PITG_00002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
```
|
Here's a Perl script option `perl scriptName.pl file.gff3`:
```
use strict;
use warnings;
while (<>) {
print "@{ [ (split)[ 0, 3, 4 ] ] }\n" if /exon/;
}
```
Output:
```
PITG_00002 2 397
PITG_00004 1 1275
PITG_00004 1397 1969
```
Or you could just do the following:
```
perl -n -e 'print "@{ [ (split)[ 0, 3, 4 ] ] }\n" if /exon/' file.gff3
```
To save the data to a file:
```
use strict;
use warnings;
open my $inFH, '<', 'file.gff3' or die $!;
open my $outFH, '>>', 'data.txt' or die $!;
while (<$inFH>) {
print $outFH "@{ [ (split)[ 0, 3, 4 ] ] }\n" if /exon/;
}
```
|
6,254,713
|
How can I create a numpy matrix with its elements being a function of its indices?
For example, a multiplication table: `a[i,j] = i*j`
An un-numpy and un-pythonic way would be to create an array of zeros and then loop through it.
There is no doubt that there is a better way to do this, without a loop.
However, even better would be to create the matrix straight-away.
|
2011/06/06
|
[
"https://Stackoverflow.com/questions/6254713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/349043/"
] |
Here's one way to do that:
```
>>> indices = numpy.indices((5, 5))
>>> a = indices[0] * indices[1]
>>> a
array([[ 0, 0, 0, 0, 0],
[ 0, 1, 2, 3, 4],
[ 0, 2, 4, 6, 8],
[ 0, 3, 6, 9, 12],
[ 0, 4, 8, 12, 16]])
```
To further explain, `numpy.indices((5, 5))` generates two arrays containing the x and y indices of a 5x5 array like so:
```
>>> numpy.indices((5, 5))
array([[[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1],
[2, 2, 2, 2, 2],
[3, 3, 3, 3, 3],
[4, 4, 4, 4, 4]],
[[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]]])
```
When you multiply these two arrays, numpy multiplies the value of the two arrays at each position and returns the result.
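A closely related shortcut, for when the function is more involved than a product of indices, is `numpy.fromfunction`, which hands your function the index arrays directly (so the function must be vectorizable):

```
import numpy as np

# a[i, j] = i * j, built straight from the index function
a = np.fromfunction(lambda i, j: i * j, (5, 5), dtype=int)
```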
|
I'm away from my python at the moment, but does this one work?
```
import numpy
numpy.array([[i * j for j in xrange(5)] for i in xrange(5)])
```
|
6,254,713
|
How can I create a numpy matrix with its elements being a function of its indices?
For example, a multiplication table: `a[i,j] = i*j`
An un-numpy and un-pythonic way would be to create an array of zeros and then loop through it.
There is no doubt that there is a better way to do this, without a loop.
However, even better would be to create the matrix straight-away.
|
2011/06/06
|
[
"https://Stackoverflow.com/questions/6254713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/349043/"
] |
Here's one way to do that:
```
>>> indices = numpy.indices((5, 5))
>>> a = indices[0] * indices[1]
>>> a
array([[ 0, 0, 0, 0, 0],
[ 0, 1, 2, 3, 4],
[ 0, 2, 4, 6, 8],
[ 0, 3, 6, 9, 12],
[ 0, 4, 8, 12, 16]])
```
To further explain, `numpy.indices((5, 5))` generates two arrays containing the x and y indices of a 5x5 array like so:
```
>>> numpy.indices((5, 5))
array([[[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1],
[2, 2, 2, 2, 2],
[3, 3, 3, 3, 3],
[4, 4, 4, 4, 4]],
[[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]]])
```
When you multiply these two arrays, numpy multiplies the value of the two arrays at each position and returns the result.
|
Just wanted to add that @Senderle's response can be generalized for any function and dimension:
```
dims = (3,3,3) #i,j,k
ii = np.indices(dims)
```
You could then calculate `a[i,j,k] = i*j*k` as
```
a = np.prod(ii,axis=0)
```
or `a[i,j,k] = (i-1)*j*k`:
```
a = (ii[0,...]-1)*ii[1,...]*ii[2,...]
```
etc
|
6,254,713
|
How can I create a numpy matrix with its elements being a function of its indices?
For example, a multiplication table: `a[i,j] = i*j`
An un-numpy and un-pythonic way would be to create an array of zeros and then loop through it.
There is no doubt that there is a better way to do this, without a loop.
However, even better would be to create the matrix straight-away.
|
2011/06/06
|
[
"https://Stackoverflow.com/questions/6254713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/349043/"
] |
Here's one way to do that:
```
>>> indices = numpy.indices((5, 5))
>>> a = indices[0] * indices[1]
>>> a
array([[ 0, 0, 0, 0, 0],
[ 0, 1, 2, 3, 4],
[ 0, 2, 4, 6, 8],
[ 0, 3, 6, 9, 12],
[ 0, 4, 8, 12, 16]])
```
To further explain, `numpy.indices((5, 5))` generates two arrays containing the x and y indices of a 5x5 array like so:
```
>>> numpy.indices((5, 5))
array([[[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1],
[2, 2, 2, 2, 2],
[3, 3, 3, 3, 3],
[4, 4, 4, 4, 4]],
[[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]]])
```
When you multiply these two arrays, numpy multiplies the value of the two arrays at each position and returns the result.
|
For the multiplication
```
np.multiply.outer(np.arange(5), np.arange(5)) # a_ij = i * j
```
and in general
```
np.frompyfunc(
    lambda i, j: f(i, j), 2, 1
).outer(
    np.arange(5),
    np.arange(5),
).astype(np.float64)  # a_ij = f(i, j)
```
Basically, you create an `np.ufunc` via `np.frompyfunc` and then apply its `outer` method to the index arrays.
Edit
----
Speed comparison of the different solutions.
Small matrices:
```
In [1]: %timeit np.multiply.outer(np.arange(5), np.arange(5))
100000 loops, best of 3: 4.97 µs per loop
In [2]: %timeit np.array( [ [ i*j for j in xrange(5)] for i in xrange(5)] )
100000 loops, best of 3: 5.51 µs per loop
In [3]: %timeit indices = np.indices((5, 5)); indices[0] * indices[1]
100000 loops, best of 3: 16.1 µs per loop
```
Bigger matrices:
```
In [4]: %timeit np.multiply.outer(np.arange(4096), np.arange(4096))
10 loops, best of 3: 62.4 ms per loop
In [5]: %timeit indices = np.indices((4096, 4096)); indices[0] * indices[1]
10 loops, best of 3: 165 ms per loop
In [6]: %timeit np.array( [ [ i*j for j in xrange(4096)] for i in xrange(4096)] )
1 loops, best of 3: 1.39 s per loop
```
|
6,254,713
|
How can I create a numpy matrix with its elements being a function of its indices?
For example, a multiplication table: `a[i,j] = i*j`
An un-numpy and un-pythonic way would be to create an array of zeros and then loop through it.
There is no doubt that there is a better way to do this, without a loop.
However, even better would be to create the matrix straight-away.
|
2011/06/06
|
[
"https://Stackoverflow.com/questions/6254713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/349043/"
] |
A generic solution would be to use [np.fromfunction()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfunction.html)
From the doc:
>
> `numpy.fromfunction(function, shape, **kwargs)`
>
>
> Construct an array by executing a function over each coordinate. The
> resulting array therefore has a value fn(x, y, z) at coordinate (x, y,
> z).
>
>
>
The below snippet should provide the required matrix.
```
import numpy as np
np.fromfunction(lambda i, j: i*j, (5,5))
```
Output:
```
array([[ 0., 0., 0., 0., 0.],
[ 0., 1., 2., 3., 4.],
[ 0., 2., 4., 6., 8.],
[ 0., 3., 6., 9., 12.],
[ 0., 4., 8., 12., 16.]])
```
The first parameter to the function is a callable which is executed for each of the coordinates. If `foo` is a function that you pass as the first argument, `foo(i,j)` will be the value at `(i,j)`. This holds for higher dimensions too. The shape of the coordinate array can be modified using the `shape` parameter.
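To illustrate the higher-dimensional case mentioned above, a small sketch (my own example, not from the original answer):
```
import numpy as np

# fromfunction passes one index array per axis; any broadcasting
# expression over them works in three dimensions just as well.
a = np.fromfunction(lambda i, j, k: i * 100 + j * 10 + k, (2, 3, 4))
print(a[1, 2, 3])  # 123.0
```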
**Edit:**
Based on the comment on using custom functions like `lambda x,y: 2*x if x > y else y/2`, the following code works:
```
import numpy as np
def generic_f(shape, elementwise_f):
    fv = np.vectorize(elementwise_f)
    return np.fromfunction(fv, shape)

def elementwise_f(x, y):
    return 2*x if x > y else y/2

print(generic_f((5, 5), elementwise_f))
```
Output:
```
[[0. 0.5 1. 1.5 2. ]
[2. 0.5 1. 1.5 2. ]
[4. 4. 1. 1.5 2. ]
[6. 6. 6. 1.5 2. ]
[8. 8. 8. 8. 2. ]]
```
The user is expected to pass a scalar function that defines the elementwise operation. [np.vectorize](https://numpy.org/doc/stable/reference/generated/numpy.vectorize.html#numpy-vectorize) is used to vectorize the user-defined scalar function and is passed to np.fromfunction().
|
Here's one way to do that:
```
>>> indices = numpy.indices((5, 5))
>>> a = indices[0] * indices[1]
>>> a
array([[ 0, 0, 0, 0, 0],
[ 0, 1, 2, 3, 4],
[ 0, 2, 4, 6, 8],
[ 0, 3, 6, 9, 12],
[ 0, 4, 8, 12, 16]])
```
To further explain, `numpy.indices((5, 5))` generates two arrays containing the x and y indices of a 5x5 array like so:
```
>>> numpy.indices((5, 5))
array([[[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1],
[2, 2, 2, 2, 2],
[3, 3, 3, 3, 3],
[4, 4, 4, 4, 4]],
[[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]]])
```
When you multiply these two arrays, numpy multiplies the value of the two arrays at each position and returns the result.
|
6,254,713
|
How can I create a numpy matrix with its elements being a function of its indices?
For example, a multiplication table: `a[i,j] = i*j`
An un-numpy and un-pythonic way would be to create an array of zeros and then loop through it.
There is no doubt that there is a better way to do this, without a loop.
However, even better would be to create the matrix straight-away.
|
2011/06/06
|
[
"https://Stackoverflow.com/questions/6254713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/349043/"
] |
A generic solution would be to use [np.fromfunction()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfunction.html)
From the doc:
>
> `numpy.fromfunction(function, shape, **kwargs)`
>
>
> Construct an array by executing a function over each coordinate. The
> resulting array therefore has a value fn(x, y, z) at coordinate (x, y,
> z).
>
>
>
The below snippet should provide the required matrix.
```
import numpy as np
np.fromfunction(lambda i, j: i*j, (5,5))
```
Output:
```
array([[ 0., 0., 0., 0., 0.],
[ 0., 1., 2., 3., 4.],
[ 0., 2., 4., 6., 8.],
[ 0., 3., 6., 9., 12.],
[ 0., 4., 8., 12., 16.]])
```
The first parameter to the function is a callable which is executed for each of the coordinates. If `foo` is a function that you pass as the first argument, `foo(i,j)` will be the value at `(i,j)`. This holds for higher dimensions too. The shape of the coordinate array can be modified using the `shape` parameter.
**Edit:**
Based on the comment on using custom functions like `lambda x,y: 2*x if x > y else y/2`, the following code works:
```
import numpy as np
def generic_f(shape, elementwise_f):
    fv = np.vectorize(elementwise_f)
    return np.fromfunction(fv, shape)

def elementwise_f(x, y):
    return 2*x if x > y else y/2

print(generic_f((5, 5), elementwise_f))
```
Output:
```
[[0. 0.5 1. 1.5 2. ]
[2. 0.5 1. 1.5 2. ]
[4. 4. 1. 1.5 2. ]
[6. 6. 6. 1.5 2. ]
[8. 8. 8. 8. 2. ]]
```
The user is expected to pass a scalar function that defines the elementwise operation. [np.vectorize](https://numpy.org/doc/stable/reference/generated/numpy.vectorize.html#numpy-vectorize) is used to vectorize the user-defined scalar function and is passed to np.fromfunction().
|
I'm away from my python at the moment, but does this one work?
```
array( [ [ i*j for j in xrange(5)] for i in xrange(5)] )
```
|
6,254,713
|
How can I create a numpy matrix with its elements being a function of its indices?
For example, a multiplication table: `a[i,j] = i*j`
An un-numpy and un-pythonic way would be to create an array of zeros and then loop through it.
There is no doubt that there is a better way to do this, without a loop.
However, even better would be to create the matrix straight-away.
|
2011/06/06
|
[
"https://Stackoverflow.com/questions/6254713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/349043/"
] |
A generic solution would be to use [np.fromfunction()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfunction.html)
From the doc:
>
> `numpy.fromfunction(function, shape, **kwargs)`
>
>
> Construct an array by executing a function over each coordinate. The
> resulting array therefore has a value fn(x, y, z) at coordinate (x, y,
> z).
>
>
>
The below snippet should provide the required matrix.
```
import numpy as np
np.fromfunction(lambda i, j: i*j, (5,5))
```
Output:
```
array([[ 0., 0., 0., 0., 0.],
[ 0., 1., 2., 3., 4.],
[ 0., 2., 4., 6., 8.],
[ 0., 3., 6., 9., 12.],
[ 0., 4., 8., 12., 16.]])
```
The first parameter to the function is a callable which is executed for each of the coordinates. If `foo` is a function that you pass as the first argument, `foo(i,j)` will be the value at `(i,j)`. This holds for higher dimensions too. The shape of the coordinate array can be modified using the `shape` parameter.
**Edit:**
Based on the comment on using custom functions like `lambda x,y: 2*x if x > y else y/2`, the following code works:
```
import numpy as np
def generic_f(shape, elementwise_f):
    fv = np.vectorize(elementwise_f)
    return np.fromfunction(fv, shape)

def elementwise_f(x, y):
    return 2*x if x > y else y/2

print(generic_f((5, 5), elementwise_f))
```
Output:
```
[[0. 0.5 1. 1.5 2. ]
[2. 0.5 1. 1.5 2. ]
[4. 4. 1. 1.5 2. ]
[6. 6. 6. 1.5 2. ]
[8. 8. 8. 8. 2. ]]
```
The user is expected to pass a scalar function that defines the elementwise operation. [np.vectorize](https://numpy.org/doc/stable/reference/generated/numpy.vectorize.html#numpy-vectorize) is used to vectorize the user-defined scalar function and is passed to np.fromfunction().
|
Just wanted to add that @Senderle's response can be generalized for any function and dimension:
```
dims = (3,3,3) #i,j,k
ii = np.indices(dims)
```
You could then calculate `a[i,j,k] = i*j*k` as
```
a = np.prod(ii,axis=0)
```
or `a[i,j,k] = (i-1)*j*k`:
```
a = (ii[0,...]-1)*ii[1,...]*ii[2,...]
```
etc
|
6,254,713
|
How can I create a numpy matrix with its elements being a function of its indices?
For example, a multiplication table: `a[i,j] = i*j`
An un-numpy and un-pythonic way would be to create an array of zeros and then loop through it.
There is no doubt that there is a better way to do this, without a loop.
However, even better would be to create the matrix straight-away.
|
2011/06/06
|
[
"https://Stackoverflow.com/questions/6254713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/349043/"
] |
A generic solution would be to use [np.fromfunction()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfunction.html)
From the doc:
>
> `numpy.fromfunction(function, shape, **kwargs)`
>
>
> Construct an array by executing a function over each coordinate. The
> resulting array therefore has a value fn(x, y, z) at coordinate (x, y,
> z).
>
>
>
The below snippet should provide the required matrix.
```
import numpy as np
np.fromfunction(lambda i, j: i*j, (5,5))
```
Output:
```
array([[ 0., 0., 0., 0., 0.],
[ 0., 1., 2., 3., 4.],
[ 0., 2., 4., 6., 8.],
[ 0., 3., 6., 9., 12.],
[ 0., 4., 8., 12., 16.]])
```
The first parameter to the function is a callable which is executed for each of the coordinates. If `foo` is a function that you pass as the first argument, `foo(i,j)` will be the value at `(i,j)`. This holds for higher dimensions too. The shape of the coordinate array can be modified using the `shape` parameter.
**Edit:**
Based on the comment on using custom functions like `lambda x,y: 2*x if x > y else y/2`, the following code works:
```
import numpy as np
def generic_f(shape, elementwise_f):
    fv = np.vectorize(elementwise_f)
    return np.fromfunction(fv, shape)

def elementwise_f(x, y):
    return 2*x if x > y else y/2

print(generic_f((5, 5), elementwise_f))
```
Output:
```
[[0. 0.5 1. 1.5 2. ]
[2. 0.5 1. 1.5 2. ]
[4. 4. 1. 1.5 2. ]
[6. 6. 6. 1.5 2. ]
[8. 8. 8. 8. 2. ]]
```
The user is expected to pass a scalar function that defines the elementwise operation. [np.vectorize](https://numpy.org/doc/stable/reference/generated/numpy.vectorize.html#numpy-vectorize) is used to vectorize the user-defined scalar function and is passed to np.fromfunction().
|
For the multiplication
```
np.multiply.outer(np.arange(5), np.arange(5)) # a_ij = i * j
```
and in general
```
np.frompyfunc(
    lambda i, j: f(i, j), 2, 1
).outer(
    np.arange(5),
    np.arange(5),
).astype(np.float64)  # a_ij = f(i, j)
```
Basically, you create an `np.ufunc` via `np.frompyfunc` and then apply its `outer` method to the index arrays.
Edit
----
Speed comparison of the different solutions.
Small matrices:
```
In [1]: %timeit np.multiply.outer(np.arange(5), np.arange(5))
100000 loops, best of 3: 4.97 µs per loop
In [2]: %timeit np.array( [ [ i*j for j in xrange(5)] for i in xrange(5)] )
100000 loops, best of 3: 5.51 µs per loop
In [3]: %timeit indices = np.indices((5, 5)); indices[0] * indices[1]
100000 loops, best of 3: 16.1 µs per loop
```
Bigger matrices:
```
In [4]: %timeit np.multiply.outer(np.arange(4096), np.arange(4096))
10 loops, best of 3: 62.4 ms per loop
In [5]: %timeit indices = np.indices((4096, 4096)); indices[0] * indices[1]
10 loops, best of 3: 165 ms per loop
In [6]: %timeit np.array( [ [ i*j for j in xrange(4096)] for i in xrange(4096)] )
1 loops, best of 3: 1.39 s per loop
```
|
6,975,808
|
I like to store my data after a longish python program as dictionaries in a new script. This then allows me to import the program (and hence data) easily for further manipulation.
I write something like this (an old example):
```
file = open(p['results']+'asa_contacts.py','w')
print>>file, \
'''
\'''
This file stores the contact residues according to changes in ASA
as a dictionary
\'''
d = {}
'''
```
followed by a lot of faffing around entering the dictionary code as a string:
```
print>>file, 'd[\'%s\'] = {}' %st
```
I was wondering if there was a module which did this automatically as it would save me a lot of time.
Thank you
Edit: it may be useful to know that these dictionaries are usually several layers deep like this one I'm using today:
```
d[ratio][bound][charge] = a_list
```
|
2011/08/07
|
[
"https://Stackoverflow.com/questions/6975808",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/299000/"
] |
Unless there's a specific reason that you need source code -- and I suspect there isn't, you just want to serialize and deserialize data from disk -- a better option would be Python's [`pickle` module](http://docs.python.org/library/pickle.html).
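A minimal sketch of that approach (the file name and dictionary contents are made up for illustration):
```
import pickle

# A nested dictionary like the ones described in the question.
d = {'0.5': {'bound': {'+1': [1.2, 3.4]}}}

# Serialize to disk...
with open('asa_contacts.pkl', 'wb') as fh:
    pickle.dump(d, fh)

# ...and load it back later; no generated .py module is needed.
with open('asa_contacts.pkl', 'rb') as fh:
    d2 = pickle.load(fh)

assert d2 == d
```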
|
[Lossy's suggestion](https://stackoverflow.com/questions/6975808/outputting-data-as-dictionaries-in-a-new-python-file/6975843#6975843) of `repr` happens to work, but `repr` isn't *specifically* designed for serialization. I think it would be slightly more robust to use a tool designed for that purpose; and since you want something that's human-readable and -editable, `json` is the obvious choice.
```
>>> import json
>>> animals = {'a':'aardwolf', 'b':'beluga', 'c':'civet', 'd':'dik-dik',
'e':'echidna', 'f':'fennec', 'g':'goa', 'h':'hyrax',
'i':'impala', 'j':'javelina', 'k':'kudu', 'l':'lemur',
'm':'macaque', 'n':'nutria', 'o':'orca', 'p':'peccary',
'q':'quagga', 'r':'reebok', 's':'serval', 't':'tenrec',
'u':'urial', 'v':'vole', 'w':'wallaroo', 'x':'xenurine',
'y':'yapok', 'z':'zoologist'}
>>> s = json.dumps(animals)
>>> s[:60] + '...'
'{"a": "aardwolf", "c": "civet", "b": "beluga", "e": "echidna...'
>>> animals = json.loads(s)
>>> animals['w']
u'wallaroo'
```
|
6,975,808
|
I like to store my data after a longish python program as dictionaries in a new script. This then allows me to import the program (and hence data) easily for further manipulation.
I write something like this (an old example):
```
file = open(p['results']+'asa_contacts.py','w')
print>>file, \
'''
\'''
This file stores the contact residues according to changes in ASA
as a dictionary
\'''
d = {}
'''
```
followed by a lot of faffing around entering the dictionary code as a string:
```
print>>file, 'd[\'%s\'] = {}' %st
```
I was wondering if there was a module which did this automatically as it would save me a lot of time.
Thank you
Edit: it may be useful to know that these dictionaries are usually several layers deep like this one I'm using today:
```
d[ratio][bound][charge] = a_list
```
|
2011/08/07
|
[
"https://Stackoverflow.com/questions/6975808",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/299000/"
] |
I am not sure if this is what you are looking for, but try the built-in function `repr`:
```
repr(a)
```
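For example, a sketch of how `repr` could replace the hand-written string formatting (the file and variable names are made up):
```
d = {'ratio': {'bound': {'charge': [1, 2, 3]}}}

# Write the dictionary literal into a new module...
with open('results_data.py', 'w') as fh:
    fh.write('d = %s\n' % repr(d))

# ...which can later be imported as usual:
# from results_data import d
```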
|
[Lossy's suggestion](https://stackoverflow.com/questions/6975808/outputting-data-as-dictionaries-in-a-new-python-file/6975843#6975843) of `repr` happens to work, but `repr` isn't *specifically* designed for serialization. I think it would be slightly more robust to use a tool designed for that purpose; and since you want something that's human-readable and -editable, `json` is the obvious choice.
```
>>> import json
>>> animals = {'a':'aardwolf', 'b':'beluga', 'c':'civet', 'd':'dik-dik',
'e':'echidna', 'f':'fennec', 'g':'goa', 'h':'hyrax',
'i':'impala', 'j':'javelina', 'k':'kudu', 'l':'lemur',
'm':'macaque', 'n':'nutria', 'o':'orca', 'p':'peccary',
'q':'quagga', 'r':'reebok', 's':'serval', 't':'tenrec',
'u':'urial', 'v':'vole', 'w':'wallaroo', 'x':'xenurine',
'y':'yapok', 'z':'zoologist'}
>>> s = json.dumps(animals)
>>> s[:60] + '...'
'{"a": "aardwolf", "c": "civet", "b": "beluga", "e": "echidna...'
>>> animals = json.loads(s)
>>> animals['w']
u'wallaroo'
```
|
48,765,260
|
Object oriented python, I want to create a static method to convert hours, minutes and seconds into seconds.
I create a class called Duration:
```
class Duration:
    def __init__(self, hours, minutes, seconds):
        self.hours = hours
        self.minutes = minutes
        self.seconds = seconds
```
I then create a variable named duration1 where I give it the numbers
```
duration1 = Duration(29, 7, 10)
```
I have a method called info which checks if any of the numbers is less than 10 and, if so, adds a "0" in front of it; later I convert the values back into ints.
```
def info(self):
    if self.hours < 10:
        self.hours = "0" + str(self.hours)
    elif self.minutes < 10:
        self.minutes = "0" + str(self.minutes)
    elif self.seconds < 10:
        self.seconds = "0" + str(self.seconds)
    info_string = str(self.hours) + "-" + str(self.minutes) + "-" + str(self.seconds)
    self.hours = int(self.hours)
    self.minutes = int(self.minutes)
    self.seconds = int(self.seconds)
    return info_string
```
And now I want to create a static method to convert these values into seconds and return the value, so I can call it with duration1 (at least I think that's how I should do it?).
```
@staticmethod
def static_method(self):
    hour_to_seconds = self.hours * 3600
    minutes_to_seconds = self.minutes * 60
    converted_to_seconds = hours_to_seconds + minutes_to_seconds \
                           + self.seconds
    return converted_to_seconds

duration1.static_method(duration1.info())
```
I guess my question is how would I use a static method with this code to change the numbers? I should also tell you this is a school assignment so I have to use a static method to solve the problem. Any help is greatly appreciated!
Error message says this:
```
hour_to_seconds = self.hours * 3600
```
AttributeError: 'str' object has no attribute 'hours'
|
2018/02/13
|
[
"https://Stackoverflow.com/questions/48765260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6715237/"
] |
You're almost there.
Instead of passing the result of `duration1.info()` to `duration1.static_method()`, you simply pass the whole object:
```
duration1 = Duration(29, 7, 10)
print(duration1.info()) # 29-07-10
print(duration1.static_method(duration1)) # 104830
```
And since it's a static method, you could just as well call it from the class instead of the instance (which is the reason why one wants a static method - you don't have to create an instance):
```
print(Duration.static_method(duration1))
```
By the way, as a convention `self` should not be used as the parameter name if you're using it in a static method, but it works anyway, because - as I said - it's just a convention. It's not as if `self` is a "magic" variable.
But you should consider renaming the parameter.
By the way, you have a typo in `static_method`: you define the variable `hour_to_seconds` but then access `hours_to_seconds` - you need to decide on one.
|
A static method is *explicitly* a function which is tied to a class rather than an instance. `self`, by contrast, represents an instance of a class in Python.
These are not simpatico.
You can make your code work by passing `duration1` to your static method and changing the name of its accepted parameter from `self` to `duration` or similar.
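Putting both answers together, a minimal sketch of the corrected class (the method name `to_seconds` is my own choice, not from the original code):
```
class Duration:
    def __init__(self, hours, minutes, seconds):
        self.hours = hours
        self.minutes = minutes
        self.seconds = seconds

    @staticmethod
    def to_seconds(duration):
        # 'duration' is an ordinary parameter here, not the implicit instance.
        return duration.hours * 3600 + duration.minutes * 60 + duration.seconds

duration1 = Duration(29, 7, 10)
print(Duration.to_seconds(duration1))  # 104830
```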
|
33,432,426
|
I am trying to import the `requests` module, but I got this error.
My Python version is 3.4, running on Ubuntu 14.04.
```
>>> import requests
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 10, in <module>
from queue import LifoQueue, Empty, Full
ImportError: cannot import name 'LifoQueue'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/requests/__init__.py", line 58, in <module>
from . import utils
File "/usr/local/lib/python3.4/dist-packages/requests/utils.py", line 26, in <module>
from .compat import parse_http_list as _parse_list_header
File "/usr/local/lib/python3.4/dist-packages/requests/compat.py", line 7, in <module>
from .packages import chardet
File "/usr/local/lib/python3.4/dist-packages/requests/packages/__init__.py", line 3, in <module>
from . import urllib3
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/__init__.py", line 10, in <module>
from .connectionpool import (
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 12, in <module>
from Queue import LifoQueue, Empty, Full
ImportError: No module named 'Queue'
```
|
2015/10/30
|
[
"https://Stackoverflow.com/questions/33432426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5104452/"
] |
`import queue` is **lowercase** `q` in Python 3.
Change `Q` to `q` and it will be fine.
(See code in <https://stackoverflow.com/a/29688081/632951> for smart switching.)
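For instance, the failing import from the traceback, written for Python 3:
```
# Python 3: the stdlib module name is lowercase.
from queue import LifoQueue, Empty, Full
```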
|
It's because of the Python version. In Python 2.x it's `import Queue as queue`, whereas in Python 3 it's `import queue`. If you want it to work in both environments, you can use the snippet below, as mentioned [here](https://stackoverflow.com/questions/46363871/no-module-named-queue?noredirect=1&lq=1):
```
try:
    import queue
except ImportError:
    import Queue as queue
```
|
33,432,426
|
I am trying to import the `requests` module, but I got this error.
My Python version is 3.4, running on Ubuntu 14.04.
```
>>> import requests
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 10, in <module>
from queue import LifoQueue, Empty, Full
ImportError: cannot import name 'LifoQueue'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/requests/__init__.py", line 58, in <module>
from . import utils
File "/usr/local/lib/python3.4/dist-packages/requests/utils.py", line 26, in <module>
from .compat import parse_http_list as _parse_list_header
File "/usr/local/lib/python3.4/dist-packages/requests/compat.py", line 7, in <module>
from .packages import chardet
File "/usr/local/lib/python3.4/dist-packages/requests/packages/__init__.py", line 3, in <module>
from . import urllib3
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/__init__.py", line 10, in <module>
from .connectionpool import (
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 12, in <module>
from Queue import LifoQueue, Empty, Full
ImportError: No module named 'Queue'
```
|
2015/10/30
|
[
"https://Stackoverflow.com/questions/33432426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5104452/"
] |
I solved the problem: my issue was that I had a file named queue.py in the same directory.
|
It's because of the Python version. In Python 2.x it's `import Queue as queue`, whereas in Python 3 it's `import queue`. If you want it to work in both environments, you can use the snippet below, as mentioned [here](https://stackoverflow.com/questions/46363871/no-module-named-queue?noredirect=1&lq=1):
```
try:
    import queue
except ImportError:
    import Queue as queue
```
|
33,432,426
|
I am trying to import the `requests` module, but I got this error.
My Python version is 3.4, running on Ubuntu 14.04.
```
>>> import requests
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 10, in <module>
from queue import LifoQueue, Empty, Full
ImportError: cannot import name 'LifoQueue'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/requests/__init__.py", line 58, in <module>
from . import utils
File "/usr/local/lib/python3.4/dist-packages/requests/utils.py", line 26, in <module>
from .compat import parse_http_list as _parse_list_header
File "/usr/local/lib/python3.4/dist-packages/requests/compat.py", line 7, in <module>
from .packages import chardet
File "/usr/local/lib/python3.4/dist-packages/requests/packages/__init__.py", line 3, in <module>
from . import urllib3
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/__init__.py", line 10, in <module>
from .connectionpool import (
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 12, in <module>
from Queue import LifoQueue, Empty, Full
ImportError: No module named 'Queue'
```
|
2015/10/30
|
[
"https://Stackoverflow.com/questions/33432426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5104452/"
] |
Queue is in the multiprocessing module so:
```
from multiprocessing import Queue
```
|
In my case it should be:
`from multiprocessing import JoinableQueue`
In Python 2, `Queue` has methods like `.task_done()`, but in Python 3 `multiprocessing.Queue` doesn't have this method, while `multiprocessing.JoinableQueue` does.
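A minimal sketch of the difference (my own example):
```
from multiprocessing import JoinableQueue

q = JoinableQueue()
q.put('job')
item = q.get()
# ... process item ...
q.task_done()  # exists on JoinableQueue, unlike multiprocessing.Queue
q.join()       # blocks until every put() has been matched by a task_done()
```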
|
33,432,426
|
I am trying to import the `requests` module, but I got this error.
My Python version is 3.4, running on Ubuntu 14.04.
```
>>> import requests
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 10, in <module>
from queue import LifoQueue, Empty, Full
ImportError: cannot import name 'LifoQueue'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/requests/__init__.py", line 58, in <module>
from . import utils
File "/usr/local/lib/python3.4/dist-packages/requests/utils.py", line 26, in <module>
from .compat import parse_http_list as _parse_list_header
File "/usr/local/lib/python3.4/dist-packages/requests/compat.py", line 7, in <module>
from .packages import chardet
File "/usr/local/lib/python3.4/dist-packages/requests/packages/__init__.py", line 3, in <module>
from . import urllib3
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/__init__.py", line 10, in <module>
from .connectionpool import (
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 12, in <module>
from Queue import LifoQueue, Empty, Full
ImportError: No module named 'Queue'
```
|
2015/10/30
|
[
"https://Stackoverflow.com/questions/33432426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5104452/"
] |
It's because of the Python version. In Python 2.x it's `import Queue as queue`, whereas in Python 3 it's `import queue`. If you want it to work in both environments, you can use the snippet below, as mentioned [here](https://stackoverflow.com/questions/46363871/no-module-named-queue?noredirect=1&lq=1):
```
try:
    import queue
except ImportError:
    import Queue as queue
```
|
I ran into the same problem and learned that the `queue` module defines the classes and exceptions that make up its public interface (Queue objects). For example:
```
workQueue = queue.Queue(10)
```
|
33,432,426
|
I am trying to import the `requests` module, but I got this error.
My Python version is 3.4, running on Ubuntu 14.04.
```
>>> import requests
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 10, in <module>
from queue import LifoQueue, Empty, Full
ImportError: cannot import name 'LifoQueue'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/requests/__init__.py", line 58, in <module>
from . import utils
File "/usr/local/lib/python3.4/dist-packages/requests/utils.py", line 26, in <module>
from .compat import parse_http_list as _parse_list_header
File "/usr/local/lib/python3.4/dist-packages/requests/compat.py", line 7, in <module>
from .packages import chardet
File "/usr/local/lib/python3.4/dist-packages/requests/packages/__init__.py", line 3, in <module>
from . import urllib3
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/__init__.py", line 10, in <module>
from .connectionpool import (
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 12, in <module>
from Queue import LifoQueue, Empty, Full
ImportError: No module named 'Queue'
```
|
2015/10/30
|
[
"https://Stackoverflow.com/questions/33432426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5104452/"
] |
I solved the problem: my issue was that I had a file named queue.py in the same directory.
|
I just copied the file Queue.py in `*/lib/python2.7/` to queue.py and that solved my problem.
|
33,432,426
|
I am trying to import the `requests` module, but I got this error.
My Python version is 3.4, running on Ubuntu 14.04.
```
>>> import requests
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 10, in <module>
from queue import LifoQueue, Empty, Full
ImportError: cannot import name 'LifoQueue'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/requests/__init__.py", line 58, in <module>
from . import utils
File "/usr/local/lib/python3.4/dist-packages/requests/utils.py", line 26, in <module>
from .compat import parse_http_list as _parse_list_header
File "/usr/local/lib/python3.4/dist-packages/requests/compat.py", line 7, in <module>
from .packages import chardet
File "/usr/local/lib/python3.4/dist-packages/requests/packages/__init__.py", line 3, in <module>
from . import urllib3
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/__init__.py", line 10, in <module>
from .connectionpool import (
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 12, in <module>
from Queue import LifoQueue, Empty, Full
ImportError: No module named 'Queue'
```
|
2015/10/30
|
[
"https://Stackoverflow.com/questions/33432426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5104452/"
] |
It's because of the Python version. In Python 2.x it's `import Queue as queue`, whereas in Python 3 it's `import queue`. If you want it to work in both environments, you can use the snippet below, as mentioned [here](https://stackoverflow.com/questions/46363871/no-module-named-queue?noredirect=1&lq=1):
```
try:
    import queue
except ImportError:
    import Queue as queue
```
|
In my case it should be:
`from multiprocessing import JoinableQueue`
In Python 2, `Queue` has methods like `.task_done()`, but in Python 3 `multiprocessing.Queue` doesn't have this method, while `multiprocessing.JoinableQueue` does.
|
33,432,426
|
I am trying to import the `requests` module, but I got this error.
My Python version is 3.4, running on Ubuntu 14.04.
```
>>> import requests
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 10, in <module>
from queue import LifoQueue, Empty, Full
ImportError: cannot import name 'LifoQueue'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/requests/__init__.py", line 58, in <module>
from . import utils
File "/usr/local/lib/python3.4/dist-packages/requests/utils.py", line 26, in <module>
from .compat import parse_http_list as _parse_list_header
File "/usr/local/lib/python3.4/dist-packages/requests/compat.py", line 7, in <module>
from .packages import chardet
File "/usr/local/lib/python3.4/dist-packages/requests/packages/__init__.py", line 3, in <module>
from . import urllib3
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/__init__.py", line 10, in <module>
from .connectionpool import (
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 12, in <module>
from Queue import LifoQueue, Empty, Full
ImportError: No module named 'Queue'
```
|
2015/10/30
|
[
"https://Stackoverflow.com/questions/33432426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5104452/"
] |
Queue is in the multiprocessing module so:
```
from multiprocessing import Queue
```
|
It's because of the Python version. In Python 2.x it's `import Queue as queue`, whereas in Python 3 it's `import queue`. If you want it to work in both environments, you can use the snippet below, as mentioned [here](https://stackoverflow.com/questions/46363871/no-module-named-queue?noredirect=1&lq=1):
```
try:
    import queue
except ImportError:
    import Queue as queue
```
|
33,432,426
|
I am trying to import the `requests` module, but I got this error.
My Python version is 3.4, running on Ubuntu 14.04.
```
>>> import requests
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 10, in <module>
from queue import LifoQueue, Empty, Full
ImportError: cannot import name 'LifoQueue'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/requests/__init__.py", line 58, in <module>
from . import utils
File "/usr/local/lib/python3.4/dist-packages/requests/utils.py", line 26, in <module>
from .compat import parse_http_list as _parse_list_header
File "/usr/local/lib/python3.4/dist-packages/requests/compat.py", line 7, in <module>
from .packages import chardet
File "/usr/local/lib/python3.4/dist-packages/requests/packages/__init__.py", line 3, in <module>
from . import urllib3
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/__init__.py", line 10, in <module>
from .connectionpool import (
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 12, in <module>
from Queue import LifoQueue, Empty, Full
ImportError: No module named 'Queue'
```
|
2015/10/30
|
[
"https://Stackoverflow.com/questions/33432426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5104452/"
] |
`import queue` is **lowercase** `q` in Python 3.
Change `Q` to `q` and it will be fine.
(See code in <https://stackoverflow.com/a/29688081/632951> for smart switching.)
|
I just copied the file Queue.py in `*/lib/python2.7/` to queue.py and that solved my problem.
|
33,432,426
|
I am trying to import the `requests` module, but I got this error.
My Python version is 3.4, running on Ubuntu 14.04.
```
>>> import requests
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 10, in <module>
from queue import LifoQueue, Empty, Full
ImportError: cannot import name 'LifoQueue'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/requests/__init__.py", line 58, in <module>
from . import utils
File "/usr/local/lib/python3.4/dist-packages/requests/utils.py", line 26, in <module>
from .compat import parse_http_list as _parse_list_header
File "/usr/local/lib/python3.4/dist-packages/requests/compat.py", line 7, in <module>
from .packages import chardet
File "/usr/local/lib/python3.4/dist-packages/requests/packages/__init__.py", line 3, in <module>
from . import urllib3
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/__init__.py", line 10, in <module>
from .connectionpool import (
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 12, in <module>
from Queue import LifoQueue, Empty, Full
ImportError: No module named 'Queue'
```
|
2015/10/30
|
[
"https://Stackoverflow.com/questions/33432426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5104452/"
] |
I solved the problem: my issue was that I had a file named queue.py in the same directory.
|
In my case it should be:
`from multiprocessing import JoinableQueue`
In Python 2, `Queue` has methods like `.task_done()`, but in Python 3 `multiprocessing.Queue` doesn't have this method, while `multiprocessing.JoinableQueue` does.
|
33,432,426
|
I am trying to import the `requests` module, but I got this error.
My Python version is 3.4, running on Ubuntu 14.04.
```
>>> import requests
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 10, in <module>
from queue import LifoQueue, Empty, Full
ImportError: cannot import name 'LifoQueue'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/requests/__init__.py", line 58, in <module>
from . import utils
File "/usr/local/lib/python3.4/dist-packages/requests/utils.py", line 26, in <module>
from .compat import parse_http_list as _parse_list_header
File "/usr/local/lib/python3.4/dist-packages/requests/compat.py", line 7, in <module>
from .packages import chardet
File "/usr/local/lib/python3.4/dist-packages/requests/packages/__init__.py", line 3, in <module>
from . import urllib3
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/__init__.py", line 10, in <module>
from .connectionpool import (
File "/usr/local/lib/python3.4/dist-packages/requests/packages/urllib3/connectionpool.py", line 12, in <module>
from Queue import LifoQueue, Empty, Full
ImportError: No module named 'Queue'
```
|
2015/10/30
|
[
"https://Stackoverflow.com/questions/33432426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5104452/"
] |
Queue is in the multiprocessing module so:
```
from multiprocessing import Queue
```
|
I just copied the file Queue.py in `*/lib/python2.7/` to queue.py and that solved my problem.
|
57,229,074
|
I am trying to import an existing database into my **Django** project, so I run `python manage.py migrate --fake-initial`, but I get this error:
```
Operations to perform:
  Apply all migrations: ExcursionsManagerApp, GeneralApp, InvoicesManagerApp, OperationsManagerApp, PaymentsManagerApp, RatesManagerApp, ReportsManagerApp, ReservationsManagerApp, UsersManagerApp, admin, auth, authtoken, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... FAKED
  Applying auth.0001_initial... FAKED
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying GeneralApp.0001_initial...Traceback (most recent call last):
  File "/Users/hugovillalobos/Documents/Code/IntellibookWebProject/IntellibookWebVenv/lib/python3.6/site-packages/django/db/backends/utils.py", line 83, in _execute
    return self.cursor.execute(sql)
psycopg2.ProgrammingError: relation "GeneralApp_airport" already exists
```
Of course all the tables already exist in the database, that is the reason why I use `--fake-initial`, that is supposed to **fake** the creation of database objects.
Why is `migrate` attempting to create the table `GeneralApp_airport` instead of faking it?
|
2019/07/27
|
[
"https://Stackoverflow.com/questions/57229074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6843153/"
] |
Just by running the following [command](https://docs.djangoproject.com/en/3.1/ref/django-admin/#cmdoption-migrate-fake) would solve the problem:
```
python manage.py migrate --fake
```
|
Steps you can follow to make migrations from an existing database. First, empty the Django migrations table in the database:
```sql
delete from django_migrations
```
1. Remove migrations from your migrations folder for the app
```sh
rm -rf <app>/migrations/
```
2. Reset the migrations for built-in apps (like admin)
```sh
python manage.py migrate --fake
```
3. Create initial migration for each and every app
```sh
python manage.py makemigrations <app>
```
4. The final step is to create fake initial migrations
```sh
python manage.py migrate --fake-initial
```
|
57,229,074
|
I am trying to import an existing database into my **Django** project, so I run `python manage.py migrate --fake-initial`, but I get this error:
```
Operations to perform:
  Apply all migrations: ExcursionsManagerApp, GeneralApp, InvoicesManagerApp, OperationsManagerApp, PaymentsManagerApp, RatesManagerApp, ReportsManagerApp, ReservationsManagerApp, UsersManagerApp, admin, auth, authtoken, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... FAKED
  Applying auth.0001_initial... FAKED
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying GeneralApp.0001_initial...Traceback (most recent call last):
  File "/Users/hugovillalobos/Documents/Code/IntellibookWebProject/IntellibookWebVenv/lib/python3.6/site-packages/django/db/backends/utils.py", line 83, in _execute
    return self.cursor.execute(sql)
psycopg2.ProgrammingError: relation "GeneralApp_airport" already exists
```
Of course all the tables already exist in the database, that is the reason why I use `--fake-initial`, that is supposed to **fake** the creation of database objects.
Why is `migrate` attempting to create the table `GeneralApp_airport` instead of faking it?
|
2019/07/27
|
[
"https://Stackoverflow.com/questions/57229074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6843153/"
] |
`--fake-initial` can't deal with any situation where some of the tables listed in the initial migration exist and some do not. So, if tables exist for some, but not all, of the `CreateModel()`s in the `operations` list in your `0001_initial.py`, `--fake-initial` does not apply and it tries to create tables for ALL of the models.
Does that make sense? Not to me...
|
Steps you can follow to make migrations from an existing database. First, empty the Django migrations table in the database:
```sql
delete from django_migrations
```
1. Remove migrations from your migrations folder for the app
```sh
rm -rf <app>/migrations/
```
2. Reset the migrations for built-in apps (like admin)
```sh
python manage.py migrate --fake
```
3. Create initial migration for each and every app
```sh
python manage.py makemigrations <app>
```
4. The final step is to create fake initial migrations
```sh
python manage.py migrate --fake-initial
```
|
26,327,497
|
**WHAT IS WORKING**
I'm following the [wsgi documentation to run django](http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html#test-your-django-project). I'm testing that it's all working before starting to use nginx. I succeeded in running manage.py and loading the webpage in my browser:
```
python manage.py runserver 0.0.0.0:8000
```

**WHAT IS NOT WORKING**
The problem comes when I try to run it using uwsgi:
```
uwsgi --http :8000 --module metrics.wsgi
```
I can [run it without errors](http://pastebin.com/aBjSMt0b), but when I try to load it in my browser I get an [AppRegistryNotReady error](http://pastebin.com/sWTKZnv3) from uwsgi. Any idea what the reason could be? This is my wsgi.py file:
```
import os, sys, site, django.core.handlers.wsgi
SITE_DIR = '/home/ubuntu/web/metrics.com/app/'
site.addsitedir(SITE_DIR)
sys.path.append(SITE_DIR)
os.environ['DJANGO_SETTINGS_MODULE'] = 'metrics.settings'
application = django.core.handlers.wsgi.WSGIHandler()
```
my project structure:
```
/ubuntu
  /www
    /metrics.com
      /app          # here's my manage.py file
        metrics/    # here's my wsgi.py and settings.py files
```
|
2014/10/12
|
[
"https://Stackoverflow.com/questions/26327497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2010764/"
] |
**SOLUTION:**
An incorrect configuration in wsgi.py was making uWSGI unable to call the application. I solved it using this wsgi.py:
```
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "metrics.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```
And running uwsgi like this:
```
uwsgi --http :8000 --chdir /home/ubuntu/web/metrics.com/app --module metrics.wsgi
```
**Edit**: using --chdir we set the base directory to use for --module.
**Edit 2**: In some cases, this can fix NGINX error: upstream prematurely closed connection while reading response header from upstream
|
1) I highly recommend running uwsgi in emperor mode:
```
/usr/bin/uwsgi --emperor /etc/uwsgi --pidfile /var/run/uwsgi.pid --daemonize /var/log/uwsgi.log
```
2) Example wsgi.py for your project:
```
import os
import sys
ADD_PATH = ['/home/ubuntu/web/metrics.com/app/', ]
for item in ADD_PATH:
    sys.path.insert(0, item)

os.environ['PYTHON_EGG_CACHE'] = '/tmp/your-project-eggs'
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```
3) Example uwsgi config for the project (put this in /etc/uwsgi) <- see point 1
------- project.yaml -------
```
uwsgi:
  print: Project Configuration Started
  socket: /var/tmp/metrics_uwsgi.sock
  pythonpath: /home/ubuntu/web/metrics.com
  env: DJANGO_SETTINGS_MODULE=app.settings
  module: app.wsgi
  chdir: /home/ubuntu/web/metrics.com/app
  daemonize: /home/ubuntu/web/metrics.com/log/uwsgi.log
  pidfile: /var/run/metrics_uwsgi.pid
  max-requests: 5000
  buffer-size: 32768
  harakiri: 30
  reload-mercy: 8
  master: 1
  no-orphans: 1
  touch-reload: /home/ubuntu/web/metrics.com/log/uwsgi
  post-buffering: 8192
```
---
4) Include in your nginx config
```
location /
{
    uwsgi_pass unix:///var/tmp/metrics_uwsgi.sock;
    include uwsgi_params;
    uwsgi_buffers 8 128k;
}
```
|
46,895,876
|
I have a script `wordcount.py`
I used setuptools to create an entry point, named `wordcount`, so now I can call the command from anywhere in the system.
I am trying to execute it via spark-submit (command: `spark-submit wordcount`) but it is failing with the following error:
`Error: Cannot load main class from JAR file:/usr/local/bin/wordcount
Run with --help for usage help or --verbose for debug output`
However the exact same command works fine when I provide the path to the python script (command: `spark-submit /home/ubuntu/wordcount.py`)
Content of wordcount.py
```
import sys
from operator import add
from pyspark.sql import SparkSession
def main(args=None):
    if len(sys.argv) != 2:
        print("Usage: wordcount <file>", file=sys.stderr)
        exit(-1)
    spark = SparkSession\
        .builder\
        .appName("PythonWordCount")\
        .getOrCreate()
    lines = spark.read.text(sys.argv[1]).rdd.map(lambda r: r[0])
    counts = lines.flatMap(lambda x: x.split(' ')) \
                  .map(lambda x: (x, 1)) \
                  .reduceByKey(add)
    output = counts.collect()
    for (word, count) in output:
        print("%s: %i" % (word, count))
    spark.stop()

if __name__ == "__main__":
    main()
```
Do you know if there is a way to bypass this?
Thanks a lot in advance.
|
2017/10/23
|
[
"https://Stackoverflow.com/questions/46895876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7963251/"
] |
This is pretty similar to Jaap's comment, but a little more spelled out and uses the row names explicitly:
```
mat = as.matrix(dat[, 2:5])
row.names(mat) = dat$MUN
mat = rbind(mat, colSums(mat[c("Angra dos Reis (RJ)", "Areal (RJ)"), ], na.rm = T))
row.names(mat)[nrow(mat)] = "X"
mat
# X1990 X1991 X1992 X1993
# Angra dos Reis (RJ) 11 10 10 10
# Aperibé (RJ) NA NA NA NA
# Araruama (RJ) 12040 14589 14231 14231
# Areal (RJ) NA NA NA 3
# Armação dos Búzios (RJ) NA NA NA NA
# X 11 10 10 13
```
The result is a `matrix`, you can convert it back to a data frame if needed:
```
dat_result = data.frame(MUN = row.names(mat), mat, row.names = NULL)
```
I dislike the format of your data as a data frame. I would either convert it to a matrix (as above) or convert it to long format with, e.g., `tidyr::gather(dat, key = year, value = value, -MUN)` and work with it "by group" using `data.table` or `dplyr`.
---
Using this data:
```
dat = read.table(text = " MUN X1990 X1991 X1992 X1993
1 'Angra dos Reis (RJ)' 11 10 10 10
2 'Aperibé (RJ)' NA NA NA NA
3 'Araruama (RJ)' 12040 14589 14231 14231
4 'Areal (RJ)' NA NA NA 3
5 'Armação dos Búzios (RJ)' NA NA NA NA", header= T)
```
|
A solution can be to use the sqldf package. If the name of the data frame is `df`, you can do it like the following:
```
library(sqldf)
result <- sqldf("SELECT * FROM df UNION
SELECT 'X', SUM(X1990), SUM(X1991), SUM(X1992), SUM(X1993) FROM df
WHERE MUN IN ('Angra dos Reis (RJ)', 'Areal (RJ)')")
```
|
46,895,876
|
I have a script `wordcount.py`
I used setuptools to create an entry point, named `wordcount`, so now I can call the command from anywhere in the system.
I am trying to execute it via spark-submit (command: `spark-submit wordcount`) but it is failing with the following error:
`Error: Cannot load main class from JAR file:/usr/local/bin/wordcount
Run with --help for usage help or --verbose for debug output`
However the exact same command works fine when I provide the path to the python script (command: `spark-submit /home/ubuntu/wordcount.py`)
Content of wordcount.py
```
import sys
from operator import add
from pyspark.sql import SparkSession
def main(args=None):
    if len(sys.argv) != 2:
        print("Usage: wordcount <file>", file=sys.stderr)
        exit(-1)
    spark = SparkSession\
        .builder\
        .appName("PythonWordCount")\
        .getOrCreate()
    lines = spark.read.text(sys.argv[1]).rdd.map(lambda r: r[0])
    counts = lines.flatMap(lambda x: x.split(' ')) \
                  .map(lambda x: (x, 1)) \
                  .reduceByKey(add)
    output = counts.collect()
    for (word, count) in output:
        print("%s: %i" % (word, count))
    spark.stop()

if __name__ == "__main__":
    main()
```
Do you know if there is a way to bypass this?
Thanks a lot in advance.
|
2017/10/23
|
[
"https://Stackoverflow.com/questions/46895876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7963251/"
] |
This is pretty similar to Jaap's comment, but a little more spelled out and uses the row names explicitly:
```
mat = as.matrix(dat[, 2:5])
row.names(mat) = dat$MUN
mat = rbind(mat, colSums(mat[c("Angra dos Reis (RJ)", "Areal (RJ)"), ], na.rm = T))
row.names(mat)[nrow(mat)] = "X"
mat
# X1990 X1991 X1992 X1993
# Angra dos Reis (RJ) 11 10 10 10
# Aperibé (RJ) NA NA NA NA
# Araruama (RJ) 12040 14589 14231 14231
# Areal (RJ) NA NA NA 3
# Armação dos Búzios (RJ) NA NA NA NA
# X 11 10 10 13
```
The result is a `matrix`, you can convert it back to a data frame if needed:
```
dat_result = data.frame(MUN = row.names(mat), mat, row.names = NULL)
```
I dislike the format of your data as a data frame. I would either convert it to a matrix (as above) or convert it to long format with, e.g., `tidyr::gather(dat, key = year, value = value, -MUN)` and work with it "by group" using `data.table` or `dplyr`.
---
Using this data:
```
dat = read.table(text = " MUN X1990 X1991 X1992 X1993
1 'Angra dos Reis (RJ)' 11 10 10 10
2 'Aperibé (RJ)' NA NA NA NA
3 'Araruama (RJ)' 12040 14589 14231 14231
4 'Areal (RJ)' NA NA NA 3
5 'Armação dos Búzios (RJ)' NA NA NA NA", header= T)
```
|
I have assumed that you would like to sum the data of two municipalities whose names you know/specify and then append their sum at the end of the table. I was not sure if this understanding is correct; you may need to clarify your question if the code below is not what you need (e.g., whether you need to sum multiple municipalities each time or only two at a time, etc.).
Furthermore, if you have to call the function I propose a large number of times, or your table is really large, it needs to be improved in terms of speed, e.g., by using the package `data.table` instead of base R (since you said you are a beginner I stuck to base R).
To fulfill your request of keeping NA values where possible, I have used code proposed by [Joshua Ulrich](https://stackoverflow.com/users/271616/joshua-ulrich) as an answer to this question: [rowSums but keeping NA values](https://stackoverflow.com/questions/18189753/rowsums-but-keeping-na-values).
```
data <- data.frame(MUN = c("Angra dos Reis (RJ)", "Aperibé (RJ)", "Araruama (RJ)", "Areal (RJ)", "Armação dos Búzios (RJ)")
                   ,X1990 = c(11, NA, 12040, NA, NA)
                   ,X1991 = c(10, NA, 14589, NA, NA)
                   ,X1992 = c(10, NA, 14231, NA, NA)
                   ,X1993 = c(10, NA, 12231, 3, NA)
                   )

sum_rows <- function(df, row1, row2) {
  # get the indices of the two rows to be summed;
  # grep returns the position in a vector at which a certain element
  # is stored, here the name of the municipality
  index_row1 <- grep(row1, df$MUN, fixed = TRUE)
  index_row2 <- grep(row2, df$MUN, fixed = TRUE)
  # select the two rows of the data.frame that you want to sum
  # on the basis of the entry in the MUN column, and only the
  # numeric columns for the sum operation;
  # check if all entries in a single column are NA values:
  # if yes, the output for this column is NA;
  # if no, calculate the column sum, ignoring single NA entries
  sum <- ifelse(apply(is.na(df[c(index_row1, index_row2), 2:ncol(df)]), 2, all)
                ,NA
                ,colSums(df[c(index_row1, index_row2), 2:ncol(df)], na.rm = TRUE)
                )
  # create a name entry for the new MUN column;
  # paste0 is used to combine strings; in this case it might make sense
  # to create a name that includes the indices of the rows that have
  # been summed instead of only using X as the name
  name <- paste0("Sum_R", index_row1, "_R", index_row2)
  # add the row to the original data.frame
  df <- cbind(MUN = c(as.character(df$MUN), name)
              ,rbind(df[, 2:ncol(df)], sum)
              )
  # return the data.frame from the function
  df
}

# sum two rows and replace your data.frame by the new result
data <- sum_rows(data, "Angra dos Reis (RJ)", "Areal (RJ)")
data <- sum_rows(data, "Armação dos Búzios (RJ)", "Areal (RJ)")
```
|
46,895,876
|
I have a script `wordcount.py`
I used setuptools to create an entry point, named `wordcount`, so now I can call the command from anywhere in the system.
I am trying to execute it via spark-submit (command: `spark-submit wordcount`) but it is failing with the following error:
`Error: Cannot load main class from JAR file:/usr/local/bin/wordcount
Run with --help for usage help or --verbose for debug output`
However the exact same command works fine when I provide the path to the python script (command: `spark-submit /home/ubuntu/wordcount.py`)
Content of wordcount.py
```
import sys
from operator import add
from pyspark.sql import SparkSession
def main(args=None):
    if len(sys.argv) != 2:
        print("Usage: wordcount <file>", file=sys.stderr)
        exit(-1)
    spark = SparkSession\
        .builder\
        .appName("PythonWordCount")\
        .getOrCreate()
    lines = spark.read.text(sys.argv[1]).rdd.map(lambda r: r[0])
    counts = lines.flatMap(lambda x: x.split(' ')) \
                  .map(lambda x: (x, 1)) \
                  .reduceByKey(add)
    output = counts.collect()
    for (word, count) in output:
        print("%s: %i" % (word, count))
    spark.stop()

if __name__ == "__main__":
    main()
```
Do you know if there is a way to bypass this?
Thanks a lot in advance.
|
2017/10/23
|
[
"https://Stackoverflow.com/questions/46895876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7963251/"
] |
This is pretty similar to Jaap's comment, but a little more spelled out and uses the row names explicitly:
```
mat = as.matrix(dat[, 2:5])
row.names(mat) = dat$MUN
mat = rbind(mat, colSums(mat[c("Angra dos Reis (RJ)", "Areal (RJ)"), ], na.rm = T))
row.names(mat)[nrow(mat)] = "X"
mat
# X1990 X1991 X1992 X1993
# Angra dos Reis (RJ) 11 10 10 10
# Aperibé (RJ) NA NA NA NA
# Araruama (RJ) 12040 14589 14231 14231
# Areal (RJ) NA NA NA 3
# Armação dos Búzios (RJ) NA NA NA NA
# X 11 10 10 13
```
The result is a `matrix`, you can convert it back to a data frame if needed:
```
dat_result = data.frame(MUN = row.names(mat), mat, row.names = NULL)
```
I dislike the format of your data as a data frame. I would either convert it to a matrix (as above) or convert it to long format with, e.g., `tidyr::gather(dat, key = year, value = value, -MUN)` and work with it "by group" using `data.table` or `dplyr`.
---
Using this data:
```
dat = read.table(text = " MUN X1990 X1991 X1992 X1993
1 'Angra dos Reis (RJ)' 11 10 10 10
2 'Aperibé (RJ)' NA NA NA NA
3 'Araruama (RJ)' 12040 14589 14231 14231
4 'Areal (RJ)' NA NA NA 3
5 'Armação dos Búzios (RJ)' NA NA NA NA", header= T)
```
|
Here is a `dplyr` solution:
```
library(dplyr)
df %>%
filter(MUN %in% c("Angra dos Reis (RJ)", "Areal (RJ)")) %>%
summarize_if(is.numeric, sum, na.rm = TRUE) %>%
as.list(.) %>%
c(MUN = "X") %>%
bind_rows(df, .)
```
**Result:**
```
MUN X1990 X1991 X1992 X1993
1 Angra dos Reis (RJ) 11 10 10 10
2 Aperibé (RJ) NA NA NA NA
3 Araruama (RJ) 12040 14589 14231 14231
4 Areal (RJ) NA NA NA 3
5 Armação dos Búzios (RJ) NA NA NA NA
6 X 11 10 10 13
```
**Data (from @Gregor with `stringsAsFactors = FALSE`):**
```
df = read.table(text = " MUN X1990 X1991 X1992 X1993
1 'Angra dos Reis (RJ)' 11 10 10 10
2 'Aperibé (RJ)' NA NA NA NA
3 'Araruama (RJ)' 12040 14589 14231 14231
4 'Areal (RJ)' NA NA NA 3
5 'Armação dos Búzios (RJ)' NA NA NA NA", header= T, stringsAsFactors = FALSE)
```
|
46,895,876
|
I have a script `wordcount.py`
I used setuptools to create an entry point, named `wordcount`, so now I can call the command from anywhere in the system.
I am trying to execute it via spark-submit (command: `spark-submit wordcount`) but it is failing with the following error:
`Error: Cannot load main class from JAR file:/usr/local/bin/wordcount
Run with --help for usage help or --verbose for debug output`
However the exact same command works fine when I provide the path to the python script (command: `spark-submit /home/ubuntu/wordcount.py`)
Content of wordcount.py
```
import sys
from operator import add
from pyspark.sql import SparkSession

def main(args=None):
    if len(sys.argv) != 2:
        print("Usage: wordcount <file>", file=sys.stderr)
        exit(-1)

    spark = SparkSession\
        .builder\
        .appName("PythonWordCount")\
        .getOrCreate()

    lines = spark.read.text(sys.argv[1]).rdd.map(lambda r: r[0])
    counts = lines.flatMap(lambda x: x.split(' ')) \
        .map(lambda x: (x, 1)) \
        .reduceByKey(add)
    output = counts.collect()
    for (word, count) in output:
        print("%s: %i" % (word, count))

    spark.stop()

if __name__ == "__main__":
    main()
```
Do you know if there is a way to bypass this?
Thanks a lot in advance.
|
2017/10/23
|
[
"https://Stackoverflow.com/questions/46895876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7963251/"
] |
A solution can be built with the sqldf package. If the name of the data frame is `df`, you can do it like the following:
```
library(sqldf)
result <- sqldf("SELECT * FROM df UNION
SELECT 'X', SUM(X1990), SUM(X1991), SUM(X1992), SUM(X1993) FROM df
WHERE MUN IN ('Angra dos Reis (RJ)', 'Areal (RJ)')")
```
|
I have assumed that you would like to sum the data of two municipalities whose names you know/specify and then append their sum at the end of the table. I was not sure if this understanding is correct; you might need to clarify your question in case the code below is not what you need (e.g., concerning whether you need to sum multiple municipalities each time or only two at a time, etc.).
Furthermore, if you have to call the function I propose a large number of times, or your table is really large, it needs to be improved in terms of speed, e.g., by using the package `data.table` instead of base R (since you said you are a beginner, I stuck to base R).
To fulfill your request of keeping NA values where possible, I have used code proposed by [Joshua Ulrich](https://stackoverflow.com/users/271616/joshua-ulrich) as an answer to this question: [rowSums but keeping NA values](https://stackoverflow.com/questions/18189753/rowsums-but-keeping-na-values).
```
data <- data.frame(MUN = c("Angra dos Reis (RJ)", "Aperibé (RJ)", "Araruama (RJ)", "Areal (RJ)", "Armação dos Búzios (RJ)")
,X1990 = c(11, NA, 12040, NA, NA)
,X1991 = c(10, NA, 14589, NA, NA)
,X1992 = c(10, NA, 14231, NA, NA)
,X1993 = c(10, NA, 12231, 3, NA)
)
sum_rows <- function(df, row1, row2) {
#get the indices of the two rows to be summed
#grep returns the position in a vector at which a certain element is stored
#here the name of the municipality
index_row1 <- grep(row1, df$MUN, fixed=T)
index_row2 <- grep(row2, df$MUN, fixed=T)
#select the two rows of the data.frame that you want to sum
#on basis of the entry in the MUN column
#further only select the column with numbers for the sum operation
#check if all entries in a single column are NA values
#if yes then the output for this column is NA
#if no calculate the column sum, if one entry is NA, ignore it
sum <- ifelse(apply(is.na(df[c(index_row1, index_row2),2:ncol(df)]),2,all)
,NA
,colSums(df[c(index_row1, index_row2),2:ncol(df)],na.rm=TRUE)
)
#create a name entry for the new MUN column
#paste0 is used to combine strings
#in this case it might make sense to create a name
#that includes the indices of the rows that have been summed instead of only using X as the name
name <- paste0("Sum_R",index_row1,"_R" , index_row2)
#add the row to the original data.frame
df <- cbind(MUN = c(as.character(df$MUN), name)
,rbind(df[, 2:ncol(df)], sum)
)
#return the data.frame from the function
df
}
#sum two rows and replace your data.frame by the new result
data <- sum_rows(data, "Angra dos Reis (RJ)", "Areal (RJ)")
data <- sum_rows(data, "Armação dos Búzios (RJ)", "Areal (RJ)")
```
|
46,895,876
|
I have a script `wordcount.py`
I used setuptools to create an entry point, named `wordcount`, so now I can call the command from anywhere in the system.
I am trying to execute it via spark-submit (command: `spark-submit wordcount`) but it is failing with the following error:
`Error: Cannot load main class from JAR file:/usr/local/bin/wordcount
Run with --help for usage help or --verbose for debug output`
However the exact same command works fine when I provide the path to the python script (command: `spark-submit /home/ubuntu/wordcount.py`)
Content of wordcount.py
```
import sys
from operator import add
from pyspark.sql import SparkSession

def main(args=None):
    if len(sys.argv) != 2:
        print("Usage: wordcount <file>", file=sys.stderr)
        exit(-1)

    spark = SparkSession\
        .builder\
        .appName("PythonWordCount")\
        .getOrCreate()

    lines = spark.read.text(sys.argv[1]).rdd.map(lambda r: r[0])
    counts = lines.flatMap(lambda x: x.split(' ')) \
        .map(lambda x: (x, 1)) \
        .reduceByKey(add)
    output = counts.collect()
    for (word, count) in output:
        print("%s: %i" % (word, count))

    spark.stop()

if __name__ == "__main__":
    main()
```
Do you know if there is a way to bypass this?
Thanks a lot in advance.
|
2017/10/23
|
[
"https://Stackoverflow.com/questions/46895876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7963251/"
] |
Here is a `dplyr` solution:
```
library(dplyr)
df %>%
filter(MUN %in% c("Angra dos Reis (RJ)", "Areal (RJ)")) %>%
summarize_if(is.numeric, sum, na.rm = TRUE) %>%
as.list(.) %>%
c(MUN = "X") %>%
bind_rows(df, .)
```
**Result:**
```
MUN X1990 X1991 X1992 X1993
1 Angra dos Reis (RJ) 11 10 10 10
2 Aperibé (RJ) NA NA NA NA
3 Araruama (RJ) 12040 14589 14231 14231
4 Areal (RJ) NA NA NA 3
5 Armação dos Búzios (RJ) NA NA NA NA
6 X 11 10 10 13
```
**Data (from @Gregor with `stringsAsFactors = FALSE`):**
```
df = read.table(text = " MUN X1990 X1991 X1992 X1993
1 'Angra dos Reis (RJ)' 11 10 10 10
2 'Aperibé (RJ)' NA NA NA NA
3 'Araruama (RJ)' 12040 14589 14231 14231
4 'Areal (RJ)' NA NA NA 3
5 'Armação dos Búzios (RJ)' NA NA NA NA", header= T, stringsAsFactors = FALSE)
```
|
I have assumed that you would like to sum the data of two municipalities whose names you know/specify and then append their sum at the end of the table. I was not sure if this understanding is correct; you might need to clarify your question in case the code below is not what you need (e.g., concerning whether you need to sum multiple municipalities each time or only two at a time, etc.).
Furthermore, if you have to call the function I propose a large number of times, or your table is really large, it needs to be improved in terms of speed, e.g., by using the package `data.table` instead of base R (since you said you are a beginner, I stuck to base R).
To fulfill your request of keeping NA values where possible, I have used code proposed by [Joshua Ulrich](https://stackoverflow.com/users/271616/joshua-ulrich) as an answer to this question: [rowSums but keeping NA values](https://stackoverflow.com/questions/18189753/rowsums-but-keeping-na-values).
```
data <- data.frame(MUN = c("Angra dos Reis (RJ)", "Aperibé (RJ)", "Araruama (RJ)", "Areal (RJ)", "Armação dos Búzios (RJ)")
,X1990 = c(11, NA, 12040, NA, NA)
,X1991 = c(10, NA, 14589, NA, NA)
,X1992 = c(10, NA, 14231, NA, NA)
,X1993 = c(10, NA, 12231, 3, NA)
)
sum_rows <- function(df, row1, row2) {
#get the indices of the two rows to be summed
#grep returns the position in a vector at which a certain element is stored
#here the name of the municipality
index_row1 <- grep(row1, df$MUN, fixed=T)
index_row2 <- grep(row2, df$MUN, fixed=T)
#select the two rows of the data.frame that you want to sum
#on basis of the entry in the MUN column
#further only select the column with numbers for the sum operation
#check if all entries in a single column are NA values
#if yes then the output for this column is NA
#if no calculate the column sum, if one entry is NA, ignore it
sum <- ifelse(apply(is.na(df[c(index_row1, index_row2),2:ncol(df)]),2,all)
,NA
,colSums(df[c(index_row1, index_row2),2:ncol(df)],na.rm=TRUE)
)
#create a name entry for the new MUN column
#paste0 is used to combine strings
#in this case it might make sense to create a name
#that includes the indices of the rows that have been summed instead of only using X as the name
name <- paste0("Sum_R",index_row1,"_R" , index_row2)
#add the row to the original data.frame
df <- cbind(MUN = c(as.character(df$MUN), name)
,rbind(df[, 2:ncol(df)], sum)
)
#return the data.frame from the function
df
}
#sum two rows and replace your data.frame by the new result
data <- sum_rows(data, "Angra dos Reis (RJ)", "Areal (RJ)")
data <- sum_rows(data, "Armação dos Búzios (RJ)", "Areal (RJ)")
```
|
4,119,054
|
I'm looking for a finance library in python which offers a method similar to the MATLAB's [portalloc](http://www.mathworks.com/help/toolbox/finance/portalloc.html) . It is used to optimize a portfolio.
|
2010/11/07
|
[
"https://Stackoverflow.com/questions/4119054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/409701/"
] |
If you know linear algebra, there is a simple function for solving the optimization problem which any library should support. Unfortunately, it's been so long since I researched it that I can't tell you the formula or a library that supports it, but a little research should reveal it. The main point is that any linear algebra library should do.
Update:
Here's a quote from a post I found.
>
> Some research says that "mean variance portfolio optimization" can
> give good results. I discussed this in a message
>
>
> To implement this approach, a needed input is the covariance matrix of
> returns, which requires historical stock prices, which one can obtain
> using "Python quote grabber" <http://www.openvest.org/Databases/ovpyq> .
>
>
> For expected returns -- hmmm. One of the papers I cited found that
> assuming equal expected returns of all stocks can give reasonable
> results.
>
>
> Then one needs a "quadratic programming" solver, which appears to be
> handled by the CVXOPT Python package.
>
>
> If someone implements the approach in Python, I'd be happy to hear
> about it.
>
>
> There is a "backtest" package in R (open source stats package callable
> from Python) <http://cran.r-project.org/web/packages/backtest/index.html>
> "for exploring portfolio-based hypotheses about financial instruments
> (stocks, bonds, swaps, options, et cetera)."
>
>
>
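To make the quoted approach concrete, below is a minimal mean-variance sketch using CVXOPT's quadratic-programming solver `solvers.qp`. It is not from the original post; the return matrix and the target return are made-up illustration values.
```
import numpy as np
from cvxopt import matrix, solvers

solvers.options['show_progress'] = False

# Made-up historical returns: rows are periods, columns are assets.
returns = np.array([[0.02, 0.01, 0.03],
                    [0.01, 0.02, 0.02],
                    [0.03, 0.01, 0.01],
                    [0.02, 0.03, 0.02]])

n = returns.shape[1]
mu = returns.mean(axis=0)               # expected returns (historical means)
Sigma = np.cov(returns, rowvar=False)   # covariance matrix of returns

# Minimize (1/2) w' Sigma w  subject to  sum(w) = 1,  mu'w >= target,  w >= 0.
target = 0.019
P = matrix(Sigma)
q = matrix(np.zeros(n))
G = matrix(np.vstack([-np.eye(n), -mu]))              # encodes w >= 0 and mu'w >= target
h = matrix(np.concatenate([np.zeros(n), [-target]]))
A = matrix(np.ones((1, n)))
b = matrix(1.0)

sol = solvers.qp(P, q, G, h, A, b)
weights = np.array(sol['x']).ravel()
print(weights)
```
The resulting weights sum to 1, are non-negative, and meet the target expected return; in a real setting `returns` would come from historical price data.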
|
Maybe you could use this [library](http://www.downv.com/Mac/download-Python-statlib-10009752.htm) (statlib) or this [one](http://www.downv.com/Mac/download-Mystic-10004716.htm) (Mystic) to help you.
|
4,119,054
|
I'm looking for a finance library in python which offers a method similar to the MATLAB's [portalloc](http://www.mathworks.com/help/toolbox/finance/portalloc.html) . It is used to optimize a portfolio.
|
2010/11/07
|
[
"https://Stackoverflow.com/questions/4119054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/409701/"
] |
If you know linear algebra, there is a simple function for solving the optimization problem which any library should support. Unfortunately, it's been so long since I researched it that I can't tell you the formula or a library that supports it, but a little research should reveal it. The main point is that any linear algebra library should do.
Update:
Here's a quote from a post I found.
>
> Some research says that "mean variance portfolio optimization" can
> give good results. I discussed this in a message
>
>
> To implement this approach, a needed input is the covariance matrix of
> returns, which requires historical stock prices, which one can obtain
> using "Python quote grabber" <http://www.openvest.org/Databases/ovpyq> .
>
>
> For expected returns -- hmmm. One of the papers I cited found that
> assuming equal expected returns of all stocks can give reasonable
> results.
>
>
> Then one needs a "quadratic programming" solver, which appears to be
> handled by the CVXOPT Python package.
>
>
> If someone implements the approach in Python, I'd be happy to hear
> about it.
>
>
> There is a "backtest" package in R (open source stats package callable
> from Python) <http://cran.r-project.org/web/packages/backtest/index.html>
> "for exploring portfolio-based hypotheses about financial instruments
> (stocks, bonds, swaps, options, et cetera)."
>
>
>
|
If you know how to define your objective function, you can use [`scipy.optimize`](http://docs.scipy.org/doc/scipy-0.14.0/reference/optimize.html) to solve almost any portfolio optimization problem.
|
4,119,054
|
I'm looking for a finance library in python which offers a method similar to the MATLAB's [portalloc](http://www.mathworks.com/help/toolbox/finance/portalloc.html) . It is used to optimize a portfolio.
|
2010/11/07
|
[
"https://Stackoverflow.com/questions/4119054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/409701/"
] |
If you know linear algebra, there is a simple function for solving the optimization problem which any library should support. Unfortunately, it's been so long since I researched it that I can't tell you the formula or a library that supports it, but a little research should reveal it. The main point is that any linear algebra library should do.
Update:
Here's a quote from a post I found.
>
> Some research says that "mean variance portfolio optimization" can
> give good results. I discussed this in a message
>
>
> To implement this approach, a needed input is the covariance matrix of
> returns, which requires historical stock prices, which one can obtain
> using "Python quote grabber" <http://www.openvest.org/Databases/ovpyq> .
>
>
> For expected returns -- hmmm. One of the papers I cited found that
> assuming equal expected returns of all stocks can give reasonable
> results.
>
>
> Then one needs a "quadratic programming" solver, which appears to be
> handled by the CVXOPT Python package.
>
>
> If someone implements the approach in Python, I'd be happy to hear
> about it.
>
>
> There is a "backtest" package in R (open source stats package callable
> from Python) <http://cran.r-project.org/web/packages/backtest/index.html>
> "for exploring portfolio-based hypotheses about financial instruments
> (stocks, bonds, swaps, options, et cetera)."
>
>
>
|
Python implementations of some typical portfolio optimizations can be found at <https://github.com/czielinski/portfolioopt>. The corresponding quadratic programs are being solved using the `CVXOPT` library. *(Disclaimer: this is my own GitHub repository.)*
|
4,119,054
|
I'm looking for a finance library in python which offers a method similar to the MATLAB's [portalloc](http://www.mathworks.com/help/toolbox/finance/portalloc.html) . It is used to optimize a portfolio.
|
2010/11/07
|
[
"https://Stackoverflow.com/questions/4119054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/409701/"
] |
If you know linear algebra, there is a simple function for solving the optimization problem which any library should support. Unfortunately, it's been so long since I researched it that I can't tell you the formula or a library that supports it, but a little research should reveal it. The main point is that any linear algebra library should do.
Update:
Here's a quote from a post I found.
>
> Some research says that "mean variance portfolio optimization" can
> give good results. I discussed this in a message
>
>
> To implement this approach, a needed input is the covariance matrix of
> returns, which requires historical stock prices, which one can obtain
> using "Python quote grabber" <http://www.openvest.org/Databases/ovpyq> .
>
>
> For expected returns -- hmmm. One of the papers I cited found that
> assuming equal expected returns of all stocks can give reasonable
> results.
>
>
> Then one needs a "quadratic programming" solver, which appears to be
> handled by the CVXOPT Python package.
>
>
> If someone implements the approach in Python, I'd be happy to hear
> about it.
>
>
> There is a "backtest" package in R (open source stats package callable
> from Python) <http://cran.r-project.org/web/packages/backtest/index.html>
> "for exploring portfolio-based hypotheses about financial instruments
> (stocks, bonds, swaps, options, et cetera)."
>
>
>
|
I am new, but I believe gradient descent is what you are looking for. Michael Chu, the author of the optopsy Python library (<https://github.com/michaelchu/optopsy>), has great insights into its implementation. It works great with Python 3.7.
|
71,929,367
|
Please don't delete this. This is so simple, but I've been banging my head trying to make this work for the last two hours.
As displayed, I want to import the Python file module\_dir into module\_sub\_dir.py, but it's giving errors. All **init**.py files are empty.[](https://i.stack.imgur.com/MYBST.png)
```
from .. import module_dir
```
This doesn't work either, as it gives the error:
```
ImportError: attempted relative import with no known parent package
```
|
2022/04/19
|
[
"https://Stackoverflow.com/questions/71929367",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11268322/"
] |
Add `sys.path.append("..")` at the very beginning; the import machinery will then be able to reach files in the parent directory of the CWD. But I recommend you not use such an ugly solution in your projects.
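As a minimal sketch of what that looks like (the file names `module_dir` and `module_sub_dir` come from the question's screenshot, so treat the layout as an assumption):
```
# module_sub_dir.py -- assumed to live one directory below module_dir.py
import os
import sys

# Make the parent directory importable before the import statement runs.
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import module_dir  # now resolvable even when this file is run directly
```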
|
Update, as I have figured out the issue myself.
The issue is how VS Code runs a .py file individually, as opposed to running the entire package the way PyCharm does. It works in PyCharm. To make it work in VS Code, `sys.path.append('..')` does the job.
|
58,455,061
|
Suppose a Python dictionary is like
D = {'a':1,'a':2}
Can I get those 2 values with the same key?
Because I want to write a function so I can get a dictionary like the above.
|
2019/10/18
|
[
"https://Stackoverflow.com/questions/58455061",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5961544/"
] |
Dictionary keys in Python are unique. Python will resolve `D = {'a':1,'a':2}` as `D = {'a': 2}`
You can effectively store multiple values under the same key by storing a list under that key. In your case,
```
D = {'a': [1, 2]}
```
This would allow you to access the elements of 'a' by using
```
D['a'][elementIdx] # D['a'][0] = 1
```
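If the values arrive one pair at a time (say, while parsing a file), `collections.defaultdict` is a common way to build such a dict; a small sketch with made-up input:
```
from collections import defaultdict

pairs = [('a', 1), ('a', 2), ('b', 3)]  # made-up (key, value) stream

d = defaultdict(list)
for key, value in pairs:
    d[key].append(value)  # appends instead of overwriting

print(dict(d))  # {'a': [1, 2], 'b': [3]}
```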
|
You cannot. I set up an identical dictionary, and when attempting to print the key `'a'`, I received the second value, i.e., `2`. Keys are meant to be unique.
You could try something like:
```
x = {}
for i in range(2):
    x[f"a{i}"] = i
```
This would produce keys like `a0`, `a1`, etc.
|
1,062,803
|
Take two lists, the second with the same items as the first plus some more:
```
a = [1,2,3]
b = [1,2,3,4,5]
```
I want to get a third one, containing only the new items (the ones not repeated):
```
c = [4,5]
```
The solution I have right now is:
```
>>> ab = a + b
>>> c = []
>>> for i in ab:
...     if ab.count(i) == 1:
...         c.append(i)
>>> c
[4, 5]
```
Is there any other way more pythonic than this?
Thanx folks!
|
2009/06/30
|
[
"https://Stackoverflow.com/questions/1062803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65459/"
] |
at the very least use a list comprehension:
```
[x for x in a + b if (a + b).count(x) == 1]
```
otherwise use the [set](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset) class:
```
list(set(a).symmetric_difference(set(b)))
```
there is also a more compact form:
```
list(set(a) ^ set(b))
```
|
Items in b that aren't in a, if you need to preserve order or duplicates in b:
```
>>> a = [1, 2, 3]
>>> b = [1, 2, 3, 4, 4, 5]
>>> a_set = set(a)
>>> [x for x in b if x not in a_set]
[4, 4, 5]
```
Items in b that aren't in a, not preserving order, and not preserving duplicates in b:
```
>>> list(set(b) - set(a))
[4, 5]
```
|
1,062,803
|
Take two lists, the second with the same items as the first plus some more:
```
a = [1,2,3]
b = [1,2,3,4,5]
```
I want to get a third one, containing only the new items (the ones not repeated):
```
c = [4,5]
```
The solution I have right now is:
```
>>> ab = a + b
>>> c = []
>>> for i in ab:
...     if ab.count(i) == 1:
...         c.append(i)
>>> c
[4, 5]
```
Is there any other way more pythonic than this?
Thanx folks!
|
2009/06/30
|
[
"https://Stackoverflow.com/questions/1062803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65459/"
] |
If the order is not important and you can ignore repetitions within `a` and `b`, I would simply use sets:
```
>>> set(b) - set(a)
set([4, 5])
```
Sets are iterable, so most of the time you do not need to explicitly convert them back to a list. If you have to, this does it:
```
>>> list(set(b) - set(a))
[4, 5]
```
|
I'd say go for the set variant, where
```
set(b) ^ set(a) (set.symmetric_difference())
```
only applies if you can be certain that a is always a subset of b, but in that case it has the advantage of being commutative, i.e., you don't have to worry about calculating set(b) ^ set(a) vs. set(a) ^ set(b); or
```
set(b) - set(a) (set.difference())
```
which matches your description more closely, allows a to have extra elements not in b which will not be in the result set, but you have to mind the order (set(a) - set(b) will give you a different result).
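A quick check of that distinction, with an extra element `9` deliberately added to `a` for illustration:
```
a = [1, 2, 3, 9]    # 9 is not in b
b = [1, 2, 3, 4, 5]

print(set(b) - set(a))  # {4, 5}: only the new items in b
print(set(a) ^ set(b))  # {9, 4, 5}: also picks up a's extra element
```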
|
1,062,803
|
Take two lists, the second with the same items as the first plus some more:
```
a = [1,2,3]
b = [1,2,3,4,5]
```
I want to get a third one, containing only the new items (the ones not repeated):
```
c = [4,5]
```
The solution I have right now is:
```
>>> ab = a + b
>>> c = []
>>> for i in ab:
...     if ab.count(i) == 1:
...         c.append(i)
>>> c
[4, 5]
```
Is there any other way more pythonic than this?
Thanx folks!
|
2009/06/30
|
[
"https://Stackoverflow.com/questions/1062803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65459/"
] |
Here are some different possibilities with the sets
```
>>> a = [1, 2, 3, 4, 5, 1, 2]
>>> b = [1, 2, 5, 6]
>>> print list(set(a)^set(b))
[3, 4, 6]
>>> print list(set(a)-set(b))
[3, 4]
>>> print list(set(b)-set(a))
[6]
>>> print list(set(a)-set(b))+list(set(b)-set(a))
[3, 4, 6]
>>>
```
|
Another solution using only lists:
```
a = [1, 2, 3]
b = [1, 2, 3, 4, 5]
c = [n for n in a + b if n not in a or n not in b]
```
|
1,062,803
|
Take two lists, the second with the same items as the first plus some more:
```
a = [1,2,3]
b = [1,2,3,4,5]
```
I want to get a third one, containing only the new items (the ones not repeated):
```
c = [4,5]
```
The solution I have right now is:
```
>>> ab = a + b
>>> c = []
>>> for i in ab:
...     if ab.count(i) == 1:
...         c.append(i)
>>> c
[4, 5]
```
Is there any other way more pythonic than this?
Thanx folks!
|
2009/06/30
|
[
"https://Stackoverflow.com/questions/1062803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65459/"
] |
at the very least use a list comprehension:
```
[x for x in a + b if (a + b).count(x) == 1]
```
otherwise use the [set](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset) class:
```
list(set(a).symmetric_difference(set(b)))
```
there is also a more compact form:
```
list(set(a) ^ set(b))
```
|
```
a = [1, 2, 3]
b = [1, 2, 3, 4, 5]
c = []
# iterate over b and keep the items that are not in a
for x in b:
    if x not in a:
        c.append(x)
print(c)  # [4, 5]
```
|
1,062,803
|
Take two lists, the second with the same items as the first plus some more:
```
a = [1,2,3]
b = [1,2,3,4,5]
```
I want to get a third one, containing only the new items (the ones not repeated):
```
c = [4,5]
```
The solution I have right now is:
```
>>> ab = a + b
>>> c = []
>>> for i in ab:
...     if ab.count(i) == 1:
...         c.append(i)
>>> c
[4, 5]
```
Is there any other way more pythonic than this?
Thanx folks!
|
2009/06/30
|
[
"https://Stackoverflow.com/questions/1062803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65459/"
] |
at the very least use a list comprehension:
```
[x for x in a + b if (a + b).count(x) == 1]
```
otherwise use the [set](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset) class:
```
list(set(a).symmetric_difference(set(b)))
```
there is also a more compact form:
```
list(set(a) ^ set(b))
```
|
Here are some different possibilities with the sets
```
>>> a = [1, 2, 3, 4, 5, 1, 2]
>>> b = [1, 2, 5, 6]
>>> print list(set(a)^set(b))
[3, 4, 6]
>>> print list(set(a)-set(b))
[3, 4]
>>> print list(set(b)-set(a))
[6]
>>> print list(set(a)-set(b))+list(set(b)-set(a))
[3, 4, 6]
>>>
```
|
1,062,803
|
Take two lists, the second with the same items as the first plus some more:
```
a = [1,2,3]
b = [1,2,3,4,5]
```
I want to get a third one, containing only the new items (the ones not repeated):
```
c = [4,5]
```
The solution I have right now is:
```
>>> ab = a + b
>>> c = []
>>> for i in ab:
...     if ab.count(i) == 1:
...         c.append(i)
>>> c
[4, 5]
```
Is there any other way more pythonic than this?
Thanx folks!
|
2009/06/30
|
[
"https://Stackoverflow.com/questions/1062803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65459/"
] |
I'd say go for the set variant, where
```
set(b) ^ set(a) (set.symmetric_difference())
```
only applies if you can be certain that a is always a subset of b, but in that case it has the advantage of being commutative, i.e., you don't have to worry about calculating set(b) ^ set(a) vs. set(a) ^ set(b); or
```
set(b) - set(a) (set.difference())
```
which matches your description more closely, allows a to have extra elements not in b which will not be in the result set, but you have to mind the order (set(a) - set(b) will give you a different result).
|
Another solution using only lists:
```
a = [1, 2, 3]
b = [1, 2, 3, 4, 5]
c = [n for n in a + b if n not in a or n not in b]
```
|
1,062,803
|
Take two lists, the second with the same items as the first plus some more:
```
a = [1,2,3]
b = [1,2,3,4,5]
```
I want to get a third one, containing only the new items (the ones not repeated):
```
c = [4,5]
```
The solution I have right now is:
```
>>> ab = a + b
>>> c = []
>>> for i in ab:
...     if ab.count(i) == 1:
...         c.append(i)
>>> c
[4, 5]
```
Is there any other way more pythonic than this?
Thanx folks!
|
2009/06/30
|
[
"https://Stackoverflow.com/questions/1062803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65459/"
] |
Items in b that aren't in a, if you need to preserve order or duplicates in b:
```
>>> a = [1, 2, 3]
>>> b = [1, 2, 3, 4, 4, 5]
>>> a_set = set(a)
>>> [x for x in b if x not in a_set]
[4, 4, 5]
```
Items in b that aren't in a, not preserving order, and not preserving duplicates in b:
```
>>> list(set(b) - set(a))
[4, 5]
```
|
Another solution using only lists:
```
a = [1, 2, 3]
b = [1, 2, 3, 4, 5]
c = [n for n in a + b if n not in a or n not in b]
```
|
1,062,803
|
Take two lists, the second with the same items as the first plus some more:
```
a = [1,2,3]
b = [1,2,3,4,5]
```
I want to get a third one, containing only the new items (the ones not repeated):
```
c = [4,5]
```
The solution I have right now is:
```
>>> ab = a + b
>>> c = []
>>> for i in ab:
...     if ab.count(i) == 1:
...         c.append(i)
>>> c
[4, 5]
```
Is there any other way more pythonic than this?
Thanx folks!
|
2009/06/30
|
[
"https://Stackoverflow.com/questions/1062803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65459/"
] |
at the very least use a list comprehension:
```
[x for x in a + b if (a + b).count(x) == 1]
```
otherwise use the [set](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset) class:
```
list(set(a).symmetric_difference(set(b)))
```
there is also a more compact form:
```
list(set(a) ^ set(b))
```
|
If the order is not important and you can ignore repetitions within `a` and `b`, I would simply use sets:
```
>>> set(b) - set(a)
set([4, 5])
```
Sets are iterable, so most of the time you do not need to explicitly convert them back to a list. If you have to, this does it:
```
>>> list(set(b) - set(a))
[4, 5]
```
|
1,062,803
|
Take two lists, the second with the same items as the first plus some more:
```
a = [1,2,3]
b = [1,2,3,4,5]
```
I want to get a third one, containing only the new items (the ones not repeated):
```
c = [4,5]
```
The solution I have right now is:
```
>>> ab = a + b
>>> c = []
>>> for i in ab:
...     if ab.count(i) == 1:
...         c.append(i)
>>> c
[4, 5]
```
Is there any other way more pythonic than this?
Thanx folks!
|
2009/06/30
|
[
"https://Stackoverflow.com/questions/1062803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65459/"
] |
at the very least use a list comprehension:
```
[x for x in a + b if (a + b).count(x) == 1]
```
otherwise use the [set](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset) class:
```
list(set(a).symmetric_difference(set(b)))
```
there is also a more compact form:
```
list(set(a) ^ set(b))
```
|
I'd say go for the set variant, where
```
set(b) ^ set(a) (set.symmetric_difference())
```
only applies if you can be certain that a is always a subset of b, but in that case it has the advantage of being commutative, i.e., you don't have to worry about calculating set(b) ^ set(a) vs. set(a) ^ set(b); or
```
set(b) - set(a) (set.difference())
```
which matches your description more closely, allows a to have extra elements not in b which will not be in the result set, but you have to mind the order (set(a) - set(b) will give you a different result).
|
1,062,803
|
Take two lists, the second with the same items as the first plus some more:
```
a = [1,2,3]
b = [1,2,3,4,5]
```
I want to get a third one, containing only the new items (the ones not repeated):
```
c = [4,5]
```
The solution I have right now is:
```
>>> ab = a + b
>>> c = []
>>> for i in ab:
...     if ab.count(i) == 1:
...         c.append(i)
>>> c
[4, 5]
```
Is there any other way more pythonic than this?
Thanx folks!
|
2009/06/30
|
[
"https://Stackoverflow.com/questions/1062803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65459/"
] |
Items in b that aren't in a, if you need to preserve order or duplicates in b:
```
>>> a = [1, 2, 3]
>>> b = [1, 2, 3, 4, 4, 5]
>>> a_set = set(a)
>>> [x for x in b if x not in a_set]
[4, 4, 5]
```
Items in b that aren't in a, not preserving order, and not preserving duplicates in b:
```
>>> list(set(b) - set(a))
[4, 5]
```
|
Here are some different possibilities with the sets
```
>>> a = [1, 2, 3, 4, 5, 1, 2]
>>> b = [1, 2, 5, 6]
>>> print list(set(a)^set(b))
[3, 4, 6]
>>> print list(set(a)-set(b))
[3, 4]
>>> print list(set(b)-set(a))
[6]
>>> print list(set(a)-set(b))+list(set(b)-set(a))
[3, 4, 6]
>>>
```
|
70,992,422
|
When running xlwings 0.26.1 (latest for Anaconda 3.83) or 0.10.0 (using for compatibility reasons) with the latest version of `Office 365 Excel`, I get an error after moving a sheet when running `app.quit()`:
```
import xlwings as xw
import pythoncom
pythoncom.CoInitialize()
app = xw.apps.add()
app.display_alerts = False
app.screen_updating = False
wbSource = app.books.open('pathSourceTemp')
wsSource = wbSource.sheets['sourceSheet']
wbDestination = app.books.open('pathDestinationTemp')
wsDestination = None
#Grabs first sheet in destination
wsDestination = wbDestination.sheets[0]
#Copy sheet "before" destination sheet (which should be 1 sheet after the destination sheet)
wsSource.api.Copy(Before=wsDestination.api)
wbDestination.save()
#Close workbooks and app
wbDestination.close()
wbSource.close()
app.screen_updating = True
app.quit()
```
The final line causes Excel to throw an error that I have to click out of for the process to continue.
|
2022/02/04
|
[
"https://Stackoverflow.com/questions/70992422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7952542/"
] |
I found the problem.
If the `mcr.microsoft.com/windows/nanoserver:1809` image is used, then arguments should be referenced in `%arg%` format.
If the `mcr.microsoft.com/dotnet/framework/sdk:4.8` image is used, then arguments should be referenced in `$env:arg` format.
It is confusing, and I haven't found where it is documented.
|
This looks like it might be a bug.
When you have a build arg with the same name as an already existing environment variable, Docker will use the already set environment variable instead of the build arg.
The framework image you use already has an environment variable called DOTNET\_VERSION, so you can't access the build arg value.
The solution is to name your build arguments something else. I've added a suffix \_ARG here
```
ARG DOTNET_VERSION_ARG=net48
ARG CONFIGURATION=Release
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build-env
ARG DOTNET_VERSION_ARG
ARG CONFIGURATION
RUN echo .Net version: $env:DOTNET_VERSION_ARG
FROM mcr.microsoft.com/windows/nanoserver:1809
ARG DOTNET_VERSION_ARG
RUN echo .Net version: $env:DOTNET_VERSION_ARG
```
My experiments were on Linux where I used this Dockerfile
```
FROM mcr.microsoft.com/dotnet/aspnet:6.0
ARG DOTNET_VERSION=no-arg
ARG DOTNET_VERSION_ARG=arg
RUN echo DOTNET_VERSION=$DOTNET_VERSION - DOTNET_VERSION_ARG=$DOTNET_VERSION_ARG
ENV DOTNET_VERSION=$DOTNET_VERSION_ARG
RUN echo DOTNET_VERSION=$DOTNET_VERSION
```
and got this output
```
DOTNET_VERSION=6.0.1 - DOTNET_VERSION_ARG=arg
DOTNET_VERSION=arg
```
So if you have an ENV statement, you can set the environment variable to the value from the build argument.
|
53,092,936
|
I need some help with an assignment for Python.
The task is to convert a .csv file to a dictionary and make some changes. The problem is that the .csv file only has 1 column, but 3 rows.
The .csv file looks like this in excel
```
A B
1.male Bob West
2.female Hannah South
3.male Bruce North
```
So everything is in column A.
My code looks so far like this:
```
import csv
reader = csv.reader(open("filename.csv"))
d = {}
for row in reader:
    d[row[0]] = row[0:]
print(d)
```
And the output
```
{'\ufeffmale Bob West': ['\ufeffmale Bob West'], 'female Hannah South':
['female Hannah South'], 'male Bruce North': ['male Bruce North']}
```
but I want
```
{1 : Bob West, 2 : Hannah South, 3 : Bruce North}
```
The male/female should be replaced with an ID (1, 2, 3), and I don't know how to deal with everything being in 1 column.
Thanks in advance.
|
2018/10/31
|
[
"https://Stackoverflow.com/questions/53092936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10470153/"
] |
You can use dict comprehension and enumerate the `csv` object,
```
import csv
reader = csv.reader(open("filename.csv"))
x = {num+1:name[0].split(" ",1)[-1].rstrip() for (num, name) in enumerate(reader)}
print(x)
# output,
{1: 'Bob West', 2: 'Hannah South', 3: 'Bruce North'}
```
Or you can do it without using `csv` module simply by reading the file,
```
with open("filename.csv", 'r') as t:
    next(t) # skip first line
    x = {num+1:name.split(" ",1)[-1].strip() for (num, name) in enumerate(t)}
print(x)
# output,
{1: 'Bob West', 2: 'Hannah South', 3: 'Bruce North'}
```
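One more detail worth noting: the `\ufeff` in the question's output is a UTF-8 byte-order mark. Opening the file with the `utf-8-sig` codec strips it before the `csv` module ever sees it:
```
import csv

# 'utf-8-sig' consumes a leading BOM if present, otherwise behaves like utf-8.
with open("filename.csv", encoding="utf-8-sig") as f:
    rows = list(csv.reader(f))
```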
|
This should work for the given input:
data.csv:
=========
```
1.male Bob West,
2.female Hannah South,
3.male Bruce North,
```
Code:
=====
```
import csv
reader = csv.reader(open("data.csv"))
d = {}
for row in reader:
    splitted = row[0].split('.')
    # print(splitted[0])
    # print(' '.join(splitted[1].split(' ')[1:]))
    d[splitted[0]] = ' '.join(splitted[1].split(' ')[1:])
print(d)
```
Output
======
```
{'1': 'Bob West', '3': 'Bruce North', '2': 'Hannah South'}
```
|
53,092,936
|
I need some help with an assignment for Python.
The task is to convert a .csv file to a dictionary and make some changes. The problem is that the .csv file only has 1 column, but 3 rows.
The .csv file looks like this in excel
```
A B
1.male Bob West
2.female Hannah South
3.male Bruce North
```
So everything is in column A.
My code looks so far like this:
```
import csv
reader = csv.reader(open("filename.csv"))
d = {}
for row in reader:
    d[row[0]] = row[0:]
print(d)
```
And the output
```
{'\ufeffmale Bob West': ['\ufeffmale Bob West'], 'female Hannah South':
['female Hannah South'], 'male Bruce North': ['male Bruce North']}
```
but I want
```
{1 : Bob West, 2 : Hannah South, 3 : Bruce North}
```
The male/female should be replaced with an ID (1, 2, 3), and I don't know how to deal with everything being in 1 column.
Thanks in advance.
|
2018/10/31
|
[
"https://Stackoverflow.com/questions/53092936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10470153/"
] |
This should work for the given input:
data.csv:
=========
```
1.male Bob West,
2.female Hannah South,
3.male Bruce North,
```
Code:
=====
```
import csv
reader = csv.reader(open("data.csv"))
d = {}
for row in reader:
    splitted = row[0].split('.')
    # print(splitted[0])
    # print(' '.join(splitted[1].split(' ')[1:]))
    d[splitted[0]] = ' '.join(splitted[1].split(' ')[1:])
print(d)
```
Output
======
```
{'1': 'Bob West', '3': 'Bruce North', '2': 'Hannah South'}
```
|
```
# Reconstructed from the garbled original: csv.DictReader reads each row
# as a dict keyed by the header line.
import csv

with open('Employee_address.txt', mode='r') as csv_file:
    csv_reader = csv.DictReader(csv_file)
    line_count = 0
    for row in csv_reader:
        if line_count == 0:
            print(f'Column names are {", ".join(row)}')
        line_count += 1
        print(f'\t{row["name"]} works in the {row["department"]} department, '
              f'and lives in {row["living address"]}.')
    print(f'Processed {line_count} lines.')
```
|
53,092,936
|
I need some help with an assignment for Python.
The task is to convert a .csv file to a dictionary and make some changes. The problem is that the .csv file only has 1 column, but 3 rows.
The .csv file looks like this in excel
```
A B
1.male Bob West
2.female Hannah South
3.male Bruce North
```
So everything is in column A.
My code looks so far like this:
```
import csv
reader = csv.reader(open("filename.csv"))
d = {}
for row in reader:
    d[row[0]] = row[0:]
print(d)
```
And the output
```
{'\ufeffmale Bob West': ['\ufeffmale Bob West'], 'female Hannah South':
['female Hannah South'], 'male Bruce North': ['male Bruce North']}
```
but I want
```
{1 : Bob West, 2 : Hannah South, 3 : Bruce North}
```
The male/female should be replaced with an ID (1, 2, 3), and I don't know how to deal with everything being in 1 column.
Thanks in advance.
|
2018/10/31
|
[
"https://Stackoverflow.com/questions/53092936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10470153/"
] |
You can use dict comprehension and enumerate the `csv` object,
```
import csv
reader = csv.reader(open("filename.csv"))
x = {num+1:name[0].split(" ",1)[-1].rstrip() for (num, name) in enumerate(reader)}
print(x)
# output,
{1: 'Bob West', 2: 'Hannah South', 3: 'Bruce North'}
```
Or you can do it without using `csv` module simply by reading the file,
```
with open("filename.csv", 'r') as t:
    next(t) # skip first line
    x = {num+1:name.split(" ",1)[-1].strip() for (num, name) in enumerate(t)}
print(x)
# output,
{1: 'Bob West', 2: 'Hannah South', 3: 'Bruce North'}
```
|
As per Simit's answer, but using regular expressions, and noting that the `1.` prefix and the `A`/`B` headers in your example are just Excel row and column labels, not file contents:
```
import re, csv

# utf-8-sig strips the BOM that appears as '\ufeff' in the question's output
reader = csv.reader(open("data.csv", encoding="utf-8-sig"))
out = {}
for i, line in enumerate(reader, 1):
    # csv.reader yields lists, so match against the first (only) field
    m = re.match(r'^(male|female) (.*)$', line[0])
    if not m:
        print(f"error processing {repr(line)}")
        continue
    out[i] = m[2]
print(out)
```
|
53,092,936
|
I need some help with an assignment for Python.
The task is to convert a .csv file to a dictionary and make some changes. The problem is that the .csv file only has 1 column, but 3 rows.
The .csv file looks like this in excel
```
A B
1.male Bob West
2.female Hannah South
3.male Bruce North
```
So everything is in column A.
My code looks so far like this:
```
import csv
reader = csv.reader(open("filename.csv"))
d = {}
for row in reader:
    d[row[0]] = row[0:]
print(d)
```
And the output
```
{'\ufeffmale Bob West': ['\ufeffmale Bob West'], 'female Hannah South':
['female Hannah South'], 'male Bruce North': ['male Bruce North']}
```
but I want
```
{1 : Bob West, 2 : Hannah South, 3 : Bruce North}
```
The male/female should be replaced with an ID (1, 2, 3), and I don't know how to deal with everything being in 1 column.
Thanks in advance.
|
2018/10/31
|
[
"https://Stackoverflow.com/questions/53092936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10470153/"
] |
As per Simit's answer, but using regular expressions, and noting that the `1.` prefix and the `A`/`B` headers in your example are just Excel row and column labels, not file contents:
```
import re, csv

# utf-8-sig strips the BOM that appears as '\ufeff' in the question's output
reader = csv.reader(open("data.csv", encoding="utf-8-sig"))
out = {}
for i, line in enumerate(reader, 1):
    # csv.reader yields lists, so match against the first (only) field
    m = re.match(r'^(male|female) (.*)$', line[0])
    if not m:
        print(f"error processing {repr(line)}")
        continue
    out[i] = m[2]
print(out)
```
|
```
# Reconstructed from the garbled original: csv.DictReader reads each row
# as a dict keyed by the header line.
import csv

with open('Employee_address.txt', mode='r') as csv_file:
    csv_reader = csv.DictReader(csv_file)
    line_count = 0
    for row in csv_reader:
        if line_count == 0:
            print(f'Column names are {", ".join(row)}')
        line_count += 1
        print(f'\t{row["name"]} works in the {row["department"]} department, '
              f'and lives in {row["living address"]}.')
    print(f'Processed {line_count} lines.')
```
|
53,092,936
|
I need some help with an assignment for Python.
The task is to convert a .csv file to a dictionary and make some changes. The problem is that the .csv file only has 1 column, but 3 rows.
The .csv file looks like this in excel
```
A B
1.male Bob West
2.female Hannah South
3.male Bruce North
```
So everything is in column A.
My code looks so far like this:
```
import csv
reader = csv.reader(open("filename.csv"))
d = {}
for row in reader:
    d[row[0]] = row[0:]
print(d)
```
And the output
```
{'\ufeffmale Bob West': ['\ufeffmale Bob West'], 'female Hannah South':
['female Hannah South'], 'male Bruce North': ['male Bruce North']}
```
but I want
```
{1 : Bob West, 2 : Hannah South, 3 : Bruce North}
```
The male/female should be replaced with an ID (1, 2, 3), and I don't know how to deal with everything being in 1 column.
Thanks in advance.
|
2018/10/31
|
[
"https://Stackoverflow.com/questions/53092936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10470153/"
] |
You can use dict comprehension and enumerate the `csv` object,
```
import csv
reader = csv.reader(open("filename.csv"))
x = {num+1:name[0].split(" ",1)[-1].rstrip() for (num, name) in enumerate(reader)}
print(x)
# output,
{1: 'Bob West', 2: 'Hannah South', 3: 'Bruce North'}
```
Or you can do it without using `csv` module simply by reading the file,
```
with open("filename.csv", 'r') as t:
    next(t) # skip first line
    x = {num+1:name.split(" ",1)[-1].strip() for (num, name) in enumerate(t)}
print(x)
# output,
{1: 'Bob West', 2: 'Hannah South', 3: 'Bruce North'}
```
|
```
# Reconstructed from the garbled original: csv.DictReader reads each row
# as a dict keyed by the header line.
import csv

with open('Employee_address.txt', mode='r') as csv_file:
    csv_reader = csv.DictReader(csv_file)
    line_count = 0
    for row in csv_reader:
        if line_count == 0:
            print(f'Column names are {", ".join(row)}')
        line_count += 1
        print(f'\t{row["name"]} works in the {row["department"]} department, '
              f'and lives in {row["living address"]}.')
    print(f'Processed {line_count} lines.')
```
|
53,092,936
|
I need some help with an assignment for Python.
The task is to convert a .csv file to a dictionary and make some changes. The problem is that the .csv file only has 1 column, but 3 rows.
The .csv file looks like this in excel
```
A B
1.male Bob West
2.female Hannah South
3.male Bruce North
```
So everything is in column A.
My code looks so far like this:
```
import csv
reader = csv.reader(open("filename.csv"))
d = {}
for row in reader:
    d[row[0]] = row[0:]
print(d)
```
And the output
```
{'\ufeffmale Bob West': ['\ufeffmale Bob West'], 'female Hannah South':
['female Hannah South'], 'male Bruce North': ['male Bruce North']}
```
but I want
```
{1 : Bob West, 2 : Hannah South, 3 : Bruce North}
```
The male/female should be replaced with an ID (1, 2, 3), and I don't know how to deal with everything being in 1 column.
Thanks in advance.
|
2018/10/31
|
[
"https://Stackoverflow.com/questions/53092936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10470153/"
] |
You can use dict comprehension and enumerate the `csv` object,
```
import csv
reader = csv.reader(open("filename.csv"))
x = {num+1:name[0].split(" ",1)[-1].rstrip() for (num, name) in enumerate(reader)}
print(x)
# output,
{1: 'Bob West', 2: 'Hannah South', 3: 'Bruce North'}
```
Or you can do it without using `csv` module simply by reading the file,
```
with open("filename.csv", 'r') as t:
    next(t) # skip first line
    x = {num+1:name.split(" ",1)[-1].strip() for (num, name) in enumerate(t)}
print(x)
# output,
{1: 'Bob West', 2: 'Hannah South', 3: 'Bruce North'}
```
|
I like to use Pandas for stuff like this. You can use Pandas to import it and then export it to a dict.
```
import pandas as pd
df = pd.read_csv('test.csv', header=None)
# Creates new columns in the dataframe based on the rules of the question
df['Name']=df[0].str.split(' ',1).str.get(1)
df['ID'] = df[0].str.split('.',1).str.get(0)
```
The dataframe should have three columns:
* 0 - This is the raw data.
* Name - The name as defined in the problem.
* ID - The number that comes before the period.
I didn't include gender, but it really won't fit into the dict. I'm also assuming your data does not have a header.
The next part converts your pandas dataframe to a dict in the output that you want.
```
output_dict = dict()
for i in range(len(df[['ID','Name']])):
    output_dict[df.iloc[i]['ID']] = df.iloc[i]['Name']
```
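As a follow-up, the final loop can be written more directly with `dict(zip(...))`, assuming the same `df` as above:
```
output_dict = dict(zip(df['ID'], df['Name']))
```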
|
53,092,936
|
I need some help with an assignment for Python.
The task is to convert a .csv file to a dictionary and make some changes. The problem is that the .csv file only has 1 column, but 3 rows.
The .csv file looks like this in excel
```
A B
1.male Bob West
2.female Hannah South
3.male Bruce North
```
So everything is in column A.
My code looks so far like this:
```
import csv
reader = csv.reader(open("filename.csv"))
d = {}
for row in reader:
    d[row[0]] = row[0:]
print(d)
```
And the output
```
{'\ufeffmale Bob West': ['\ufeffmale Bob West'], 'female Hannah South':
['female Hannah South'], 'male Bruce North': ['male Bruce North']}
```
but I want
```
{1 : Bob West, 2 : Hannah South, 3 : Bruce North}
```
The male/female should be replaced with an ID (1, 2, 3), and I don't know how to deal with everything being in 1 column.
Thanks in advance.
|
2018/10/31
|
[
"https://Stackoverflow.com/questions/53092936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10470153/"
] |
I like to use Pandas for stuff like this. You can use Pandas to import it and then export it to a dict.
```
import pandas as pd
df = pd.read_csv('test.csv', header=None)
# Creates new columns in the dataframe based on the rules of the question
df['Name']=df[0].str.split(' ',1).str.get(1)
df['ID'] = df[0].str.split('.',1).str.get(0)
```
The dataframe should have three columns:
* 0 - This is the raw data.
* Name - The name as defined in the problem.
* ID - The number that comes before the period.
I didn't include gender, but it really won't fit into the dict. I'm also assuming your data does not have a header.
The next part converts your pandas dataframe to a dict in the output that you want.
```
output_dict = dict()
for i in range(len(df[['ID','Name']])):
    output_dict[df.iloc[i]['ID']] = df.iloc[i]['Name']
```
|
```
# Reconstructed from the garbled original: csv.DictReader reads each row
# as a dict keyed by the header line.
import csv

with open('Employee_address.txt', mode='r') as csv_file:
    csv_reader = csv.DictReader(csv_file)
    line_count = 0
    for row in csv_reader:
        if line_count == 0:
            print(f'Column names are {", ".join(row)}')
        line_count += 1
        print(f'\t{row["name"]} works in the {row["department"]} department, '
              f'and lives in {row["living address"]}.')
    print(f'Processed {line_count} lines.')
```
|
13,788,688
|
I'm using Python code to get data from my server. However, I keep getting a "u" prefix on each key in the JSON,
as follows:
```
"{u'BD': 271, u'PS': 48, u'00': 177, u'CA': 5, u'DE': 15, u'FR': 18, u'UM': 45, u'KR': 6, u'IL': 22181, u'GB': 15}"
```
My python code is as follows:
```
import json
ans = ...  # select something from the database
json.dumps(ans)
```
Does anyone know how to avoid it?
|
2012/12/09
|
[
"https://Stackoverflow.com/questions/13788688",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1432779/"
] |
The `u''` means the value is a unicode literal. Everything is working as intended, you don't need to get rid of those.
JSON is a standard that supports Unicode values natively, and thus the `json` module accepts unicode strings when converting a Python value to JSON:
```
>>> import json
>>> ans={u'BD': 271, u'PS': 48, u'00': 177, u'CA': 5, u'DE': 15, u'FR': 18, u'UM': 45, u'KR': 6, u'IL': 22181, u'GB': 15}
>>> json.dumps(ans)
'{"BD": 271, "PS": 48, "00": 177, "IL": 22181, "UM": 45, "KR": 6, "CA": 5, "DE": 15, "FR": 18, "GB": 15}'
```
|
I think something got mixed up here. The result you've posted looks like a Python representation of a dict. To be precise: json.dumps returns a string, so its result should be enclosed in quotes, like this:
```
>>> import json
>>> json.dumps({'foo': 'bar'})
'{"foo": "bar"}'
```
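To make the distinction concrete, a minimal sketch (the `u''` prefixes in the `repr` output are a Python 2 artifact; Python 3 drops them):
```
import json

ans = {u'BD': 271, u'IL': 22181}
print(repr(ans))        # Python 2 shows: {u'BD': 271, u'IL': 22181}
print(json.dumps(ans))  # {"BD": 271, "IL": 22181}
```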
|
32,540,092
|
I have a jinja2 template designed to print out the IP addresses of ec2 instances (tagged region: au) :
```
{% for host in groups['tag_region_au'] %}
```
My problem is that I can't for the life of me work out how to include only hosts that exist in one group and NOT another (however, each host may be in two or more groups). For example, in Python the following works:
```
( (a in list) and ( a not in list2) )
```
However the following does not:
```
{% for (host in groups['tag_region_au']) and (host not in groups['tag_state_live']) %}
```
Any idea how I can include only hosts that exist in one group and do not exist in another group?
|
2015/09/12
|
[
"https://Stackoverflow.com/questions/32540092",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/264897/"
] |
You can use the built-in `group_names` variable in this case. `group_names` is a list of all groups that the current host is a member of.
My `hosts` file:
```
[tag_region_au]
host1
host2
host3
[tag_state_live]
host2
host3
host4
```
My template file `test.j2`:
```
{% for host in groups['tag_region_au'] %}
{% if hostvars[host]['group_names']|length == 1 %}
{{ host }} - {{ hostvars[host]['group_names'] }}
{% endif %}
{% endfor %}
```
`hostvars` is a dict whose keys are Ansible hostnames and whose values are dicts mapping variable names to values. `length` is a Jinja filter that returns the number of items in a list.
Result:
```
host1 - ['tag_region_au']
```
If you change `==` to `>`, the result is:
```
host2 - ['tag_region_au', 'tag_state_live']
host3 - ['tag_region_au', 'tag_state_live']
```
Update:
=======
To check that a host is in group A and not in group B, you can use the `difference` filter. The syntax is `{{ list1 | difference(list2) }}`.
Here is example template:
```
{% for host in groups['tag_region_au']|difference(groups['tag_state_live']) %}
{{ host }}
{% endfor %}
```
Result is: `host1`
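The same logic can be checked outside Ansible with plain Jinja2. This sketch uses an inline loop filter instead of Ansible's `difference` filter, with the example inventory above as made-up data:
```
from jinja2 import Template

groups = {
    "tag_region_au": ["host1", "host2", "host3"],
    "tag_state_live": ["host2", "host3", "host4"],
}

# Inline loop filter: only hosts in the first group and not in the second.
tmpl = Template(
    "{% for host in groups['tag_region_au'] "
    "if host not in groups['tag_state_live'] %}{{ host }}\n{% endfor %}"
)
print(tmpl.render(groups=groups))  # prints: host1
```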
|
A really clean way to do it if you don't mind starting a new play for the templating stuff is to use a group expression in the play target (which is exactly what they're for). For example:
```
- hosts: tag_region_au:!tag_state_live
tasks:
- template: (bla)
```
Then in your template, you'd reference the `play_hosts` var to get at the list of filtered hosts.
|