| qid | question | date | metadata | response_j | response_k | __index_level_0__ |
|---|---|---|---|---|---|---|
30,421,267
|
I'm currently having trouble setting the `is_active` field from `False` back to `True` in Django. I have a custom Account model which is inherited from `AbstractBaseUser` and is managed by `class AccountManager(BaseUserManager)`.
By default, I override the `is_active` field to be `False`. However, when I try to reactivate it, it does update at that time in `python manage.py shell`, but when I log out and log back into the shell, the `is_active` field is still `False`. Below is my `activateUser()` function:
```
def activateUser(self, account):
    account.is_active = True
    account.save()
    return account
```
Below is my `python manage.py shell` session:
```
>>> from authentication.models import Account
>>> a = Account.objects.all()[0]
>>> a.is_active
False
>>> Account.objects.activateUser(a)
>>> a.is_active
True
>>> exit()
python manage.py shell
>>> from authentication.models import Account
>>> a = Account.objects.all()[0]
>>> a.is_active
False
```
**Updated:** Here is my Account class. The `activateUser()` function above is from the `AccountManager` class which is inherited from `BaseUserManager` and is managing the Account class.
```
class Account(AbstractBaseUser):
    """
    Create a new model from AbstractBaseUser account to have our own fields
    """
    username = models.CharField(max_length=40, unique=True)
    email = models.EmailField(unique=True)
    firstName = models.CharField(max_length=40, blank=True, null=True)
    lastName = models.CharField(max_length=40, blank=True, null=True)
    createdAt = models.DateTimeField(auto_now_add=True)
    updatedAt = models.DateTimeField(auto_now=True)
    is_staff = models.BooleanField(default=False)
    is_active = False
    objects = AccountManager()

    USERNAME_FIELD = 'username'
    REQUIRED_FIELDS = ['email']

    def __unicode__(self):
        return self.username

    def get_full_name(self):
        return self.firstName + ' ' + self.lastName

    def get_short_name(self):
        return self.firstName
```
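For comparison, a plain class attribute like `is_active = False` above is never written by `account.save()`; a persisted flag would have to be declared as a model field, e.g. (a sketch, assuming the default should stay `False`):
```
is_active = models.BooleanField(default=False)
```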
|
2015/05/24
|
[
"https://Stackoverflow.com/questions/30421267",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3773879/"
] |
In Java, we have a class named `String` which has a method called `length()`.
In C, you need to have a `\0` at the end of your string so you know where your string ends. In Java, this problem is handled by the `length()` method.
|
C doesn't have strings as an actual data type and the convention is just that character arrays ending in a null character can be used as strings. That's what you get when you use string literals in the language and that's what you have to recreate when you don't use them.
The underlying issue is that C wanted to save memory on its string representation by not storing a length (Pascal stored the string length in the first byte for example), thus the length has to follow somehow from the data, in this case by ending the data with `'\0'`.
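As an illustration of the convention (written in Python here; `c_strlen` is a made-up name), finding the length means scanning for the terminator:
```
def c_strlen(buf):
    # Count characters until the '\0' terminator, like C's strlen.
    n = 0
    while buf[n] != '\0':
        n += 1
    return n

print(c_strlen("hello\0junk"))  # 5 -- anything after '\0' is ignored
```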
| 7,718
|
72,923,752
|
I am learning python.
For the code below, how can I convert the for loop to a while loop in an efficient way?
```
import pandas as pd
transactions01 = []
file=open('raw-data1.txt','w')
file.write('HotDogs,Buns\nHotDogs,Buns\nHotDogs,Coke,Chips\nChips,Coke\nChips,Ketchup\nHotDogs,Coke,Chips\n')
file.close()
file=open('raw-data1.txt','r')
lines = file.readlines()
for line in lines:
    items = line[:-1].split(',')
    has_item = {}
    for item in items:
        has_item[item] = 1
    transactions01.append(has_item)
file.close()
data = pd.DataFrame(transactions01)
data.fillna(0, inplace = True)
data
```
|
2022/07/09
|
[
"https://Stackoverflow.com/questions/72923752",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19517608/"
] |
Code:
```
i = 0
while i < len(lines):
    items = lines[i][:-1].split(',')
    has_item = {}
    j = 0
    while j < len(items):
        has_item[items[j]] = 1
        j += 1
    transactions01.append(has_item)
    i += 1
```
|
It looks like you could just use the csv module to parse the file (as you've got an inconsistent number of values per row), turn it into a dataframe, use `pd.get_dummies` to get 0/1's per item present, then aggregate back to a row level to produce your final output, eg:
```
import pandas as pd
import csv
with open('raw-data1.txt') as fin:
    df = pd.get_dummies(pd.DataFrame(csv.reader(fin)).stack()).groupby(level=0).max()
```
Will give you a `df` of:
```
Buns Chips Coke HotDogs Ketchup
0 1 0 0 1 0
1 1 0 0 1 0
2 0 1 1 1 0
3 0 1 1 0 0
4 0 1 0 0 1
5 0 1 1 1 0
```
.. which you can then write back out as CSV if required.
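For example (illustrative filename):
```
df.to_csv('raw-data1-clean.csv', index=False)
```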
| 7,727
|
38,000,412
|
I'm just new to python and I can't seem to find a solution to my problem, since it seems to be pretty simple. I have a geometry in ParaView; I'm saving it as a vtk file and I'm trying to use python to calculate its volume.
This is the code I'm using:
```
import vtk
reader = vtk.vtkPolyDataReader()
reader.SetFileName("C:\Users\Pauuu\Google Drive\2016-01\SURF\Sim Vascular\Modelos\apoE183 Day 14 3D\AAA.vtk")
reader.Update()
polydata = reader.GetOutput()
Mass = vtk.vtkMassProperties()
Mass.SetInputConnection(polydata.GetOutput())
Mass.Update()
print "Volume = ", Mass.GetVolume()
print "Surface = ", Mass.GetSurfaceArea()
```
I think there might be a problem with the way I'm loading the data, and I get the `AttributeError: GetOutput`.
Do you know what might be happening or what I'm doing wrong?
Thank you in advance.
|
2016/06/23
|
[
"https://Stackoverflow.com/questions/38000412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6505466/"
] |
Depending on your version of the `vtk` package, you may want to test the following syntax if your version is <= `5`:
```
Mass.SetInput(polydata);
```
Otherwise, the actual syntax is:
```
Mass.SetInputData(polydata);
```
(`polydata` is already the reader's output, so there is no `GetOutput`/`GetOutputPort` to call on it; equivalently you can pass `reader.GetOutputPort()` to `Mass.SetInputConnection`.)
PS: you can check the python-wrapped `vtk` version by running:
```
import vtk
print vtk.vtkVersion.GetVTKSourceVersion()
```
|
You have assigned `reader.GetOutput()` to `polydata`. From `polydata`, I believe you need to do `polydata.GetOutputPort()`.
| 7,728
|
32,157,861
|
I use this in python:
```
test = zlib.compress(test, 1)
```
And now I want to use this in Java, but I don't know how.
At the end I need to convert the result to a string...
I await your help! Thanks.
|
2015/08/22
|
[
"https://Stackoverflow.com/questions/32157861",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3363949/"
] |
You need to be more specific in describing what you want to achieve.
Are you just seeking to decompress a zlib-compressed piece of data?
Or are you looking for a way to exchange data between Python and Java, possibly by transferring it across a network?
For the former, it's basically sticking the compressed datastream in an `Inflater` (not `Deflater`!). Something like:
```
Inflater decompresser = new Inflater();
decompresser.setInput(data);
ByteArrayOutputStream bos = new ByteArrayOutputStream(data.length);
byte[] buffer = new byte[8192];
while (!decompresser.finished()) {
    int size = decompresser.inflate(buffer);
    bos.write(buffer, 0, size);
}
byte[] unzippeddata = bos.toByteArray();
decompresser.end();
```
For the latter, alex's answer already points to Pyro/Pyrolite.
|
You can try using RPC to create a server.
<https://github.com/irmen/Pyro4>
And access that server using java.
<https://github.com/irmen/Pyrolite>
| 7,731
|
66,129,119
|
Trying to install the freeradius package on Debian 10 buster, and it fails.
```
$ sudo apt install freeradius
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Suggested packages:
freeradius-krb5 freeradius-ldap freeradius-mysql freeradius-postgresql freeradius-python3
The following NEW packages will be installed:
freeradius
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 555 kB of archives.
After this operation, 2230 kB of additional disk space will be used.
Get:1 http://ftp.de.debian.org/debian sid/main amd64 freeradius amd64 3.0.21+dfsg-2+b2 [555 kB]
Fetched 555 kB in 0s (2753 kB/s)
Selecting previously unselected package freeradius.
(Reading database ... 140557 files and directories currently installed.)
Preparing to unpack .../freeradius_3.0.21+dfsg-2+b2_amd64.deb ...
Unpacking freeradius (3.0.21+dfsg-2+b2) ...
Setting up freeradius (3.0.21+dfsg-2+b2) ...
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
install: invalid user ‘freerad’
dpkg: error processing package freeradius (--configure):
installed freeradius package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
freeradius
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
install: invalid user ‘freerad’
Apparently there's something wrong with the freerad user, which doesn't exist?
Checking inside the /etc/passwd file clearly shows there's no such user:
```
$ cat /etc/passwd | grep freer*
```
Checking the syslog shows:
```
Feb 10 00:38:45 server-1 systemd[1]: freeradius.service: Scheduled restart job, restart counter is at 277.
Feb 10 00:38:45 server-1 freeradius[10918]: FreeRADIUS Version 3.0.21
Feb 10 00:38:45 server-1 freeradius[10918]: Copyright (C) 1999-2019 The FreeRADIUS server project and contributors
Feb 10 00:38:45 server-1 freeradius[10918]: There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
Feb 10 00:38:45 server-1 freeradius[10918]: PARTICULAR PURPOSE
Feb 10 00:38:45 server-1 freeradius[10918]: You may redistribute copies of FreeRADIUS under the terms of the
Feb 10 00:38:45 server-1 freeradius[10918]: GNU General Public License
Feb 10 00:38:45 server-1 freeradius[10918]: For more information about these matters, see the file named COPYRIGHT
Feb 10 00:38:45 server-1 freeradius[10918]: Errors reading /etc/freeradius/3.0: Permission denied
Feb 10 00:38:45 server-1 systemd[1]: freeradius.service: Control process exited, code=exited, status=1/FAILURE
Feb 10 00:38:45 server-1 systemd[1]: freeradius.service: Failed with result 'exit-code'.
```
Tried outsmarting this by adding the user manually with the command adduser freeradius
and then tried re-installing:
```
$ sudo apt install freeradius
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
freeradius is already the newest version (3.0.21+dfsg-2+b2).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Setting up freeradius (3.0.21+dfsg-2+b2) ...
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
chown: cannot access '/etc/freeradius': No such file or directory
dpkg: error processing package freeradius (--configure):
installed freeradius package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
freeradius
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
Now it fails with a new error: chown: cannot access '/etc/freeradius': No such file or directory
Tried purging / removing / cleaning / cache deleting / rebooting and then restarting the installation, but it keeps reappearing.
Let me know if there's more info I can provide to help you help me.
Cheers.
|
2021/02/10
|
[
"https://Stackoverflow.com/questions/66129119",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11966276/"
] |
When removing freeradius (either by `apt remove freeradius` or `apt purge freeradius`), mind that the config files are a separate package called `freeradius-config`.
So to completely wipe freeradius and do a reinstall:
```
apt purge freeradius freeradius-config
rm -rf /etc/freeradius/
apt install freeradius
```
|
After installing freeradius, add the following lines at the end of the clients.conf file to set up access for your clients:
```
#vi /etc/freeradius/3.0/clients.conf
```
For Example :
```
client SWITCH-01 {
    ipaddr = 192.168.0.10
    secret = kamisama123
}

client LINUX-01 {
    ipaddr = 192.168.0.20
    secret = vegeto123
}
```
In our example, we are adding 2 client devices; both devices will offer a login prompt to authenticate against the freeradius server database.
Now, locate and edit the freeradius users configuration file:
Example:
```
# vi /etc/freeradius/3.0/users
```
Add your username and password like below:
```
freerad1 Cleartext-Password := "freerad1234"
freerad2 Cleartext-Password := "freerad2123"
```
Now, restart the freeradius server:
```
# service freeradius restart
```
Now, test your radius server configuration file:
```
#freeradius -CX
```
You can test your radius authentication locally on the Radius server using the following command:
```
# radtest freerad1 freerad1234 localhost 0 testing123
```
Here is an example of a successful radius authentication:
```
Sent Access-Request Id 151 from 0.0.0.0:34857 to 127.0.0.1:1812 length 75
    User-Name = "freerad1"
    User-Password = "freerad1234"
    NAS-IP-Address = 172.31.41.98
    NAS-Port = 0
    Message-Authenticator = 0x00
    Cleartext-Password = "freerad1234"
Received Access-Accept Id 151 from 127.0.0.1:1812 to 0.0.0.0:0 length 20
```
We are using the freerad1 username and the freerad1234 password to authenticate the user account.
The testing123 is a default device password included in the clients.conf file.
Now, go to the Linux server included in the clients.conf configuration file as LINUX-01.
Install the freeradius-utils package.
```
# apt-get install freeradius-utils
```
Test your radius authentication remotely on the Linux server (LINUX-01, or whichever client you added in clients.conf) using the following command:
```
# radtest freerad freerad1234 192.168.0.50 0 vegeto123
```
\*\* Congratulations, your FreeRADIUS server is in service \*\*
| 7,732
|
71,397,984
|
I have successfully installed Pillow:
chris@MBPvonChristoph sources % python3 -m pip install --upgrade Pillow
Collecting Pillow
Using cached Pillow-9.0.1-1-cp310-cp310-macosx_11_0_arm64.whl (2.7 MB)
Installing collected packages: Pillow
Successfully installed Pillow-9.0.1
but when I try to use it in PyCharm I get:
Traceback (most recent call last):
File "/Users/chris/PycharmProjects/pythonProject2/main.py", line 1, in
from PIL import Image
ModuleNotFoundError: No module named 'PIL'
or using it in Blender I get:
ModuleNotFoundError: No module named 'PIL'
I am not a python lib installing pro... so obviously I made something wrong. But how do I fix that?
Maybe I should mention I am working on an M1 MacBook.
|
2022/03/08
|
[
"https://Stackoverflow.com/questions/71397984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8457280/"
] |
Looks like you may need to repoint your PyCharm to your installed python interpreter.
---
1. Go to the command line and find out the python interpreter path. On Windows you can run `where python` in your command line and it will show you where your python and packages are installed. You could also activate python directly in the command line and find the paths from there. For example, open the command line, then:
```
python
```
press enter to activate python;
within it you can do:
```
import sys
```
```
for x in sys.path: print(x)
```
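Alternatively (a common shortcut, not part of the original steps), `sys.executable` prints the interpreter's path directly:
```
import sys
print(sys.executable)  # full path of the running interpreter
```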
---
2. In pycharm make sure you point to path discovered from step 1 and select that to be your python interpreter within pycharm --- check out examples here <https://www.jetbrains.com/help/pycharm/configuring-python-interpreter.html#add-existing-interpreter>
---
3. Should work. Not sure about all the steps you took, but if you installed python with pycharm on top of your regular installation of python, I would recommend:
* finding all the paths from step 1
* deleting python using system
* checking if folders found from paths step still exist
* if they do, delete those as well
* start over just with one python installation
* repoint to that in pycharm
|
First:
```
pip uninstall PIL
```
After uninstalling:
```
python3 -m pip install --upgrade pip
python3 -m pip install --upgrade Pillow
```
or
```
brew install Pillow
```
| 7,733
|
61,941,471
|
I'm still new to the field of deep learning using keras and wanted to ask if it is possible to take a model after training the network and run the network again with the variables of that model.
In other words, say I train the network and reach an accuracy of 90%; can I train the network again starting with the variable values reached before, in order to further improve the accuracy? And if so, how can I do so in python?
I would appreciate it if you could explain this to me or provide a resource for me to learn about something like this.
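For reference, in Keras this amounts to saving the trained model and reloading it before calling `fit` again (a minimal sketch assuming the standard `tensorflow.keras` API; `model`, `x_train`, and `y_train` are illustrative names):
```
from tensorflow import keras

# ... build and train the model, then save weights + architecture:
model.fit(x_train, y_train, epochs=10)
model.save("my_model.h5")

# Later: reload and keep training from the saved weights.
model = keras.models.load_model("my_model.h5")
model.fit(x_train, y_train, epochs=10)  # resumes from the previous state
```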
|
2020/05/21
|
[
"https://Stackoverflow.com/questions/61941471",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12139083/"
] |
For me I just had to add `./mongo_data` to `.dockerignore` and the Dockerfile would ignore this directory while building `api`.
I did this for all my volumes and things work fine now, no permission errors.
|
I had the same problem.
I assume that you didn't have this permission error the first time you ran docker-compose up.
The file "mongo_data/diagnostic.data" did not exist either. The mongo container creates the file when starting, and the file belongs to a user; it's 999:root in my case (postgres).
```
drwx------ 19 999 root 4096 Jun 4 12:19 postgres-data/
```
Your api container has the current directory as its build context. The user who builds the Docker image is your host user, which is not 999. So when you docker-compose up, you get the permission denied.
You can create a .dockerignore file in the context folder, the content is:
```
mongo_data
```
You may have to add the volume folders of your other containers to this file as well.
| 7,734
|
32,925,532
|
I had already looked through this post:
[Python: building new list from existing by dropping every n-th element](https://stackoverflow.com/questions/16406772/python-building-new-list-from-existing-by-dropping-every-n-th-element), but for some reason it does not work for me:
I tried this way:
```
def drop(mylist, n):
    del mylist[0::n]
    print(mylist)
```
This function takes a list and `n`. It then removes every n-th element from the list, using an n-step, and prints the result.
Here is my function call:
```
drop([1,2,3,4],2)
```
Wrong output:
`[2, 4]` instead of `[1, 3]`
---
Then I tried a variant from the link above:
```
def drop(mylist, n):
    new_list = [item for index, item in enumerate(mylist) if index % n != 0]
    print(new_list)
```
Again, function call:
```
drop([1,2,3,4],2)
```
Gives me the same wrong result:
`[2, 4]` instead of `[1, 3]`
---
How to correctly remove/delete/drop every **n-th** item from a list?
|
2015/10/03
|
[
"https://Stackoverflow.com/questions/32925532",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4786305/"
] |
The output is correct: you are removing the elements with index 0, n, 2n, .... So 1 and 3 are removed, and 2 and 4 are left. If you want to print the 0, n, 2n, ... elements instead, just write
```
print(mylist[::n])
```
|
Your first approach looks good to me - you just have to adapt your start index if you want to drop the elements 1, 1+n, 1+2n, ... (as seems to be the case):
```
lst = list(range(1, 5))
del lst[1::2]
print(lst)
```
| 7,735
|
67,972,487
|
I want to get a script to:
a) check if a number within the user defined range is prime or not
b) print the result of a check
c) print amount of numbers checked and how many of these numbers were primes
d) print the last prime number
Here is what I have so far:
```
lower = int(input("Lower boundry: "))
upper = int(input("Upper boundry: "))

for n in range(lower, upper + 1):
    if (n <= 1):
        print(n, "error")
    elif (n > 1):
        for x in range(2, n):
            if ((n % x) == 0):
                print(n, "not prime, because", x, "*", int(n/x),"=", n)
                break
            else:
                print(n, "prime")
                last = [n]

print("Checked", n, "numbers, of which","XXX","were prime.")
print("Last prime number found was", *last)
```
Question:
1. The tool I'm using gives me an error. Let's say I check the numbers 1-10. On 1 I get a notification that 1 is not prime (as intended, as n <= 1). Number 2 is a prime and the programme notifies me of this. However, on 3 I get an error - "incorrect output: your program printed "prime3", but should have printed "prime."". I don't get it. Is it the white space python seems to like so much? Or did I misuse else/elif?
2. For checking the number of primes within the specified range - am I supposed to use a list to store the values generated in the for loop? The material will cover lists during later sessions, so I am guessing I am meant to use some other method. But let's say I want to use the list. Is it something like:
```
list=[]
for y in [1,10] :
    for z in [0,1,2,3] :
        x=y+z
        list.append(x)
```
and then use len(list) where the XXX placeholder is ATM?
Edit: Got c) and d) to work - used a list as storage for the values. Printed the last value in the list for d) and used the length of the list for c). Fixed the indentation issue pointed out to me in the comments below. Still cannot get the code to run properly. If I input 1 and 10 as bounds, the programme identifies 1 as not prime, 2 as prime and then gives an error. The error text is: "Incorrect output: your program printed "prime3", but should have printed "prime." " Not really sure what is up with that.
|
2021/06/14
|
[
"https://Stackoverflow.com/questions/67972487",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16223545/"
] |
The indentation of your `else` clause is wrong. It should belong to the `for` loop. This means it will be executed when the loop is not terminated by the `break` statement.
I can't answer your question about the error message, because I don't know the test tool in question.
To keep track of the number of primes found, you can use a simple integer variable: Initialize it as zero and increment it by 1 whenever a prime is found.
```py
lower = int(input("Lower boundry: "))
upper = int(input("Upper boundry: "))

n_primes = 0
for n in range(lower, upper + 1):
    if (n <= 1):
        print(n, "error")
    else:
        for x in range(2, n):
            if ((n % x) == 0):
                print(n, "not prime, because", x, "*", int(n/x),"=", n)
                break
        else:
            print(n, "prime")
            n_primes += 1
            last = n

print("Checked", upper - lower + 1, "numbers, of which", n_primes, "were prime.")
print("Last prime number found was", last)
```
|
How about this:
```
# define a flag variable
lower = int(input("Lower boundry: "))
upper = int(input("Upper boundry: "))

prime_list = []
for num in range(lower, upper+1):
    flag = False
    # prime numbers are greater than 1
    if num > 1:
        # check for factors
        for i in range(2, num):
            if (num % i) == 0:
                # if factor is found, set flag to True
                flag = True
                # break out of loop
                break
    # check if flag is True
    if flag:
        flag = False
    else:
        prime_list.append(num)

print("Checked:", len(list(range(lower, upper+1))), "numbers where", len(prime_list), "were prime.")
print("Last prime number checked: ", prime_list[-1])
print("All prime numbers are: ", ','.join(str(n) for n in prime_list))
```
Output
```
Lower boundry: 1
Upper boundry: 30
Checked: 30 numbers where 11 were prime.
Last prime number checked: 29
All prime numbers are: 1,2,3,5,7,11,13,17,19,23,29
```
| 7,744
|
52,767,072
|
I am trying psql and getting an error.
```
$ psql
psql: FATAL: "myilmaz" role not available
```
Then I try
```
$ createdb python_getting_started
createdb: Unable to connect to template1 database: FATAL: "myilmaz" does not have access to the system
```
I ran `export DATABASE_URL=postgres://$(whoami)` as indicated [here](https://devcenter.heroku.com/articles/heroku-postgresql#local-setup) and then started getting this error. I don't know if there was an error before that.
**Note:** I'm using Pardus 17.3, which is based on Debian 9. 'myilmaz' is my username.
**Edit:** I tried adding my user to template1 as the postgres user.
|
2018/10/11
|
[
"https://Stackoverflow.com/questions/52767072",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10398711/"
] |
The part you haven't understood is that every method - in fact, every call to every method - has its own collection of local variables. That means that
* the `winnings` variable declared in `main` is NOT the same variable as the `winnings` variable declared in `determineWinnings`;
* the `bet` variable declared in `main` is NOT the same variable as the `bet` variable declared in `getBet`.
What you need to do is to make sure that the value returned by each *called* method is assigned to the variable that you want to store it in, in the *caller* method. So in `main`, when you call `getBet`, you actually want to write
```
bet = getBet(inScanner, currentPool);
```
so that the value returned from `getBet` gets assigned to the `bet` variable from `main`. Similarly, when you call `determineWinnings`, you need to write
```
winnings = determineWinnings(highLow, bet, roll);
```
so that the value returned from `determineWinnings` gets assigned to the `winnings` variable from `main`.
If you don't do that, then all the variables in `main` just keep their original values, which are `100` for `currentPool` and `32` for winnings (because `' '` is just another way to write `32`). That's why your final value turns out to be `132`.
|
To avoid skipping the if during execution, use
`bet = getBet(inScanner, currentPool);
highLow = getHighLow(inScanner);
winnings = determineWinnings(highLow, bet, roll);`
instead of directly calling
`getBet(inScanner, currentPool);
getHighLow(inScanner);
determineWinnings(highLow, bet, roll);`
Reason for skipping the if statement and the wrong answer:
The getBet(inScanner, currentPool) method returns a value which is not assigned to
int bet = ' ';
Java converts ' ' to an integer, which is 32, so bet is assigned the value 32 every time you call getBet(inScanner, currentPool), even if the entered value is different (which gives the wrong answer).
In the case of getHighLow(inScanner), the method returns a char value, but since it is not assigned to highLow, highLow will always have the value ' '.
Since highLow is not assigned the actual value (H/L/S), the if statement is skipped, as
' ' != (H/L/S), and the statements in else are always executed.
| 7,746
|
74,436,487
|
I am using OR-Tools to solve a problem similar to the Nurse Scheduling problem. The difference in my case is that when I schedule a "Nurse" for a shift, they must then work consecutive days (i.e., there can be no gaps between days worked).
Most of the similar questions point to this [code](https://github.com/google/or-tools/blob/master/examples/python/shift_scheduling_sat.py). I have attempted to implement the answer adapted from there. However, I am getting output solutions which do not respect the constraints.
The logic I was trying to follow is that I want to forbid patterns that have gaps. For example:
```
[1,0,1]
[1,0,0,1]
[1,0,0,0,1]
```
Below is an example of my code:
```
# Modified from the code linked above:
def negated_bounded_span(works, start, length):
    sequence = []
    # Left border
    sequence.append(works[start].Not())
    # Middle
    for i in range(1, length+1):
        sequence.append(works[start + i])
    # Right border
    sequence.append(works[start + length + 1].Not())
    return sequence

for n in range(num_nurses):
    # nurse_days[(n,d)] is 1 if nurse n works on day d
    nrses = [nurse_days[(n, d)] for d in range(5)]
    for length in range(1, 4):
        for start in range(5 - length - 1):
            model.AddBoolOr(negated_bounded_span(nrses, start, length))
```
A modified excerpt of what the output of the above would look like is the following:
```
['Not(nurse_days_n0d0)', nurse_days_n0d1(0..1), 'Not(nurse_days_n0d2)']
['Not(nurse_days_n0d1)', nurse_days_n0d2(0..1), 'Not(nurse_days_n0d3)']
['Not(nurse_days_n0d2)', nurse_days_n0d3(0..1), 'Not(nurse_days_n0d4)']
['Not(nurse_days_n0d0)', nurse_days_n0d1(0..1), nurse_days_n0d2(0..1), 'Not(nurse_days_n0d3)']
['Not(nurse_days_n0d1)', nurse_days_n0d2(0..1), nurse_days_n0d3(0..1), 'Not(nurse_days_n0d4)']
['Not(nurse_days_n0d0)', nurse_days_n0d1(0..1), nurse_days_n0d2(0..1), nurse_days_n0d3(0..1), 'Not(nurse_days_n0d4)']
```
Thanks for your help in advance.
Similar questions reviewed: [[1](https://stackoverflow.com/questions/56872713/setting-binary-constraints-with-google-or-tools/56877058#56877058)], [[2](https://stackoverflow.com/questions/62980487/or-tools-add-2-days-in-a-row-to-the-scheduling-problem?noredirect=1&lq=1)], [[3](https://stackoverflow.com/questions/55100229/google-or-tools-employee-scheduling-the-condition-does-not-work-properly?rq=1)].
|
2022/11/14
|
[
"https://Stackoverflow.com/questions/74436487",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18497033/"
] |
[Select-String](https://learn.microsoft.com/en-gb/powershell/module/microsoft.powershell.utility/select-string): *By default, the output is a set of `MatchInfo` objects with one for each match found.*
Apply the `.substring` method to the `$_.Line` string, e.g. as follows:
```
Select-String -Path "DatabaseBackup - USER_DATABASES - FULL*.txt" -pattern "MB/sec" |
ForEach-Object { $_.Line.substring(42,30) }
```
|
[`Select-String`](https://learn.microsoft.com/en-us/dotnet/api/microsoft.powershell.commands.matchinfo?view=powershellsdk-7.2.0) is returning a `MatchInfo` object so you would need to iterate over the `Matches` property of that object.
So you could do something like this:
```
(Select-String -Path "DatabaseBackup - USER_DATABASES - FULL*.txt" -pattern "MB/sec").Matches |
ForEach-Object { $_.substring(42,30) }
```
| 7,747
|
68,769,200
|
By no means am I an expert when it comes to python, but I have tried my best. I have written a short script in python that reads a text file from blob storage, appends some column names, and outputs it to a target folder. The code executes correctly when I run it from VS Code.
When I try to run it via an Azure Function, it throws an error.
Could it be that I have wrong parameters in my function.json binding file?
```
{
    "statusCode": 500,
    "message": "Internal server error"
}
```
This is my original code:
```
import logging
import azure.functions as func
from azure.storage.blob import BlobClient, BlobServiceClient

def main():
    logging.info(' start Python Main function processed a request.')
    #CONNECTION STRING
    blob_service_client = BlobServiceClient.from_connection_string(CONN_STR)
    # MAP SOURCE FILE
    blob_client = blob_service_client.get_blob_client(container="source", blob="source.txt")
    #SOURCE CONTENTS
    content = blob_client.download_blob().content_as_text()
    # WRITE HEADER TO AN OUTPUT FILE
    output_file_dest = blob_service_client.get_blob_client(container="target", blob="target.csv")
    #INITIALIZE OUTPUT
    output_str = ""
    #STORE COLUMN HEADERS
    data = list()
    data.append(list(["column1", "column2", "column3", "column4"]))
    output_str += ('"' + '","'.join(data[0]) + '"\n')
    output_file_dest.upload_blob(output_str, overwrite=True)
    logging.info(' END OF FILE UPLOAD')

if __name__ == "__main__":
    main()
```
**This is my function.json**
```
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "type": "blob",
      "direction": "in",
      "name": "blob_client",
      "path": "source/source.txt",
      "connection": "STORAGE_CONN_STR"
    },
    {
      "type": "blob",
      "direction": "out",
      "name": "output_file_dest",
      "path": "target/target.csv",
      "connection": "STORAGE_CONN_STR"
    }
  ]
}
```
**This is my requirements.txt**
```
azure-functions==1.7.2
azure-storage-blob==12.8.1
```
|
2021/08/13
|
[
"https://Stackoverflow.com/questions/68769200",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4511169/"
] |
You have to reverse the order of your aggregation operations, since your `$project` only leaves the projected properties, so there is no way to filter by `year` afterwards:
```
db.collection.aggregate([
  {
    "$match": {
      "year": {
        "$gt": 1967,
        "$lt": 2020
      }
    }
  },
  {
    "$project": {
      "title": 1,
      "year": 1
    }
  }
])
```
[Mongo Playground](https://mongoplayground.net/p/2VffYA8gfha)
|
```
db.movies.find({year: {$gte:1967, $lte:1995} },{title:1,year:1,"director.last_name":1 }).sort({year:1})
```
| 7,748
|
4,834,036
|
I need to have an array of python objects to be used in creating a trie data structure. I need a structure that will be fixed-length like a tuple and mutable like a list. I don't want to use a list because I want to be able to ensure that the list is *exactly* the right size (if it starts allocating extra elements, the memory overhead could add up very quickly as the trie grows larger). Is there a way to do this? I tried creating an array of objects:
```
cdef class TrieNode:
    cdef object members[32]
```
...but that gave an error:
```
Error compiling Cython file:
------------------------------------------------------------
...
cdef class TrieNode:
    cdef object members[32]
                       ^
------------------------------------------------------------
/Users/jason/src/pysistence/source/pysistence/trie.pyx:2:23: Array element cannot be a Python object
```
What is the best way to do what I'm trying to do?
|
2011/01/28
|
[
"https://Stackoverflow.com/questions/4834036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
] |
If you only need a few fixed sizes of such a structure, I'd look at making classes with uniformly named `__slots__`, including one `size` slot to store the size. You'll need to declare a separate class for each size (number of slots). Define a `cdef` function to access slots by index. Access performance probably won't be as great as with plain address arithmetic of a C array, but you'll be sure that there are only so many slots and no more.
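A rough Python-level sketch of the idea (illustrative names; in Cython the accessors would be `cdef` functions):
```
class TrieNode4(object):
    # one uniformly named slot per member, plus a 'size' slot
    __slots__ = ('size', 'm0', 'm1', 'm2', 'm3')

    def __init__(self):
        self.size = 4
        for i in range(self.size):
            setattr(self, 'm%d' % i, None)

    def get(self, i):
        return getattr(self, 'm%d' % i)

    def set(self, i, value):
        setattr(self, 'm%d' % i, value)
```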
|
How about this?
```
class TrieNode():
    def __init__(self, length = 32):
        self.members = list()
        self.length = length
        for i in range(length):
            self.members.append(None)

    def set(self, idx, item):
        if idx < self.length and idx >= 0:
            self.members[idx] = item
        else:
            print "ERROR: Specified index out of range."
            # Alternately, you could raise an IndexError.

    def unset(self, idx):
        if idx < self.length and idx >= 0:
            self.members[idx] = None
        else:
            raise IndexError("Specified index out of range (0..%d)." % self.length)
```
| 7,749
|
49,868,348
|
I'm trying to install sklearn on an AWS Deep Learning AMI, with Conda and an assortment of backends pre-installed. I'm following scikit-learn's website instructions:
```
$ conda install -c anaconda scikit-learn
$ source activate python3
$ jupyter notebook
```
In Jupyter notebook:
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#Scaling the data
from sklearn.preprocessing import MinMaxScaler
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-884b8a303194> in <module>()
12
13 #Scaling the data
---> 14 from sklearn.preprocessing import MinMaxScaler
15 sc = MinMaxScaler() #scaling using normalisation
16 training_set1 = sc.fit_transform(training_set1)
ModuleNotFoundError: No module named 'sklearn'
```
|
2018/04/17
|
[
"https://Stackoverflow.com/questions/49868348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8268013/"
] |
You need to start the virtual environment **first** "source activate python3", then install scikit-learn. Without activating the virtual environment, you are installing into the base python and not into the virtual environment.
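I.e., reversing the first two commands from the question:
```
$ source activate python3
$ conda install -c anaconda scikit-learn
$ jupyter notebook
```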
Cheers
|
In case anybody doesn't know how to install packages in a specific conda environment (in this case my environment of choice was `Tensorflow` in *Python 3.6*), here is the command I used from my *mac bash* in my EC2 environment:
```
ubuntu@ip ***.***.**.***:~$ source activate tensorflow_p36
```
and then:
```
ubuntu@ip ***.***.**.***:~$ conda install -c anaconda scikit-learn
```
| 7,752
|
57,277,513
|
I'm generating the pictures of my latex document with the following program `genimg.py`:
```
#!/usr/bin/env python3
def figCircle():
    print('generate circle picture')

def figSquare():
    print('generate square picture')

def main():
    for key, value in globals().items():
        if callable(value) and key.startswith('fig'):
            value()

if __name__== "__main__":
    main()
```
Now, since the images are becoming more and more numerous, I would like to split `genimg.py` into more files, let's say `genimgFirst.py` and `genimgSecond.py`, and `genimg.py` should go through the modules starting with `genimg` and execute all functions starting with `fig`. Let's assume that I add `import genimgFirst` and `import genimgSecond` at the beginning of my new modified `genimg.py`.
Is this possible (without a revolution, that is just copy and paste of my fig\* functions)?
**EDIT**
Here the code after a merge of the two answers of @bluenote and @inspectorG4dget:
```
#!/usr/bin/env python3
import glob
def main():
    for fname in glob.glob("genimg*.py"): # get all the relevant files
        if fname == "genimg.py":
            continue
        print('>>>> {}'.format(fname))
        mod = fname.rsplit('.',1)[0]
        mod = __import__(mod) # import the module
        for k,v in mod.__dict__.items():
            if callable(v) and k.startswith('fig'): v() # if a module attribute is a function, call it

if __name__== "__main__":
    main()
```
|
2019/07/30
|
[
"https://Stackoverflow.com/questions/57277513",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11811021/"
] |
You could call your code recursively for imported modules. Something like
```
import types
def call_functions(module_dict):
    for key, value in module_dict.items():
        if callable(value) and key.startswith('fig'):
            value()
        elif isinstance(value, types.ModuleType):
            call_functions(value.__dict__)

if __name__=='__main__':
    call_functions(globals())
```
|
```
import glob
for fname in glob.glob("genimg*.py"): # get all the relevant files
    mod = fname.rsplit('.',1)[0]
    mod = __import__(mod) # import the module
    for k,v in mod.__dict__.items():
        if callable(v): v() # if a module attribute is a function, call it
```
| 7,755
|
62,670,115
|
I have a text file containing employee details and various other details. The consolidated data is shown below.
```
Data file created on 4 Jun 2020
GROUPCASEINSENSITIVE ON
#KCT-User-Group
GROUP KCT ALopp190 e190 ARaga789 Lshastri921
GROUP KCT DPatel592 ANaidu026 e026 KRam161 e161
#KBN-User-Group
GROUP KBN SPatil322 e322 LAgarwal908 AKeshri132 e132
GROUP KBN BRaju105 e105 LNaik110 PNeema163 e163
#PDA-User-Group
GROUP PDA SRoy977 AAgarwal594 e594 AMath577 e577
GROUP PDA BSharma865 e865 CUmesh195 RRana354
```
When I run the Python code I need the output shown below:
```
ALopp190
ARaga789
Lshastri921
DPatel592
ANaidu026
KRam161
SPatil322
LAgarwal908
AKeshri132
BRaju105
LNaik110
PNeema163
SRoy977
AAgarwal594
AMath577
BSharma865
CUmesh195
RRana354
```
From that text file I need only the above data. This is what I have tried, but it's not working:
```
def user(li):
    n = len(li)
    for j in range(0, n, 2):
        print(li[j])

import os
os.getcwd()
fo = open(r'C:\\Users\\Kiran\\Desktop\\Emplyoees\\User.txt', 'r')
for i in fo.readlines():
    li = list(i.split(" "))
    #print (li)
    li.remove("GROUP")
    li.remove("KCT")
    li.remove("KBN")
    li.remove("PDA")
    user(li)
```
I am new to python and not sure how to extract the data. Can you please assist me in fixing this issue?
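For reference, one possible sketch (an illustration, assuming the `e<number>` tokens should be dropped and only lines starting with `GROUP` matter; the filename is illustrative):
```
users = []
with open('User.txt') as fo:
    for line in fo:
        parts = line.split()
        if parts[:1] == ['GROUP']:
            # skip 'GROUP' and the group name, drop e-codes like 'e190'
            for tok in parts[2:]:
                if not (tok.startswith('e') and tok[1:].isdigit()):
                    users.append(tok)

print('\n'.join(users))
```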
|
2020/07/01
|
[
"https://Stackoverflow.com/questions/62670115",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13761107/"
] |
As others have mentioned, you can source your `.vimrc`, but that doesn't completely reset Vim. If you want to just restart Vim, then you can do so by re-execing it.
Vim doesn't provide a built-in way to exec processes from within it, since typically one doesn't want to replace one's editor with another process, but it is possible to do so with Perl or Ruby, as you see fit:
```
:perl exec "vim"
```
or
```
:ruby exec "vim"
```
This may or may not work on Windows, and it of course requires that your Vim version have been compiled with support for the appropriate interpreters. Debian provides both in the `vim-nox`, `vim-gtk3`, and `vim-athena` packages, but not in `vim` or `vim-tiny`; Ubuntu, last I checked, did not provide Ruby support but did include Perl.
If you want to re-exec with the same buffer, you can use one of these:
```
:perl exec "vim", $curbuf->Name();
```
or
```
:ruby exec "vim", Vim::Buffer.current.name
```
Note that re-execing may cause your screen to be slightly messed up when exiting, so you may need to use `reset` to set it back to normal.
|
I don't know if you have tried this, but you can source your vimrc file from vim itself by typing
> :so $MYVIMRC
| 7,756
|
28,769,698
|
I am learning python.
I have a file like this
```
str1 str2 str3 str4
str1 str2 str7 str8
***
str9 str10 str12 str13
str9 str10 str16 str17
****
str 18 str19 str20 str21
***
```
and so on.
I want to change it to this format->
```
str1
str2 str3 str4
str2 str7 str8
str9
str10 str12 str13
str10 str16 str17
str 18
str19 str20 str21
```
So if the first 2 words are common between 2 lines, arrange the lines together and move the first word to a separate line.
This should be doable recursively, but I can't seem to figure it out.
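For reference, a rough sketch of one approach (an illustration; assumes groups are separated by the `***` lines, that grouping on the first word is enough, and treats `str 18` exactly as it appears in the input; the filename is illustrative):
```
from itertools import groupby

with open('input.txt') as f:
    # keep data lines, drop the '***' separator lines
    rows = [line.split() for line in f
            if line.strip() and set(line.strip()) != {'*'}]

for first, group in groupby(rows, key=lambda words: words[0]):
    print(first)
    for words in group:
        print(' '.join(words[1:]))
```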
|
2015/02/27
|
[
"https://Stackoverflow.com/questions/28769698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/612258/"
] |
A class diagram shows classes in their relation and their properties and methods.
A state diagram visualizes a class's states and how they can change over time.
In both cases you are talking about diagrams, which are only a window into the model. The class relations define how the single classes relate to each other. A state machine can be defined for each class to show its states. In embedded systems you use state machines almost all the time, but there are also state machines for business applications ("you can do this if that").
|
A state diagram shows the behavior of the class. A class model shows the relationship between two or more classes. It includes its properties/attributes...
A state is an allowable sequence of changes of objects of a class model.
| 7,758
|
69,325,145
|
Suppose I have a string of A's and B's. I can either remove a character from either end of the string for a cost of 1, or I can remove any character from the middle for a cost of 2. What's the minimum cost to create a string of only A's?
For example, if I have the string "BBABAA", then the minimum cost to remove is 4, since I would remove the B's from the left for a cost of 2, then remove the singular B from the right for a cost of 2, resulting in a combined cost of 4.
I attempted a top-down solution with caching below:
```python
def MRC_helper(s, count, cache):
    if count == 0:
        cache[s] = 0
        cache[s[::-1]] = 0
        return 0
    if s in cache:
        return cache[s]
    min_cost = len(s)
    for i in range(len(s)):
        new_count = count - int(s[i] == 'B')
        new_s = s[:i] + s[i+1:]
        if i == 0 or i == len(s)-1:
            min_cost = min(min_cost, 1 + MRC_helper(new_s, new_count, cache))
        elif s[i] == 'B':
            min_cost = min(min_cost, 2 + MRC_helper(new_s, new_count, cache))
    cache[s] = min_cost
    cache[s[::-1]] = min_cost
    return min_cost

def minRemovalCost(s):
    min_cost = MRC_helper(s, s.count('B'), {})
    return min_cost
```
The idea is pretty simple: for each character that you can remove, try to remove it and calculate the cost to transform the resulting substring. Then calculate the minimum cost move and cache it. I also cache the string forwards and backwards since it's the same thing. My friend said that it could be done greedily, but I disagree. Can anybody shed some light on a better solution than mine?
|
2021/09/25
|
[
"https://Stackoverflow.com/questions/69325145",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11228246/"
] |
All `B`s must be removed. If we remove one `B` from the middle, we split the string into two sections. For the section on the right, none of the characters could be deleted from the left since otherwise we would have deleted the `B` from the left at a lower cost for that deletion. Mirror for the section on the left. A section that cannot have deletions from one side can be solved by iterating from the side that allows deletion and at each point comparing the costs of that deletion vs a middle deletion. Precalculate the latter and meet in the optimal middle.
```
Example 1:
BBABAA
x
cost_l: --> 122444
cost_r: 642200 <--
x
optimal: 22 = 4
22 = 4
40 = 4
40 = 4
4 = 4
Example 2:
ABBA
x
cost_l: --> 0233
cost_r: 3320 <--
x
optimal: 3 = 3
30 = 3
03 = 3
3 = 3
```
In case it's unclear, the one-sided cost calculation is:
```
cost_l(0) ->
    0 if str[0] == 'A' else 1

cost_l(i) ->
    if str[i] == 'B':
        min(2 + cost_l(i-1), i + 1)
    else:
        cost_l(i - 1)
```
(Mirror for `cost_r`.)
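A direct Python translation of the above (a sketch; function and variable names are mine):
```
def min_removal_cost(s):
    n = len(s)
    # cost_l[i]: min cost to clear all B's in s[:i+1] using left/middle deletions
    cost_l = [0] * n
    cost_l[0] = 0 if s[0] == 'A' else 1
    for i in range(1, n):
        if s[i] == 'B':
            cost_l[i] = min(2 + cost_l[i - 1], i + 1)
        else:
            cost_l[i] = cost_l[i - 1]
    # cost_r[i]: mirror image, clearing s[i:] using right/middle deletions
    cost_r = [0] * n
    cost_r[n - 1] = 0 if s[n - 1] == 'A' else 1
    for i in range(n - 2, -1, -1):
        if s[i] == 'B':
            cost_r[i] = min(2 + cost_r[i + 1], n - i)
        else:
            cost_r[i] = cost_r[i + 1]
    # meet in the optimal middle: left section ends at k, right starts at k+1
    best = min(cost_r[0], cost_l[n - 1])
    for k in range(n - 1):
        best = min(best, cost_l[k] + cost_r[k + 1])
    return best

print(min_removal_cost("BBABAA"))  # 4
print(min_removal_cost("ABBA"))    # 3
```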
|
Let `C` be an array where `C[i]` is the optimal cost for the substring `s[0:i+1]` (right side exclusive).
Consider another game, exactly the same, EXCEPT eating from the right costs 2 instead of 1. Let `D` be the associated cost array of this new game; this cost helps keep track of situations where we restrict ourselves to not eating the last element of a substring.
We compute `C` and `D` in a single-pass loop as follows.
The optimal cost values for the substring consisting of the first character are:
```
C[0] = int(s[0] == 'B')  # 1 if it is 'B', 0 otherwise
D[0] = int(s[0] == 'B')
```
For i+1, if we eat the last character, the cost is
`1 + C[i]` (we must eat the last character if it is 'B').
If we don't eat the last character, the cost equals
`D[i]`, since we can't eat from the right.
So
```
C[i+1] = C[i]+1 if s[i+1]=='B' else min(C[i]+1, D[i])
```
and similarly for the second game: if we eat the last character, then either we pay 2 for eating it from the right, or we eat it and everything to the left of it from the left.
So
```
D[i+1] = min(D[i]+2, i+2) if s[i+1]=='B' else D[i]
```
So we iterate over i and output `C[len(s)-1]`. This is O(len(s)) in time and memory, although if we only keep the costs from the previous iteration it becomes constant in memory (not recommended, since we can no longer recover the optimal policy by backtracking).
| 7,760
|
61,190,064
|
I am trying to create a docker image with opencv in order to display a video. I have the following Dockerfile:
```
FROM python:3
ADD testDocker_1.py /
ADD video1.mp4 /
RUN pip install opencv-python
CMD [ "python", "./testDocker_1.py" ]
```
And the following python script:
```
import cv2
import os

if __name__ == '__main__':
    file_path = './video1.mp4'
    cap = cv2.VideoCapture(file_path)
    ret, frame = cap.read()
    while ret:
        ret, frame = cap.read()
        if ret:
            cv2.imshow('Frame Docker', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
```
So first I build the image:
```
$ sudo docker build -t test1 .
```
And the problem comes when I run the container:
```
$ sudo docker run -ti --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix test1
No protocol specified
: cannot connect to X server :1
```
Regards.
|
2020/04/13
|
[
"https://Stackoverflow.com/questions/61190064",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11266804/"
] |
Try this
```
xhost +
sudo docker run -ti --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix test1
```
Although it solves this particular use case, you need to make a note of the following:
>
> **Basically, the xhost + allows everybody to use your host x server;**
>
>
>
[Reference](https://marcosnietoblog.wordpress.com/2017/04/30/docker-image-with-opencv-with-x11-forwarding-for-gui/)
A better and recommended solution is present [here](https://stackoverflow.com/questions/16296753/can-you-run-gui-applications-in-a-docker-container/)
|
Kapil Khandelwal, your solution works for me, but only when using the docker image on Ubuntu; when I try to share it with Windows it does not work.
| 7,762
|
18,744,584
|
Right, I am trying to set up a Django dev site based on a current live site. I've set up the new virtualenv and installed all of the dependencies. I've also made a copy of the database and done a fresh DB dump. I'm now getting the error above and I have no idea why.
My django.wsgi file seems to be pointing at the virtualenv. Here is a little more info on the error.
```
Request Method: GET
Request URL: http://website.co.uk/
Django Version: 1.3
Exception Type: AttributeError
Exception Value:
'NoneType' object has no attribute 'endswith'
Exception Location: /usr/lib/python2.6/posixpath.py in join, line 67
Python Executable: /usr/bin/python
Python Version: 2.6.5
Python Path:
['/var/www/website.co.uk/website/',
'/var/www/website.co.uk/website/apps',
'/var/www/website.co.uk/website/apps',
'/var/www/website.co.uk',
'/var/www/website.co.uk/lib/python2.6/site-packages/distribute-0.6.10-py2.6.egg',
'/var/www/website.co.uk/lib/python2.6/site-packages/pip-1.4.1-py2.6.egg',
'/usr/lib/python2.6',
'/usr/lib/python2.6/plat-linux2',
'/usr/lib/python2.6/lib-tk',
'/usr/lib/python2.6/lib-old',
'/usr/lib/python2.6/lib-dynload',
'/usr/lib/python2.6/dist-packages',
'/usr/lib/python2.6/dist-packages/PIL',
'/usr/lib/pymodules/python2.6',
'/usr/local/lib/python2.6/dist-packages',
'/var/www/website.co.uk/lib/python2.6/site-packages',
'/var/www/website.co.uk/lib/python2.6/site-packages/PIL']
FULL TRACEBACK
Environment:
Request Method: GET
Request URL: http://website.co.uk/
Django Version: 1.3
Python Version: 2.6.5
Installed Applications:
['django.contrib.sites',
'satchmo_store.shop',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.comments',
'django.contrib.sessions',
'django.contrib.sitemaps',
'registration',
'sorl.thumbnail',
'keyedcache',
'livesettings',
'l10n',
'tax',
'tax.modules.no',
'tax.modules.area',
'tax.modules.percent',
'shipping',
'product',
'payment',
'satchmo_ext.satchmo_toolbar',
'satchmo_utils',
'app_plugins',
'authority',
'foodies',
'localsite',
'django_extensions',
'south',
'django_wysiwyg',
'mptt',
'tinymce',
'tagging',
'pages',
'filebrowser',
'html5lib']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.middleware.doc.XViewMiddleware',
'django.middleware.locale.LocaleMiddleware')
Traceback:
File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py" in get_response
101. request.path_info)
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in resolve
250. for pattern in self.url_patterns:
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in _get_url_patterns
279. patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in _get_urlconf_module
274. self._urlconf_module = import_module(self.urlconf_name)
File "/usr/local/lib/python2.6/dist-packages/django/utils/importlib.py" in import_module
35. __import__(name)
File "/var/www/website.co.uk/website_app/urls.py" in <module>
4. from satchmo_store.urls import urlpatterns
File "/var/www/website.co.uk/lib/python2.6/site-packages/satchmo_store/urls/__init__.py" in <module>
28. from default import urlpatterns as defaultpatterns
File "/var/www/website.co.uk/lib/python2.6/site-packages/satchmo_store/urls/default.py" in <module>
9. admin.autodiscover()
File "/usr/local/lib/python2.6/dist-packages/django/contrib/admin/__init__.py" in autodiscover
26. import_module('%s.admin' % app)
File "/usr/local/lib/python2.6/dist-packages/django/utils/importlib.py" in import_module
35. __import__(name)
File "/var/www/website.co.uk/website_app/foodies/admin.py" in <module>
2. from foodies.models import *
File "/var/www/website.co.uk/website_app/foodies/models.py" in <module>
24. from pages.admin import *
File "/var/www/website.co.uk/lib/python2.6/site-packages/pages/admin/__init__.py" in <module>
3. from pages import settings
File "/var/www/website.co.uk/lib/python2.6/site-packages/pages/settings.py" in <module>
144. join(_media_url, 'pages/'))
File "/usr/lib/python2.6/posixpath.py" in join
67. elif path == '' or path.endswith('/'):
Exception Type: AttributeError at /
Exception Value: 'NoneType' object has no attribute 'endswith'
```
Code from pages/settings.py
```
# -*- coding: utf-8 -*-
"""Convenience module that provides default settings for the ``pages``
application when the project ``settings`` module does not contain
the appropriate settings."""
from os.path import join
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
# The path to default template
DEFAULT_PAGE_TEMPLATE = getattr(settings, 'DEFAULT_PAGE_TEMPLATE', None)
if DEFAULT_PAGE_TEMPLATE is None:
    raise ImproperlyConfigured('Please make sure you specified a '
                               'DEFAULT_PAGE_TEMPLATE setting.')
# PAGE_TEMPLATES is a list of tuples that specifies the which templates
# are available in the ``pages`` admin. Templates should be assigned in
# the following format:
#
# PAGE_TEMPLATES = (
# ('pages/nice.html', 'nice one'),
# ('pages/cool.html', 'cool one'),
# )
#
# One can also assign a callable (which should return the tuple) to this
# variable to achieve dynamic template list e.g.:
#
# def _get_templates():
# ...
#
# PAGE_TEMPLATES = _get_templates
PAGE_TEMPLATES = getattr(settings, 'PAGE_TEMPLATES', None)
if (PAGE_TEMPLATES is None and
        not (isinstance(PAGE_TEMPLATES, str) or
             isinstance(PAGE_TEMPLATES, unicode))):
    PAGE_TEMPLATES = ()
# The callable that is used by the CMS
def get_page_templates():
    if callable(PAGE_TEMPLATES):
        return PAGE_TEMPLATES()
    else:
        return PAGE_TEMPLATES
# Set ``PAGE_TAGGING`` to ``False`` if you do not wish to use the
# ``django-tagging`` application.
PAGE_TAGGING = getattr(settings, 'PAGE_TAGGING', False)
if PAGE_TAGGING and "tagging" not in getattr(settings, 'INSTALLED_APPS', []):
    raise ImproperlyConfigured('django-tagging could not be found.\n'
                               'Please make sure you\'ve installed it '
                               'correctly or disable the tagging feature by '
                               'setting PAGE_TAGGING to False.')
# Set this to ``True`` if you wish to use the ``django-tinymce`` application.
PAGE_TINYMCE = getattr(settings, 'PAGE_TINYMCE', False)
if PAGE_TINYMCE and "tinymce" not in getattr(settings, 'INSTALLED_APPS', []):
    raise ImproperlyConfigured('django-tinymce could not be found.\n'
                               'Please make sure you\'ve installed it '
                               'correctly or disable the tinymce feature by '
                               'setting PAGE_TINYMCE to False.')
# Set ``PAGE_UNIQUE_SLUG_REQUIRED`` to ``True`` to enforce unique slug names
# for all pages.
PAGE_UNIQUE_SLUG_REQUIRED = getattr(settings, 'PAGE_UNIQUE_SLUG_REQUIRED',
                                    False)
# Set ``PAGE_CONTENT_REVISION`` to ``False`` to disable the recording of
# pages revision information in the database
PAGE_CONTENT_REVISION = getattr(settings, 'PAGE_CONTENT_REVISION', True)
# A list tuples that defines the languages that pages can be translated into.
#
# gettext_noop = lambda s: s
#
# PAGE_LANGUAGES = (
# ('zh-cn', gettext_noop('Chinese Simplified')),
# ('fr-ch', gettext_noop('Swiss french')),
# ('en-us', gettext_noop('US English')),
#)
PAGE_LANGUAGES = getattr(settings, 'PAGE_LANGUAGES', settings.LANGUAGES)
# Defines which language should be used by default. If
# ``PAGE_DEFAULT_LANGUAGE`` not specified, then project's
# ``settings.LANGUAGE_CODE`` is used
PAGE_DEFAULT_LANGUAGE = getattr(settings, 'PAGE_DEFAULT_LANGUAGE',
                                settings.LANGUAGE_CODE)
extra = [('can_freeze', 'Can freeze page',)]
for lang in PAGE_LANGUAGES:
    extra.append(
        ('can_manage_' + lang[0].replace('-', '_'),
         'Manage' + ' ' + lang[1])
    )
PAGE_EXTRA_PERMISSIONS = getattr(settings, 'PAGE_EXTRA_PERMISSIONS', extra)
# PAGE_LANGUAGE_MAPPING should be assigned a function that takes a single
# argument, the language code of the incoming browser request. This function
# maps the incoming client language code to another language code, presumably
# one for which you have translation strings. This is most useful if your
# project only has one set of translation strings for a language like Chinese,
# which has several variants like ``zh-cn``, ``zh-tw``, ``zh-hk`, etc., but
# you want to provide your Chinese translations to all Chinese browsers, not
# just those with the exact ``zh-cn``
# locale.
#
# Enable that behavior here by assigning the following function to the
# PAGE_LANGUAGE_MAPPING variable.
#
# def language_mapping(lang):
# if lang.startswith('zh'):
# return 'zh-cn'
# return lang
# PAGE_LANGUAGE_MAPPING = language_mapping
PAGE_LANGUAGE_MAPPING = getattr(settings, 'PAGE_LANGUAGE_MAPPING', lambda l: l)
# Set SITE_ID to the id of the default ``Site`` instance to be used on
# installations where content from a single installation is served on
# multiple domains via the ``django.contrib.sites`` framework.
SITE_ID = getattr(settings, 'SITE_ID', 1)
# Set PAGE_USE_SITE_ID to ``True`` to make use of the ``django.contrib.sites``
# framework
PAGE_USE_SITE_ID = getattr(settings, 'PAGE_USE_SITE_ID', False)
# Set PAGE_USE_LANGUAGE_PREFIX to ``True`` to make the ``get_absolute_url``
# method to prefix the URLs with the language code
PAGE_USE_LANGUAGE_PREFIX = getattr(settings, 'PAGE_USE_LANGUAGE_PREFIX',
                                   False)
# Assign a list of placeholders to PAGE_CONTENT_REVISION_EXCLUDE_LIST
# to exclude them from the revision process.
PAGE_CONTENT_REVISION_EXCLUDE_LIST = getattr(settings,
    'PAGE_CONTENT_REVISION_EXCLUDE_LIST', ()
)
# Set ``PAGE_SANITIZE_USER_INPUT`` to ``True`` to sanitize the user input with
# ``html5lib``
PAGE_SANITIZE_USER_INPUT = getattr(settings, 'PAGE_SANITIZE_USER_INPUT', False)
# URL that handles pages media and uses <MEDIA_ROOT>/pages by default.
_media_url = getattr(settings, "STATIC_URL", settings.MEDIA_URL)
PAGES_MEDIA_URL = getattr(settings, 'PAGES_MEDIA_URL',
join(_media_url, 'pages/'))
# Hide the slug's of the first root page ie: ``/home/`` becomes ``/``
PAGE_HIDE_ROOT_SLUG = getattr(settings, 'PAGE_HIDE_ROOT_SLUG', False)
# Show the publication start date field in the admin. Allows for future dating
# Changing the ``PAGE_SHOW_START_DATE`` from ``True`` to ``False``
# after adding data could cause some weirdness. If you must do this, you
# should update your database to correct any future dated pages.
PAGE_SHOW_START_DATE = getattr(settings, 'PAGE_SHOW_START_DATE', False)
# Show the publication end date field in the admin, allows for page expiration
# Changing ``PAGE_SHOW_END_DATE`` from ``True`` to ``False`` after adding
# data could cause some weirdness. If you must do this, you should update
# your database and null any pages with ``publication_end_date`` set.
PAGE_SHOW_END_DATE = getattr(settings, 'PAGE_SHOW_END_DATE', False)
# ``PAGE_CONNECTED_MODELS`` allows you to specify a model and form for this
# model into your settings to get an automatic form to create
# and directly link a new instance of this model with your page in the admin.
#
# Here is an example:
#
# PAGE_CONNECTED_MODELS = [
# {'model':'documents.models.Document',
# 'form':'documents.models.DocumentForm'},
# ]
#
PAGE_CONNECTED_MODELS = getattr(settings, 'PAGE_CONNECTED_MODELS', False)
# The page link filter enable a output filter on you content links. The goal
# is to transform special page class into real links at the last moment.
# This ensure that even if you have moved a page, the URL will remain correct.
PAGE_LINK_FILTER = getattr(settings, 'PAGE_LINK_FILTER', False)
# This setting is a function that can be defined if you need to pass extra
# context data to the pages templates.
PAGE_EXTRA_CONTEXT = getattr(settings, 'PAGE_EXTRA_CONTEXT', None)
# This setting is the name of a sub-folder where uploaded content, like
# placeholder images, is placed.
PAGE_UPLOAD_ROOT = getattr(settings, 'PAGE_UPLOAD_ROOT', 'upload')
```
I hope this is enough to go on, but I'm really stuck as to why this is coming up!
Thanks
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18744584",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1256637/"
] |
In this stanza:
```
# URL that handles pages media and uses <MEDIA_ROOT>/pages by default.
_media_url = getattr(settings, "STATIC_URL", settings.MEDIA_URL)
PAGES_MEDIA_URL = getattr(settings, 'PAGES_MEDIA_URL',
join(_media_url, 'pages/'))
```
`STATIC_URL` is not in settings, and `settings.MEDIA_URL` is `None`. Looks like you need to fix your settings.
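For example, a minimal sketch of the missing settings (the values are just common placeholders; use whatever fits your project):
```
MEDIA_URL = '/media/'
STATIC_URL = '/static/'
```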
|
In this line,
```
File "/usr/lib/python2.6/posixpath.py" in join
67. elif path == '' or path.endswith('/'):
```
`path` is `None`, so a `NoneType` object has no attribute `.endswith`.
I would suggest, like the other users said, posting the full traceback and checking your code to see why `path` is `None`.
| 7,763
|
24,589,241
|
I just wrote a Google Cloud Endpoints API using Python (using the latest Mac OS), and now need to create an Android Client by following this:
<https://developers.google.com/appengine/docs/python/endpoints/endpoints_tool>
In the instructions, it says that there's a file called endpointscfg.py under google\_appengine, but I can't find it.
When I downloaded the GAE SDK for Mac, I only got a DMG containing GoogleAppEngineLauncher.app. I placed it under my Applications folder.
Below are the contents of GoogleAppEngineLauncher.app, clearly with no \*.py files.
```
my-macbook-pro:GoogleAppEngineLauncher.app user1$ ls -al
total 0
drwxr-xr-x@ 3 user1 admin 102 May 30 10:16 .
drwxrwxr-x+ 124 root admin 4216 Jul 5 10:50 ..
drwxr-xr-x@ 8 user1 admin 272 May 30 10:16 Contents
my-macbook-pro:GoogleAppEngineLauncher.app user1$
```
Where do I find endpointscfg.py? Thanks.
Gerard
|
2014/07/05
|
[
"https://Stackoverflow.com/questions/24589241",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3108752/"
] |
After a lot of trial and error I figured out the solution.
All in all you need two different things in your project:
**1) A class that inherits from ProminentProjectAction:**
```
import hudson.model.ProminentProjectAction;

public class MyProjectAction implements ProminentProjectAction {

    @Override
    public String getIconFileName() {
        // return the path to the icon file
        return "/images/jenkins.png";
    }

    @Override
    public String getDisplayName() {
        // return the label for your link
        return "MyActionLink";
    }

    @Override
    public String getUrlName() {
        // defines the suburl, which is appended to ...jenkins/job/jobname
        return "myactionpage";
    }
}
```
**2) Even more important is that you add this action somehow to your project.**
In my case I wanted to show the link if and only if the related build step of my plugin is configured for the actual project. So I took my Builder class and overrode the getProjectActions method.
```
import hudson.model.AbstractProject;
import hudson.model.Action;
import hudson.tasks.Builder;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class MyBuilder extends Builder {
    ...
    @Override
    public Collection<? extends Action> getProjectActions(AbstractProject<?,?> project) {
        List<Action> actions = new ArrayList<>();
        actions.add(new MyProjectAction());
        return actions;
    }
}
```
Maybe this is not the perfect solution yet (because I'm still trying to figure out how all the artifacts work together), but it might give people who want to implement the same thing a good starting point.
The page that is loaded after clicking the link is defined as an index.jelly file under src/main/resources, in a package matching the package and class name of your Action class (e.g. src/main/resources/org/example/myplugin/MyProjectAction).
|
As it happens, there was a [plugin workshop](http://jenkins-ci.org/content/get-drunk-code-juc-boston) by Steven Christou at the recent [Jenkins User Conference](http://www.cloudbees.com/jenkins/juc-2014/boston) in Boston, which covered this case. You need to add a new RootAction, as shown in the following code from the JUC session
```
package org.jenkinsci.plugins.JUCBeer;

import hudson.Extension;
import hudson.model.RootAction;

@Extension
public class JenkinsRootAction implements RootAction {

    public String getIconFileName() {
        return "/images/jenkins.png";
    }

    public String getDisplayName() {
        return "Jenkins home page";
    }

    public String getUrlName() {
        return "http://jenkins-ci.org";
    }
}
```
| 7,764
|
56,674,284
|
I am trying to use beautiful soup to scrape data from [this](https://www.pro-football-reference.com/play-index/play_finder.cgi?request=1&match=summary_all&year_min=2018&year_max=2018&game_type=R&game_num_min=0&game_num_max=99&week_num_min=0&week_num_max=99&quarter%5B%5D=4&minutes_max=15&seconds_max=00&minutes_min=00&seconds_min=00&down%5B%5D=0&down%5B%5D=1&down%5B%5D=2&down%5B%5D=3&down%5B%5D=4&field_pos_min_field=team&field_pos_max_field=team&end_field_pos_min_field=team&end_field_pos_max_field=team&type%5B%5D=PUNT&no_play=N&turnover_type%5B%5D=interception&turnover_type%5B%5D=fumble&score_type%5B%5D=touchdown&score_type%5B%5D=field_goal&score_type%5B%5D=safety&rush_direction%5B%5D=LE&rush_direction%5B%5D=LT&rush_direction%5B%5D=LG&rush_direction%5B%5D=M&rush_direction%5B%5D=RG&rush_direction%5B%5D=RT&rush_direction%5B%5D=RE&pass_location%5B%5D=SL&pass_location%5B%5D=SM&pass_location%5B%5D=SR&pass_location%5B%5D=DL&pass_location%5B%5D=DM&pass_location%5B%5D=DR&order_by=yards) website. If you scroll down to the **Individual Plays** section, click "share and more > get table as csv" a CSV form of the tabulated data will appear. If I inspect this CSV text I see that it's in a `<pre>` tag and has an id of "csv\_all\_plays"
I'm trying to use the python package beautifulsoup to scrape this data. What I'm currently doing is
```py
nfl_url = '...'  # the URL I have linked above
driver = webdriver.Chrome(executable_path=r'C:/path/to/chrome/driver')
driver.get(nfl_url)
html = driver.page_source
soup = BeautifulSoup(html, 'html.parser')
print(soup.find(id="csv_all_plays"))
```
This just results in "None" being printed. I know that because this data isn't displayed when the page is loaded, I can't use the Requests package and have to use something that actually gets the whole page's source (I'm using Selenium). Is that not what I'm doing here? Is there a different reason that I can't get the CSV data?
|
2019/06/19
|
[
"https://Stackoverflow.com/questions/56674284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5657167/"
] |
You can use `selenium` to hover over the "Share & more" link to display the menu, from which you can click the "Get table as csv":
```
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from bs4 import BeautifulSoup as soup
d = webdriver.Chrome('/path/to/chromedriver')
d.get('https://www.pro-football-reference.com/play-index/play_finder.cgi?request=1&match=summary_all&year_min=2018&year_max=2018&game_type=R&game_num_min=0&game_num_max=99&week_num_min=0&week_num_max=99&quarter%5B%5D=4&minutes_max=15&seconds_max=00&minutes_min=00&seconds_min=00&down%5B%5D=0&down%5B%5D=1&down%5B%5D=2&down%5B%5D=3&down%5B%5D=4&field_pos_min_field=team&field_pos_max_field=team&end_field_pos_min_field=team&end_field_pos_max_field=team&type%5B%5D=PUNT&no_play=N&turnover_type%5B%5D=interception&turnover_type%5B%5D=fumble&score_type%5B%5D=touchdown&score_type%5B%5D=field_goal&score_type%5B%5D=safety&rush_direction%5B%5D=LE&rush_direction%5B%5D=LT&rush_direction%5B%5D=LG&rush_direction%5B%5D=M&rush_direction%5B%5D=RG&rush_direction%5B%5D=RT&rush_direction%5B%5D=RE&pass_location%5B%5D=SL&pass_location%5B%5D=SM&pass_location%5B%5D=SR&pass_location%5B%5D=DL&pass_location%5B%5D=DM&pass_location%5B%5D=DR&order_by=yards')
scroll = ActionChains(d).move_to_element(d.find_element_by_id('all_all_plays'))
scroll.perform()
spans = [i for i in d.find_elements_by_tag_name('span') if 'Share & more' in i.text]
hover = ActionChains(d).move_to_element(spans[-1])
hover.perform()
b = [i for i in d.find_elements_by_tag_name('button') if 'get table as csv' in i.text.lower()][0]
b.send_keys('\n')
csv_data = soup(d.page_source, 'html.parser').find('pre', {'id':'csv_all_plays'}).text
```
Output (shortened due to SO's character limit):
```
"\nDate,Tm,Opp,Quarter,Time,Down,ToGo,Location,Score,Detail,Yds,EPB,EPA,Diff,PYds,PRYds\n2018-09-09,Texans,Patriots,4,4:41,4,8,HTX 36,13-27,Trevor Daniel punts 47 yards muffed catch by Riley McCarron recovered by Johnson Bademosi and returned for no gain,0,-0.980,4.510,5.49,47,\n2018-09-09,Jaguars,Giants,4,0:54,4,6,JAX 40,20-15,Logan Cooke punts 41 yards muffed catch by Kaelin Clay recovered by Donald Payne and returned for no gain,0,-0.720,4.170,4.89,41,\n2018-09-09,Chiefs,Chargers,4,10:35,4,6,KAN 27,31-20,Dustin Colquitt punts 59 yards returned by JJ Jones for no gain. JJ Jones fumbles (forced by De'Anthony Thomas) recovered by James Winchester at LAC-2,0,-1.570,6.740,8.31,59,\n2018-09-23,Dolphins,Raiders,4,12:33,4,5,MIA 39,14-17,Matt Haack punts 42 yards muffed catch by Jordy Nelson recovered by Jordy Nelson and returned for no gain,0,-0.780,0.060,.84,42,\n2018-09-30,Jets,Jaguars,4,8:59,4,10,NYJ 14,12-25,Lac Edwards punts 46 yards muffed catch by Jaydon Mickens ball out of bounds at JAX-41,0,-2.470,-1.660,.81,46,\n2018-10-11,Giants,Eagles,4,12:27,4,17,NYG 33,13-34,Riley Dixon punts 50 yards muffed catch by DeAndre Carter recovered by DeAndre Carter and returned for no gain,0,-1.180,-0.040,1.14,50,\n2018-10-28,Jets,Bears,4,5:37,4,13,NYJ 37,10-24,Lac Edwards punts 48 yards muffed catch by Tarik Cohen recovered by Tarik Cohen and returned for no gain,0,-0.910,0.320,1.23,48,\n2018-11-25,Vikings,Packers,4,6:00,4,13,GNB 37,24-14,Matt Wile punts 21 yards muffed catch by Tramon Williams recovered by Marcus Sherels and returned for no gain,0,0.790,4.580,3.79,21,\n2018-12-13,Chiefs,Chargers,4,2:47,4,15,KAN 6,28-21,Dustin Colquitt punts 55 yards muffed catch by Desmond King recovered by Desmond King and returned for no gain,0,-2.490,-1.600,.89,55,
```
---
To write the csv data to a file:
```
import csv
with open('individual_stats.csv', 'w') as f:
    write = csv.writer(f)
    write.writerows([list(filter(None, i.split(','))) for i in filter(None, csv_data.split('\n'))])
```
Output (first 16 rows):
```
Date,Tm,Opp,Quarter,Time,Down,ToGo,Location,Score,Detail,Yds,EPB,EPA,Diff,PYds,PRYds
2018-09-09,Texans,Patriots,4,4:41,4,8,HTX 36,13-27,Trevor Daniel punts 47 yards muffed catch by Riley McCarron recovered by Johnson Bademosi and returned for no gain,0,-0.980,4.510,5.49,47
2018-09-09,Jaguars,Giants,4,0:54,4,6,JAX 40,20-15,Logan Cooke punts 41 yards muffed catch by Kaelin Clay recovered by Donald Payne and returned for no gain,0,-0.720,4.170,4.89,41
2018-09-09,Chiefs,Chargers,4,10:35,4,6,KAN 27,31-20,Dustin Colquitt punts 59 yards returned by JJ Jones for no gain. JJ Jones fumbles (forced by De'Anthony Thomas) recovered by James Winchester at LAC-2,0,-1.570,6.740,8.31,59
2018-09-23,Dolphins,Raiders,4,12:33,4,5,MIA 39,14-17,Matt Haack punts 42 yards muffed catch by Jordy Nelson recovered by Jordy Nelson and returned for no gain,0,-0.780,0.060,.84,42
2018-09-30,Jets,Jaguars,4,8:59,4,10,NYJ 14,12-25,Lac Edwards punts 46 yards muffed catch by Jaydon Mickens ball out of bounds at JAX-41,0,-2.470,-1.660,.81,46
2018-10-11,Giants,Eagles,4,12:27,4,17,NYG 33,13-34,Riley Dixon punts 50 yards muffed catch by DeAndre Carter recovered by DeAndre Carter and returned for no gain,0,-1.180,-0.040,1.14,50
2018-10-28,Jets,Bears,4,5:37,4,13,NYJ 37,10-24,Lac Edwards punts 48 yards muffed catch by Tarik Cohen recovered by Tarik Cohen and returned for no gain,0,-0.910,0.320,1.23,48
2018-11-25,Vikings,Packers,4,6:00,4,13,GNB 37,24-14,Matt Wile punts 21 yards muffed catch by Tramon Williams recovered by Marcus Sherels and returned for no gain,0,0.790,4.580,3.79,21
2018-12-13,Chiefs,Chargers,4,2:47,4,15,KAN 6,28-21,Dustin Colquitt punts 55 yards muffed catch by Desmond King recovered by Desmond King and returned for no gain,0,-2.490,-1.600,.89,55
2018-12-16,Bears,Packers,4,2:51,4,6,CHI 12,24-14,Pat O'Donnell punts 51 yards muffed catch by Josh Jackson recovered by Josh Jackson and returned for no gain,0,-2.490,-1.660,.83,51
2018-12-16,Eagles,Rams,4,3:03,4,12,PHI 15,30-23,Cameron Johnston punts 52 yards returned by Jojo Natson for 3 yards. Jojo Natson fumbles recovered by D.J. Alexander at LAR-36,0,-2.440,3.180,5.62,52,3
2018-12-02,Giants,Bears,4,12:46,4,18,NYG 12,24-14,Riley Dixon punts 53 yards returned by Tarik Cohen for 8 yards (tackle by Rhett Ellison). Tarik Cohen fumbles (forced by Rhett Ellison) recovered by Tarik Cohen at CHI-45. Penalty on Josh Bellamy: Illegal Block Above the Waist 10 yards,-2,-2.490,-0.670,1.82,53,8
2018-11-25,Jaguars,Bills,4,13:33,4,25,JAX 15,14-21,Logan Cooke punts 55 yards returned by Isaiah McKenzie for 9 yards (tackle by Jarrod Wilson). Isaiah McKenzie fumbles (forced by Jarrod Wilson) recovered by Isaiah McKenzie at BUF-43. Penalty on Marcus Murphy: Illegal Block Above the Waist 10 yards,-4,-2.440,-0.670,1.77,55,9
2018-09-06,Eagles,Falcons,4,7:42,4,14,PHI 21,10-12,Cameron Johnston punts 46 yards out of bounds,-1.960,-1.140,.82,46
2018-09-06,Falcons,Eagles,4,5:04,4,14,ATL 29,12-10,Matthew Bosher punts 52 yards returned by Darren Sproles for 12 yards (tackle by Eric Saubert). Penalty on Eric Saubert: Face Mask (15 Yards) 15 yards,-1.440,-1.990,-0.55,52,12
```
|
You could just use pandas
```
import pandas as pd
table = pd.read_html('https://www.pro-football-reference.com/play-index/play_finder.cgi?request=1&match=summary_all&year_min=2018&year_max=2018&game_type=R&game_num_min=0&game_num_max=99&week_num_min=0&week_num_max=99&quarter%5B%5D=4&minutes_max=15&seconds_max=00&minutes_min=00&seconds_min=00&down%5B%5D=0&down%5B%5D=1&down%5B%5D=2&down%5B%5D=3&down%5B%5D=4&field_pos_min_field=team&field_pos_max_field=team&end_field_pos_min_field=team&end_field_pos_max_field=team&type%5B%5D=PUNT&no_play=N&turnover_type%5B%5D=interception&turnover_type%5B%5D=fumble&score_type%5B%5D=touchdown&score_type%5B%5D=field_goal&score_type%5B%5D=safety&rush_direction%5B%5D=LE&rush_direction%5B%5D=LT&rush_direction%5B%5D=LG&rush_direction%5B%5D=M&rush_direction%5B%5D=RG&rush_direction%5B%5D=RT&rush_direction%5B%5D=RE&pass_location%5B%5D=SL&pass_location%5B%5D=SM&pass_location%5B%5D=SR&pass_location%5B%5D=DL&pass_location%5B%5D=DM&pass_location%5B%5D=DR&order_by=yards')[4]
table.to_csv(r'C:\Users\User\Desktop\Data.csv', sep=',', encoding='utf-8-sig',index = False )
```
| 7,772
|
44,299,462
|
I run below commands after unzipping that python 3.6 tar.xz file.
```
./configure
make
make install
```
Error log:
```
ranlib libpython3.6m.a
gcc -pthread -Xlinker -export-dynamic -o python Programs/python.o libpython3.6m.a -lpthread -ldl -lutil -lrt -lm
if test "no-framework" = "no-framework" ; then \
/usr/bin/install -c python /usr/local/bin/python3.6m; \
else \
/usr/bin/install -c -s Mac/pythonw /usr/local/bin/python3.6m; \
fi
/usr/bin/install: cannot create regular file `/usr/local/bin/python3.6m': Read-only file system
make: *** [altbininstall] Error 1
```
When I run ./configure followed by make, and then make install, I run into this error.
|
2017/06/01
|
[
"https://Stackoverflow.com/questions/44299462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4909084/"
] |
Have you tried running the above commands using `sudo` powers?
Original answer: <https://askubuntu.com/q/865554/667903>
`sudo make install`
**or**
If you are using Ubuntu 16.10 or 17.04, then Python 3.6 is in the universe repository, so you can just run
```
sudo apt-get update
sudo apt-get install python3.6
```
|
Try again after installing build-essential, which contains compilers, package dev tools, and libs:
`sudo apt-get install build-essential`
| 7,773
|
52,929,872
|
I am trying to read the text on a cheque using pytesseract OCR. I have installed the Python packages required for this task, e.g. pip install pytesseract.
However, when I try to use the package to read the file, I get the following error:
```
pytesseract.image_to_string(im, lang='eng')
Traceback (most recent call last):
File "<ipython-input-17-d7d9f430493b>", line 1, in <module>
pytesseract.image_to_string(im, lang='eng')
File "C:\Users\BRIGHT\Anaconda3\lib\site-packages\pytesseract\pytesseract.py", line 294, in image_to_string
return run_and_get_output(*args)
File "C:\Users\BRIGHT\Anaconda3\lib\site-packages\pytesseract\pytesseract.py", line 202, in run_and_get_output
run_tesseract(**kwargs)
File "C:\Users\BRIGHT\Anaconda3\lib\site-packages\pytesseract\pytesseract.py", line 172, in run_tesseract
raise TesseractNotFoundError()
TesseractNotFoundError: tesseract is not installed or it's not in your path
```
This error doesn't make sense because I can actually import the package without getting any error. But when I try to use it, I get the error.
Here is my code:
```
from PIL import Image
import pytesseract
im=Image.open('BritishChequeAnnotated.jpg')
text=pytesseract.image_to_string(im, lang='eng')
```
|
2018/10/22
|
[
"https://Stackoverflow.com/questions/52929872",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10122096/"
] |
The documentation for tesseract makes this clear.
<https://pypi.org/project/pytesseract/>
```
# If you don't have tesseract executable in your PATH, include the following:
pytesseract.pytesseract.tesseract_cmd = r'<full_path_to_your_tesseract_executable>'
```
|
You need to install the tesseract executable and include its path in the program; then it won't give any error.
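For example (the path below is only the common Windows default install location; adjust it to wherever your tesseract.exe actually lives):
```
import pytesseract

pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
```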
| 7,775
|
41,419,093
|
I am fairly new to Python and started learning. I am trying to automate data entry. I am stuck at the "save" button. How do I find the right information and click it to save?
Thank you so much
PyGuy
---
Element
```
<input type="submit" value="Save">
```
Xpath
```
//*[@id="decorated-admin-content"]/div/div/form/div[10]/div/input
```
Selector
```
#decorated-admin-content > div > div > form > div.buttons-container > div > input[type="submit"]
```
---
In my Python script, I have entered
```
from selenium import webdriver
from selenium.webdriver.common.by import By
driver.findElement(By.xpath("//input[@type='submit'and @value='save']")).click()
# I also tried below
# driver.findElement(By.xpath("//input[@type='submit'][@value='Save']")).click();
# driver.findElement(By.xpath("//*[@id="decorated-admin-content"]"))
```
|
2017/01/01
|
[
"https://Stackoverflow.com/questions/41419093",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7363397/"
] |
If you're using Python, the syntax is not right. Python uses snake\_case, and `By` uses the CONSTANT convention:
```
from selenium import webdriver
from selenium.webdriver.common.by import By
driver.find_element(By.XPATH, "//input[@type='submit' and @value='save']").click()
```
It's actually suggested to use the individual methods for each By if you don't need to be dynamic:
```
driver.find_element_by_xpath("//input[@type='submit' and @value='save']").click()
```
Or css:
```
driver.find_element_by_css_selector('input[type="submit"]').click()
```
If that doesn't work, can you post the error traceback you are getting?
|
Did you try with a parameter other than XPath?
I also had some difficulties with Selenium; you can try the following line (translated to Python syntax):
```
driver.find_element_by_tag_name("form").submit()
```
It works for me and is useful for validating forms.
| 7,776
|
57,567,892
|
Previously I asked a similar question: [cx\_Freeze unable fo find mkl: MKL FATAL ERROR: Cannot load mkl\_intel\_thread.dll](https://stackoverflow.com/questions/57493584/cx-freeze-unable-fo-find-mkl-mkl-fatal-error-cannot-load-mkl-intel-thread-dll)
But now I have a subtle difference. I want to run the program without installing anaconda, just within a `cmd.exe` terminal, but it seems I am doing something wrong or that it is not possible.
After generating my application with `python setup.py bdist_msi` using `cx-freeze`, I am able to install and then run it within an anaconda environment, but if I simply open a `cmd.exe` terminal and run it, I get
```
INTEL MKL ERROR: The specified module could not be found. mkl_intel_thread.dll.
Intel MKL FATAL ERROR: Cannot load mkl_intel_thread.dll.
```
However, when running
```
where mkl_intel_thread.dll
```
the dll is found, so I think this means it's registered in the system (I am more used to Linux, so maybe I am wrong).
How could I solve this problem?
|
2019/08/20
|
[
"https://Stackoverflow.com/questions/57567892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1612432/"
] |
Maybe another DLL necessary for MKL, such as `libiomp5md.dll` for example, is missing and causes the error. See [Cannot load mkl\_intel\_thread.dll on python executable](https://stackoverflow.com/q/54337644/8516269), my answer there and its comments.
If this still does not solve your problem, try to manually copy other DLLs from the anaconda environment's library path into the app installation directory and its `lib` subdirectory. Once you have found which dependency is missing, you can use the `include_files` option of cx\_Freeze to automatize this step in the setup (as you know).
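For example, a minimal sketch of a `setup.py` using `include_files` (the DLL source paths are placeholders for wherever those DLLs live in your anaconda environment, and `main.py` stands in for your entry script):
```
from cx_Freeze import setup, Executable

build_exe_options = {
    "include_files": [
        # (source path, destination relative to the build directory)
        (r"C:\path\to\env\Library\bin\mkl_intel_thread.dll", "mkl_intel_thread.dll"),
        (r"C:\path\to\env\Library\bin\libiomp5md.dll", "libiomp5md.dll"),
    ],
}

setup(
    name="myapp",
    options={"build_exe": build_exe_options},
    executables=[Executable("main.py")],
)
```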
Another possible issue would be that you have an incompatible version of MKL installed on your system and that the frozen application finds this wrong one, but this is unlikely unless you have a 32-bit Python installation on a 64-bit system or have installed the application on another system.
EDIT:
It could still also simply be that the frozen application does not find `mkl_intel_thread.dll` although `where` finds it. `where` looks in the system search path given by the `PATH` environment variable, while Python looks in the modules search path given by `sys.path`, which usually does not include the content of `PATH`, see [Where is Python's sys.path initialized from?](https://stackoverflow.com/q/897792/8516269) But on Windows there is a fallback mechanism for registered DLLs (I don't know how it works). Anyway, one should not rely on this fallback as soon as one intends to install and run the application on another system, because the necessary DLL may not be installed there. Thus the necessary dependencies should always be included in the installation directory.
|
Recently I faced the same error in Python 3.7. I did not have the option of moving DLLs, so I solved the problem by just running:
```
conda install cython
```
After the cython install, all DLLs were in the proper place.
| 7,777
|
62,501,832
|
I was using the following code to download NSE stock data (Indian stocks):
```
from alpha_vantage.timeseries import TimeSeries
ts = TimeSeries(key='my api key',output_format='pandas')
data, meta_data = ts.get_daily_adjusted(symbol='VEDL.NS', outputsize='full')
data.to_csv('/content/gdrive/My Drive/ColabNotebooks/NSEDATA/VEDL.NS.csv')
```
Now I am getting
```
ValueError Traceback (most recent call last)
<ipython-input-11-d1a160a06338> in <module>()
1 i='VEDL'
----> 2 data, meta_data = ts.get_daily_adjusted(symbol='{0}.NS'.format(i), outputsize='full')
3 data.to_csv('/content/gdrive/My Drive/Colab Notebooks/NSEDATA/{0}.csv'.format(i))
2 frames
/usr/local/lib/python3.6/dist-packages/alpha_vantage/alphavantage.py in _handle_api_call(self, url)
333 if not json_response:
334 raise ValueError(
--> 335 'Error getting data from the api, no return was given.')
336 elif "Error Message" in json_response:
337 raise ValueError(json_response["Error Message"])
ValueError: Error getting data from the api, no return was given.
```
I have been using the above API for 3 months and it had been working until now. Please help me.
|
2020/06/21
|
[
"https://Stackoverflow.com/questions/62501832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13787047/"
] |
There are several ways to do this, and the ones I will offer are not the best or the nicest, but I hope they will help.
**1) Simply write all your data to table**
You can just insert all your data to the table setting the [ConflictAlgorithm](https://pub.dev/documentation/sqflite_common/latest/sql/ConflictAlgorithm-class.html) to ***replace*** or ***ignore***
```
db.insert(table, data, conflictAlgorithm: ConflictAlgorithm.replace);
```
This will replace/ignore duplicate entries.
**2) Query, compare, replace**
This is a less 'elegant' solution: you can first query all your data from the table
```
db.query(table, columns: availableColumns, where: 'columnToQueryBy = ?', whereArgs: [neededValue]);
```
Then compare to the data you have
Then write using `db.insert()` as above
I think that in your case the first option suits you better; [this](https://flutter.dev/docs/cookbook/persistence/sqlite) example pretty much covers most things that might help you.
Hope it helps!
|
What about reading data from Sqflite and showing it in a DataTable?
| 7,785
|
39,067,203
|
This seems to work fine -
```
%time a = "abc"
print(a)
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 19.1 µs
abc
```
This doesn't -
```
def func():
    %time b = "abc"
    print(b)

func()
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 31 µs
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-8-57f7d48952b8> in <module>()
3 print(b)
4
----> 5 func()
<ipython-input-8-57f7d48952b8> in func()
1 def func():
2 get_ipython().magic(u'time b = "abc"')
----> 3 print(b)
4
5 func()
NameError: global name 'b' is not defined
```
Here's a link to a [notebook](https://gist.github.com/jayantj/2ead360ffb326f5e5e78e9e58b8a153e)
I'm using python 2.7, haven't tried it with python3 yet.
Is this expected behaviour?
|
2016/08/21
|
[
"https://Stackoverflow.com/questions/39067203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1500929/"
] |
POST data is supplied in a protected `Map getParams()` override, not in the URL:
```
@Override
protected Map<String,String> getParams(){
    Map<String,String> params = new HashMap<String, String>();
    params.put("parametr1","value1");
    params.put("parametr2","value2");
    params.put("parametr3","value3");
    return params;
}

@Override
public Map<String, String> getHeaders() throws AuthFailureError {
    Map<String,String> params = new HashMap<String, String>();
    params.put("Content-Type","application/x-www-form-urlencoded");
    return params;
}
```
Fix your url and use JsonObjectRequest
|
You want to parse a array into a boolean, you have to loop through the array like this:
```
JSONObject jsonObject = new JSONObject(response);
JSONArray jsonArray = jsonObject.getJSONArray("example");
if (jsonArray.length() != 0) {
    for (int i = 0; i < jsonArray.length(); i++) {
        JSONObject jo = jsonArray.getJSONObject(i);
        boolean isLoginSuccess = Boolean.parseBoolean(jo.getString("exampleString"));
    }
}
```
| 7,786
|
70,582,851
|
I'm trying to create a webscraping Flask app that creates a new Webdriver instance for each user session, so that different users can scrape content from different pages. This would be simpler if the `driver.get()` and data collection happened in the same API call, but they can't due to the nature of the scraping I'll be doing. Here's what I have so far:
```python
from flask import Flask, session
from flask_session import Session
app = Flask(__name__)
SESSION_TYPE = 'filesystem'
app.config.from_object(__name__)
Session(app)
@app.route('/get_site/<link>', methods=['GET'])
def get_site(link):
    session['driver'] = webdriver.Chrome(options=options)
    session['driver'].get(link)
    return 'link opened!'  # confirmation message

@app.route('/scrape_current_site', methods=['GET'])
def scrape_current_site():
    return session['driver'].title  # collecting arbitrary data from page

app.run()
```
This doesn't work, though:
`AttributeError: Can't pickle local object '_createenviron.<locals>.encode'`
Conceptually a flask session *feels* like it's what I'm looking for (a new, unique object for each session), but I can't figure out how to get it to work.
|
2022/01/04
|
[
"https://Stackoverflow.com/questions/70582851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13592179/"
] |
I advise you to follow several good techniques when developing a responsive webpage; here I explain them:
* **Replace absolute units in your CSS, such as px, with percentages or em**. It is always much better to work with relative measurements than absolute ones. From my experience, I always try to work with em; [here](https://www.w3schools.com/TAGS/ref_pxtoemconversion.asp) is a conversion from px to em.
* **Use responsive layouts like [flex](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Flexible_Box_Layout/Basic_Concepts_of_Flexbox) or [grid](https://developer.mozilla.org/es/docs/Web/CSS/CSS_Grid_Layout)**.
* **Add metadata related to the viewport in the HTML head tag**. I can see you haven't written it. As we can read on W3Schools, the viewport is the user's visible area of a web page. It varies with the device, so it will be smaller on a mobile phone than on a computer screen. You should include the following element in all your web pages:
`<meta name="viewport" content="width=device-width, initial-scale=1.0">`
In conclusion, try to avoid absolute positioning because it is not the best option. Follow this advice and I am sure your webpage will turn out much better. :D
|
This is a common problem. I would advise that you read up on what `viewports` are on [w3schools.com](https://www.w3schools.com/css/css_rwd_viewport.asp).
A viewport is basically the visible area on a user's device.
As we both know, the visible area of a desktop is different from that of a notebook, tablet, or mobile phone.
The HTML and CSS you have written display fine on your PC but are not responsive to other viewports.
The way you can go about this is by writing the HTML and CSS in such a way that they will respond to all devices.
You need to use relative units like `em` and `%`; that way the code you have written will scale with respect to the viewport.
Media queries are also very useful; they will help you customize for a particular viewport.
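For example, a minimal media query sketch (the 600px breakpoint and the `.container` class are purely illustrative):
```
@media only screen and (max-width: 600px) {
  .container {
    width: 100%;
  }
}
```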
W3schools.com will give you a great direction on these topics. I hope you enjoy yourself while learning.
| 7,787
|
15,572,171
|
The Python syntax of `for x in y:` to iterate over a list must somehow remember what element the pointer is currently at, right? How would we access that value? I am trying to solve this without resorting to `for index, x in enumerate(y):`.
The technical reason why I want to do this is that I assume `enumerate()` costs performance, while somehow accessing the already-existing 'pointer' would not. I see from the answers, however, that this pointer is quite private and inaccessible.
The functional reason why I wanted to do this is to be able to skip, for instance, 100 elements if the current element's float value is far from the 'desired' float range.
-- Answer --
The way this was solved was as follows (pure schematic example):
```
# foo is assumed to be ordered in this example
from itertools import islice

foo = list(range(1, 101))  # 1..100
low = 60
high = 70

z = iter(foo)
for x in z:
    if x < low - 10:
        # advance the iterator 10 elements (the itertools "consume" recipe)
        next(islice(z, 10, 10), None)
    if x > high:
        break
```
|
2013/03/22
|
[
"https://Stackoverflow.com/questions/15572171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1093485/"
] |
You cannot. `for` uses the [Python iteration protocol](http://docs.python.org/2/glossary.html#term-iterator), which for lists means it'll create a *private* iterator object. That object keeps track of the position in the list.
Even if you were to create the iterator explicitly with [`iter()`](http://docs.python.org/2/library/functions.html#iter), the current position is *not* a public property on that object:
```
>>> list_iter = iter([])
>>> list_iter
<listiterator object at 0x10056a990>
>>> dir(list_iter)
['__class__', '__delattr__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__iter__', '__length_hint__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'next']
```
The point of the iteration protocol is that it lets you treat any sequence, even one that continues without bounds, in exactly the same way. It is trivial to create a sequence generator that *does not have a position*:
```
def toggle():
    while True:
        yield True
        yield False
```
That generator will toggle between `True` and `False` values as you iterate over it. There is no position in a sequence there, so there is no point exposing a position either.
Just stick to `enumerate()`. All that `enumerate()` has to do is keep a *counter*. It doesn't keep position in the wrapped iterator at all. Incrementing that integer does not cost you much performance or memory.
[`enumerate()`](http://docs.python.org/2/library/functions.html#enumerate) is *basically* this, [implemented in C](http://hg.python.org/cpython/file/tip/Objects/enumobject.c#l122):
```
def enumerate(sequence, start=0):
    n = start
    for elem in sequence:
        yield n, elem
        n += 1
```
Because it is implemented in C, it'll beat trying to read an attribute on the original iterator any day, which would require more bytecode operations in each iteration.
|
No, it uses the underlying **iterator**, which is not forced to keep track of a current index.
Unless you manually increment a counter, this is not possible:
```
idx = 0
for x in y:
    idx += 1
    # ...
```
so, just keep with `enumerate()`
| 7,790
|
52,729,841
|
I have only very rudimentary experience in Python. I am trying to install the package `pyslim` (see [here on the pypi website](https://pypi.org/project/pyslim/)). I did
```
$ pip install pyslim
Requirement already satisfied: pyslim in ./Library/Python/2.7/lib/python/site-packages/pyslim-0.1-py2.7.egg (0.1)
Requirement already satisfied: msprime in /usr/local/lib/python2.7/site-packages (from pyslim) (0.6.1)
Requirement already satisfied: attrs in /usr/local/lib/python2.7/site-packages (from pyslim) (16.3.0)
Requirement already satisfied: svgwrite in /usr/local/lib/python2.7/site-packages (from msprime->pyslim) (1.1.12)
Requirement already satisfied: jsonschema in /usr/local/lib/python2.7/site-packages (from msprime->pyslim) (2.6.0)
Requirement already satisfied: six in /usr/local/lib/python2.7/site-packages (from msprime->pyslim) (1.10.0)
Requirement already satisfied: numpy>=1.7.0 in /usr/local/lib/python2.7/site-packages/numpy-1.10.4-py2.7-macosx-10.11-x86_64.egg (from msprime->pyslim) (1.10.4)
Requirement already satisfied: h5py in /usr/local/lib/python2.7/site-packages (from msprime->pyslim) (2.8.0)
Requirement already satisfied: pyparsing>=2.0.1 in /usr/local/lib/python2.7/site-packages (from svgwrite->msprime->pyslim) (2.2.0)
Requirement already satisfied: functools32; python_version == "2.7" in /usr/local/lib/python2.7/site-packages (from jsonschema->msprime->pyslim) (3.2.3.post2)
```
But when I open python and try to import `pyslim`, it fails
```
> import pyslim
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/remi/Library/Python/2.7/lib/python/site-packages/pyslim-0.1-py2.7.egg/pyslim/__init__.py", line 4, in <module>
from pyslim.slim_metadata import * # NOQA
File "/Users/remi/Library/Python/2.7/lib/python/site-packages/pyslim-0.1-py2.7.egg/pyslim/slim_metadata.py", line 1, in <module>
import attr
ImportError: No module named attr
```
So, I did
```
$ pip install attr
Requirement already satisfied: attr in /usr/local/lib/python2.7/site-packages (0.3.1)
```
and
```
$ pip install attrs
Requirement already satisfied: attrs in /usr/local/lib/python2.7/site-packages (16.3.0)
```
I restarted python and tried to import `pyslim` again but I keep receiving the same error message. I also tried to download and install the files from github by doing
```
$ git clone https://github.com/tskit-dev/pyslim.git
$ cd pyslim
$ python setup.py install --user
```
as indicated [here on the pypi website](https://pypi.org/project/pyslim/). On this last line of code, I get a long output ending with
```
Download error on https://pypi.python.org/simple/: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590) -- Some packages may not be found!
No local packages or download links found for attrs
error: Could not find suitable distribution for Requirement.parse('attrs')
```
---
I am using `Python 2.7.10` on `Mac OS X 10.11.6`. Not sure if it matters, but I usually install things with Homebrew. I am using `pip 18.1 from /usr/local/lib/python2.7/site-packages/pip (python 2.7)`.
**Edit**
```
$ which python
/usr/bin/python
$ which pip
/usr/local/bin/pip
```
|
2018/10/09
|
[
"https://Stackoverflow.com/questions/52729841",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2051137/"
] |
Take a look at [this other question](https://stackoverflow.com/questions/49228744/attributeerror-module-attr-has-no-attribute-s) that is related to the `attrs` package.
In your case, you have `attr` and `attrs` installed at the same time, and they are incompatible with each other, so Python is unable to resolve the package name correctly on import.
You should use `attrs` only, so try uninstalling `attr` and try it again:
```
python -m pip uninstall attr
```
If, in the future, you need to have packages that are incompatible with each other, take a look at using [virtual environments in Python](https://help.dreamhost.com/hc/en-us/articles/215489338-Installing-and-using-virtualenv-with-Python-2), which are really easy to use and will be very useful for playing with packages and package versions without breaking anything.
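For example, a minimal sketch of that workflow on the command line (the environment name `venv2` is arbitrary; this uses the Python 2 `virtualenv` tool to match your setup):
```
pip install virtualenv
virtualenv venv2
source venv2/bin/activate
pip install pyslim
```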
|
First uninstall pyslim with `pip uninstall pyslim`. Then try installing again using:
`conda install -c conda-forge pyslim`
Refer to <https://anaconda.org/conda-forge/pyslim>
| 7,793
|
1,075,905
|
My first attempt at jython is a java/jython project I'm writing in eclipse with pydev.
I created a java project and then made it a pydev project by the RightClick project >> pydev >> set as... you get the idea. I then added two source folders, one for java and one for jython, and each source folder has a package. And I set each folder as a buildpath for the project. I guess I'm letting you know all this so hopefully you can tell me whether or not I set the project up correctly.
But the real question is: **how do I get my jython code made into a class file so the java code can use it?** The preferred method would be that eclipse/pydev would do this for me automatically, but I can't figure it out. Something mentioned in the jython users guide implies that it's possible but I can't find info on it anywhere.
EDIT: I did find some information [here](http://wiki.python.org/jython/JythonMonthly/Articles/September2006/1) and [here](http://www.jython.org/archive/22/jythonc.html#what-is-jythonc), but things are not going too smoothly.
I've been following the guide in the second link pretty closely but I can't figure out how to get jythonc to make a constructor for my python class.
|
2009/07/02
|
[
"https://Stackoverflow.com/questions/1075905",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/125946/"
] |
Jythonc doesn't exist anymore, it has been forked off to another project called [Clamp](http://github.com/groves/clamp/), but with that said...
>
> ...you can pre-compile
> your python scripts to .class files
> using:
>
>
> jython [jython home]/Lib/compileall.py
> [the directory where you keep your
> python code]
>
>
>
Source - [*Jython Newsletter, March 2009*](http://wiki.python.org/jython/JythonMonthly/Newsletters/March2009)
When I fed it a folder with Python **2.7** code (knowing it would fail in Jython **2.5**) it did output a .class file, even though it doesn't function. Try that with your Jython scripts. If it works, please let us know, because I'll be where you are soon enough.
Once you're that far, it isn't hard to convert your command line statement to an External Tool in PyDev that can be called as needed.
|
Following the "Accessing Jython from Java Without Using jythonc" tutorial it became possible to use the jython modules inside java code. The only tricky point is that the \*.py modules do not get compiled to \*.class files. So it turns out to be exotic scripting inside java. The performance may of course degrade vs jythonc'ed py modules, but as I got from the jython community pages they are not going to support jythonc (and in fact have already dropped it in jython2.5.1).
So if you decide to follow non-jythonc approach, the above tutorial is perfect. I had to modify the JythonFactory.java code a bit:
```
String objectDef = "=" + javaClassName + "(your_constructor_params here)";
try {
    Class JavaInterface = Class.forName(interfaceName);
    System.out.println("JavaInterface=" + JavaInterface);
    javaInt = interpreter.get("instance name of a jython class from jython main function")
                         .__tojava__(JavaInterface);
} catch (ClassNotFoundException ex) {
    ex.printStackTrace(); // Add logging here
}
```
| 7,794
|
68,654,385
|
This is my code to read the LDS sensor. The LDS sensor is used to estimate the distance from the robot to walls. I can print the data (estimated distance) in the terminal, but I can't write it to the CSV file automatically. I would like to write about 1000 data points into a CSV file using Python code. [enter image description here](https://i.stack.imgur.com/FF8d8.png)
```
#! /usr/bin/env python
import rospy
import csv
from sensor_msgs.msg import LaserScan
def callback(msg):
    d = msg.ranges[90]
    print d

rospy.init_node("read")
sub = rospy.Subscriber('/scan', LaserScan, callback)
rospy.spin()

f = open("\\testread.csv", "w")
c = csv.writer(f)
for x in range(1000):
    c.writerow(d)
f.close()
```
|
2021/08/04
|
[
"https://Stackoverflow.com/questions/68654385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16594158/"
] |
Outputting a topic formatted as CSV is built into `rostopic echo` via the -p flag; see [rostopic echo](http://wiki.ros.org/rostopic#rostopic_echo). You can redirect the output to a file instead of the terminal with `>`.
So to save `/scan` to csv you could just run the following in a terminal:
```
rostopic echo -p /scan > scan.csv
```
That page also mentions that the -b flag can be used to echo a specific topic from a rosbag. Not sure if you are aware of [rosbag](http://wiki.ros.org/rosbag), which is purpose-built for recording ROS data. That way you could also record other relevant data at the same time and use other ROS tools, like [plotjuggler](http://wiki.ros.org/plotjuggler), to analyze your data.
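For example, a sketch combining the two (the bag filename `scan.bag` is arbitrary; `-O` just names the output bag):
```
rosbag record -O scan.bag /scan
rostopic echo -b scan.bag -p /scan > scan.csv
```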
|
In a more classic ROS style, do all your setup in the "main" function, have global variables for either the returned values of callbacks or for initialized parameters, and do the processing for all your received msgs in your callbacks or timers.
```py
#! /usr/bin/env python
# lds_to_csv_node.py
import rospy
import csv
from sensor_msgs.msg import LaserScan
# Global vars
csv_writer = None
def scan_callback(msg):
    d = msg.ranges[90]
    if csv_writer is not None:
        csv_writer.writerow([d])  # writerow expects a row (an iterable), so wrap the value

def main():
    global csv_writer  # bind the module-level writer, not a new local
    rospy.init_node("lds_to_csv_node")
    sub = rospy.Subscriber('/scan', LaserScan, scan_callback)

    fh = open("\\testread.csv", "wb")
    csv_writer = csv.writer(fh)
    rospy.spin()

    fh.close()
    csv_writer = None

if __name__ == '__main__':
    main()
```
| 7,795
|
17,891,704
|
What does the [None] do in this code?
```
public class Example { //Java
public Example (int _input, int _outputs){ //Java
scratch = [None] * (_input + _outputs); //Python Code
```
I'm porting a python implementation into Java and need to better understand what this means.
Many thanks for your help.
|
2013/07/26
|
[
"https://Stackoverflow.com/questions/17891704",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2624276/"
] |
```
scratch = [None] * (_input + _outputs)
```
This makes a list of length `_input + _outputs`. Each element in this list is a `None` object.
This list is assigned to a variable, named `scratch`
|
```
[value] * number
```
means a list of `number` elements, each of which is `value`.
So your code means a list of `(_input + _outputs)` `None`s.
`null` is the closest thing to Python's `None` that I know of.
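For the Java port, a rough equivalent would be the following sketch (in Java, a new `Object[]` is filled with `null` by default):
```
Object[] scratch = new Object[_input + _outputs]; // every element starts as null
```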
| 7,796
|
3,330,280
|
I'm weaving my C code into Python to speed up the loop:
```
from scipy import weave
from numpy import *
#1) create the array
a = zeros((200,300,400), int)
for i in range(200):
    for j in range(300):
        for k in range(400):
            a[i,j,k] = i*300*400 + j*400 + k

#2) test on c code to access the array
code = """
for(int i=0;i<200;++i){
    for(int j=0;j<300;++j){
        for(int k=0;k<400;++k){
            printf("%ld,",a[i*300*400+j*400+k]);
        }
        printf("\\n");
    }
    printf("\\n\\n");
}
"""
test = weave.inline(code, ['a'])
```
It's all working well, but it is still costly when the array is big.
Someone suggested that I use a.strides instead of the nasty "a[i\*300\*400+j\*400+k]".
I can't understand the documentation about .strides.
Any ideas?
Thanks in advance
|
2010/07/25
|
[
"https://Stackoverflow.com/questions/3330280",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/389799/"
] |
You could replace the 3 for-loops with
```
grid=np.ogrid[0:200,0:300,0:400]
a=grid[0]*300*400+grid[1]*400+grid[2]
```
The following suggests this may result in a ~68x (or better? see below) speedup:
```
% python -mtimeit -s"import test" "test.m1()"
100 loops, best of 3: 17.5 msec per loop
% python -mtimeit -s"import test" "test.m2()"
1000 loops, best of 3: 247 usec per loop
```
test.py:
```
import numpy as np
n1,n2,n3=20,30,40
def m1():
    a = np.zeros((n1,n2,n3), int)
    for i in range(n1):
        for j in range(n2):
            for k in range(n3):
                a[i,j,k] = i*300*400 + j*400 + k
    return a

def m2():
    grid = np.ogrid[0:n1,0:n2,0:n3]
    b = grid[0]*300*400 + grid[1]*400 + grid[2]
    return b

if __name__ == '__main__':
    assert np.all(m1() == m2())
```
With n1,n2,n3 = 200,300,400,
```
python -mtimeit -s"import test" "test.m2()"
```
took 182 ms on my machine, and
```
python -mtimeit -s"import test" "test.m1()"
```
has yet to finish.
|
The problem is that you are printing out 24 million numbers (200 × 300 × 400) to the screen in your C code. This is of course going to take a while because the numbers have to be converted into strings and then printed to the screen. Do you really need to print them all to the screen? What is your end goal here?
For a comparison, I tried just setting another array as each of the elements in a. This process took about .05 seconds in weave. I gave up on timing the printing of all elements to the screen after 30 seconds or so.
| 7,798
|
74,091,484
|
I am trying to complete a project with JavaScript for which I need to import a library; however, npm is giving me errors whenever I try to use it.
the error:
```
npm WARN deprecated har-validator@5.1.5: this library is no longer supported
npm WARN deprecated uuid@3.4.0: Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See https://v8.dev/blog/math-random for details.
npm WARN deprecated request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
npm ERR! code 1
npm ERR! path C:\Users\danie\OneDrive\Desktop\science fair\node_modules\gl
npm ERR! command failed
npm ERR! command C:\Windows\system32\cmd.exe /d /s /c prebuild-install || node-gyp rebuild
npm ERR! gyp info it worked if it ends with ok
npm ERR! gyp info using node-gyp@7.1.2
npm ERR! gyp info using node@16.18.0 | win32 | x64
npm ERR! gyp info find Python using Python version 3.10.8 found at "C:\Python310\python.exe"
npm ERR! gyp ERR! find VS
npm ERR! gyp ERR! find VS msvs_version was set from command line or npm config
npm ERR! gyp ERR! find VS - looking for Visual Studio version 2015
npm ERR! gyp ERR! find VS VCINSTALLDIR not set, not running in VS Command Prompt
npm ERR! gyp ERR! find VS unknown version "undefined" found at "D:\Applications\VS studio"
npm ERR! gyp ERR! find VS checking VS2019 (16.11.32929.386) found at:
npm ERR! gyp ERR! find VS "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools"
npm ERR! gyp ERR! find VS - found "Visual Studio C++ core features"
npm ERR! gyp ERR! find VS - found VC++ toolset: v142
npm ERR! gyp ERR! find VS - missing any Windows SDK
npm ERR! gyp ERR! find VS could not find a version of Visual Studio 2017 or newer to use
npm ERR! gyp ERR! find VS looking for Visual Studio 2015
npm ERR! gyp ERR! find VS - not found
npm ERR! gyp ERR! find VS not looking for VS2013 as it is only supported up to Node.js 8
npm ERR! gyp ERR! find VS
npm ERR! gyp ERR! find VS valid versions for msvs_version:
npm ERR! gyp ERR! find VS
npm ERR! gyp ERR! find VS **************************************************************
npm ERR! gyp ERR! find VS You need to install the latest version of Visual Studio
npm ERR! gyp ERR! find VS including the "Desktop development with C++" workload.
npm ERR! gyp ERR! find VS For more information consult the documentation at:
npm ERR! gyp ERR! find VS https://github.com/nodejs/node-gyp#on-windows
npm ERR! gyp ERR! find VS **************************************************************
npm ERR! gyp ERR! find VS
npm ERR! gyp ERR! configure error
npm ERR! gyp ERR! stack Error: Could not find any Visual Studio installation to use
npm ERR! gyp ERR! stack at VisualStudioFinder.fail (C:\Users\danie\OneDrive\Desktop\science fair\node_modules\node-gyp\lib\find-visualstudio.js:121:47)
npm ERR! gyp ERR! stack at C:\Users\danie\OneDrive\Desktop\science fair\node_modules\node-gyp\lib\find-visualstudio.js:74:16
npm ERR! gyp ERR! stack at VisualStudioFinder.findVisualStudio2013 (C:\Users\danie\OneDrive\Desktop\science fair\node_modules\node-gyp\lib\find-visualstudio.js:351:14)
npm ERR! gyp ERR! stack at C:\Users\danie\OneDrive\Desktop\science fair\node_modules\node-gyp\lib\find-visualstudio.js:70:14
npm ERR! gyp ERR! stack at C:\Users\danie\OneDrive\Desktop\science fair\node_modules\node-gyp\lib\find-visualstudio.js:372:16
npm ERR! gyp ERR! stack at C:\Users\danie\OneDrive\Desktop\science fair\node_modules\node-gyp\lib\util.js:54:7
npm ERR! gyp ERR! stack at C:\Users\danie\OneDrive\Desktop\science fair\node_modules\node-gyp\lib\util.js:33:16
npm ERR! gyp ERR! stack at ChildProcess.exithandler (node:child_process:410:5)
npm ERR! gyp ERR! stack at ChildProcess.emit (node:events:513:28)
npm ERR! gyp ERR! stack at maybeClose (node:internal/child_process:1100:16)
npm ERR! gyp ERR! System Windows_NT 10.0.19044
npm ERR! gyp ERR! command "C:\\Program Files\\nodejs\\node.exe" "C:\\Users\\danie\\OneDrive\\Desktop\\science fair\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild"
npm ERR! gyp ERR! cwd C:\Users\danie\OneDrive\Desktop\science fair\node_modules\gl
npm ERR! gyp ERR! node -v v16.18.0
npm ERR! gyp ERR! node-gyp -v v7.1.2
npm ERR! gyp ERR! not ok
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\danie\AppData\Local\npm-cache\_logs\2022-10-16T23_54_51_958Z-debug-0.log
```
I tried upwards of 20 solutions on this site and none worked for me. Does anyone have any ideas? I tried npm update, reinstalling npm, reinstalling node, etc. I believe it has something to do with Visual Studio, as in this very similar question ([How can I fix "npm ERR! code1"](https://stackoverflow.com/questions/72033205/how-can-i-fix-npm-err-code1)); however, I don't understand what the answer means or what I should do.
EDIT: Forgot to mention I have Visual Studio 2022 already installed on this device, but it is on my D: drive. I didn't think that was the problem but I am unsure.
|
2022/10/17
|
[
"https://Stackoverflow.com/questions/74091484",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17289555/"
] |
You wrote that you need a *"function that sorts a List of integers"*, but in your code you pass a list **of lists** to `binsort`. This only makes sense if those sublists represent digits of a number. It makes no more sense when these digits are no single digits anymore. Either:
* You sort a list of integers like `[142, 642, 141]`, or
* You sort a list of lists of **digits**, like `[[1,4,2], [6,4,2], [1,4,1]]`, where each sublist is a digit-by-digit representation of an integer
Mixing both representations of integers makes little sense.
Here is how you would do it when using the first representation:
```
def binsort(a):
    if not a:  # Boundary case: list is empty
        return a
    bins = [[], a]
    power = 1
    while len(bins[0]) < len(a):  # while there could be more digits to binsort by
        binsTwo = [[] for _ in range(10)]
        for bin in bins:
            # Distribute the numbers in the new bins according to i-th digit (power)
            for num in bin:
                binsTwo[num // power % 10].append(num)
        # Prepare for extracting next digit
        power *= 10
        bins = binsTwo
    return bins[0]
```
If you use the second representation (list of lists of **single** digits), then your code is fine, provided you add a check for an empty input list, which is a boundary case.
```
def binsort(a):
    if not a:  # Boundary case: list is empty
        return a
    bins = [a]
    for i in range(len(a[0]) - 1, -1, -1):  # For each digit
        binsTwo = [[] for _ in range(10)]
        for bin in bins:
            # Distribute the numbers in the new bins according to i-th digit
            for digits in bin:
                binsTwo[digits[i]].append(digits)
        # Prepare for extracting next digit
        bins = binsTwo
    return [digits for bin in bins for digits in bin]
```
But again, you should not try to "solve" the case where you have sublists that consist of multi-digit numbers. Reconsider why you think you need support for that, because it seems like you have mixed two input formats -- as I explained above.
|
```
def bucketsort(a):
    # Create empty buckets
    buckets = [[] for _ in range(10)]
    # Sort into buckets
    for i in a:
        buckets[i].append(i)
    # Flatten buckets
    a.clear()
    for bucket in buckets:
        a.extend(bucket)
    return a
```
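For example (note this sketch only works when every value is in the range 0-9, since there is one bucket per value):
```
print(bucketsort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```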
| 7,803
|
26,709,731
|
I'm writing a Google App Engine project in Python with Flask. Here's my directory structure for a Hello, World! app (contents of third party libraries ommitted for brevity's sake):
```
project_root/
    flask/
    jinja2/
    markupsafe/
    myapp/
        __init__.py
    simplejson/
    werkzeug/
    app.yaml
    itsdangerous.py
    main.py
```
Here's `main.py`:
```
from google.appengine.ext.webapp.util import run_wsgi_app
from myapp import app
run_wsgi_app(app)
```
And `myapp/__init__.py`:
```
from flask import Flask
app = Flask("myapp")
@app.route("/")
def hello():
    return "Hello, World!"
```
Since Flask has so many dependencies and sub-dependencies, I thought it would be nice to tidy up the directory structure by putting all the third-party code in a subdirectory (say `project_root/lib`). Of course, then `sys.path` doesn't know where to find the libraries.
I've tried the solutions in [How do you modify sys.path in Google App Engine (Python)?](https://stackoverflow.com/questions/2354166/how-do-you-modify-sys-path-in-google-app-engine-python), but that doesn't seem to work. I've also tried changing `from flask import Flask` to `from lib/flask import Flask`, to no avail. Is there a good way to do this?
|
2014/11/03
|
[
"https://Stackoverflow.com/questions/26709731",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2752467/"
] |
Try adding the full path to your lib, like this: `sys.path[0:0] = ['/home/user/project_root/lib']`
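For example, a sketch at the top of `main.py` that builds the path relative to the file instead of hard-coding it (the `lib` name matches the layout proposed in the question):
```
import os
import sys

sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'lib'))

from myapp import app  # packages under project_root/lib now resolve too
```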
|
I've actually run into this problem a number of times when writing my own Google App Engine® products before. My solution was to cat the files together to form one large Python file. The command to do this is:
```
cat *.py
```
Of course, you may need to fix up the import statements inside the individual files. This can be done with a simple sed command that searches for import statements matching the file names used in the cat command. But that is a relatively simple command compared to actually creating the huge Python file (HPF, for short).
Another advantage of combining them all is increased speed: because Python doesn't need to load a whole bunch of tiny files all the time, it will run much faster. This means that the users of your Google App Engine® product will not have to wait through long file-load latency.
Of course, updating the libraries can be tricky. I recommend keeping the directory tree you used to cat the files together in order to easily update things in the future. All you need to do is copy the new files in and re-run the cat and sed commands. Think of it like pre-compiling your python library.
All the best!
-Milo
| 7,804
|
22,752,521
|
I'm trying to setup an application webserver using uWSGI + Nginx, which runs a Flask application using SQLAlchemy to communicate to a Postgres database.
When I make requests to the webserver, every other response will be a 500 error.
The error is:
```
Traceback (most recent call last):
File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
context)
File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/default.py", line 388, in do_execute
cursor.execute(statement, parameters)
psycopg2.OperationalError: SSL error: decryption failed or bad record mac
The above exception was the direct cause of the following exception:
sqlalchemy.exc.OperationalError: (OperationalError) SSL error: decryption failed or bad record mac
```
The error is triggered by a simple `Flask-SQLAlchemy` method:
```
result = models.Event.query.get(id)
```
---
`uwsgi` is being managed by `supervisor`, which has a config:
```
[program:my_app]
command=/usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/myapp.ini --catch-exceptions
directory=/path/to/my/app
stopsignal=QUIT
autostart=true
autorestart=true
```
and `uwsgi`'s config looks like:
```
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
```
The furthest that I can get is that it has something to do with uwsgi's forking. But beyond that I'm not clear on what needs to be done.
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22752521",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1097920/"
] |
The issue ended up being uwsgi's forking.
When working with multiple processes with a master process, uwsgi initializes the application in the master process and then copies the application over to each worker process. The problem is that if you open a database connection when initializing your application, you then have multiple processes sharing the same connection, which causes the error above.
The solution is to set the `lazy` [configuration option for uwsgi](http://uwsgi-docs.readthedocs.org/en/latest/Options.html), which forces a complete loading of the application in each process:
>
> `lazy`
>
>
> Set lazy mode (load apps in workers instead of master).
>
>
> This option may have memory usage implications as Copy-on-Write semantics can not be used. When lazy is enabled, only workers will be reloaded by uWSGI’s reload signals; the master will remain alive. As such, uWSGI configuration changes are not picked up on reload by the master.
>
>
>
There's also a `lazy-apps` option:
>
> `lazy-apps`
>
>
> Load apps in each worker instead of the master.
>
>
> This option may have memory usage implications as Copy-on-Write semantics can not be used. Unlike lazy, this only affects the way applications are loaded, not master’s behavior on reload.
>
>
>
This uwsgi configuration ended up working for me:
```
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
# the fix
lazy = true
lazy-apps = true
```
|
As an alternative you might dispose the engine. This is how I solved the problem.
Such issues may happen if there is a query during the creation of the app, that is, in the module that creates the app itself. If that happens, the engine allocates a pool of connections and then uwsgi forks.
By invoking `engine.dispose()`, the connection pool itself is closed, and new connections will be opened as soon as someone starts making queries again. So if you do that at the end of the module where you create your app, the new connections will be created after the uwsgi fork.
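For illustration, here is a minimal sketch of that approach, assuming a typical Flask-SQLAlchemy setup (the connection string is a placeholder, not from the question):
```
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# placeholder connection string
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://user:pass@localhost/mydb'
db = SQLAlchemy(app)

# ... queries made at import time would populate the connection pool here ...

# Dispose of the pool at the end of the module that creates the app, so each
# uwsgi worker opens fresh connections after the fork instead of sharing the
# master's sockets.
with app.app_context():
    db.engine.dispose()
```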
| 7,809
|
28,511,485
|
I am not super experienced in writing code. Could someone help me out?
I am trying to write some simple Python to work with some GPS coordinates. Basically, I want it to check whether the GPS is within plus or minus ten feet of a point, then print if it is true. I think I have that part figured out, but I am not sure about the next part: once the GPS matches the point, I want it to print "I am here" only once, until the next time it is true.
Basically, set a 10-foot bubble around a GPS coordinate; if I walk into that bubble, print "I am here" just once, until I walk out and re-enter the bubble. Does that make sense?
```
if "point_lat-10feet" <= gps_lat <= "point_lat+10" and "point_log-10" <= gps_log <= "point_log+10"
Print "You are at point 1"
```
Update
======
Finally had some time to work on this project, here is how it ended up
I used the package utm 0.4.0 to convert the latitude and longitude to UTM north and east.
```
north, east, zone, band = utm.from_latlon(gpsc.fix.latitude, gpsc.fix.longitude)
northgoal, eastgoal, heading = (11111, 22222, 90)
# +- 50 degree
headerror = 50
#bubble size 10m
pointsize = 10
flag = False
if ((north - northgoal) ** 2) + ((east - eastgoal) ** 2) <= (pointsize ** 2) and ((heading - headerror) % 360) <= gpsc.fix.track <= ((heading + headerror) % 360):
if not flag :
print "Arriving at point 1" , north, east, gpsc.fix.track, gpsc.fix.speed
flag = True
else:
print "Still inside point 1"
else:
# print "Outside point 1"
flag = False
```
I have this in a while loop with a few other points. There is probably a better way, but hey, this works.
Thanks again for all the help!
|
2015/02/14
|
[
"https://Stackoverflow.com/questions/28511485",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2798178/"
] |
Just wrap it in a condition check within the if.
Not language specific, but the general idea
```
if <in coord condition>
if <not printed condition>
print
set print condition
else
clear print condition
```
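A minimal Python sketch of that pattern; `in_bubble()` is a hypothetical placeholder for the coordinate check, not part of the original code:
```
import time

def in_bubble():
    # hypothetical placeholder: return True while the GPS fix is
    # inside the 10-foot bubble
    return False

announced = False  # the "print condition"
while True:
    if in_bubble():
        if not announced:
            print "You are at point 1"
            announced = True
    else:
        announced = False  # re-arm once we leave the bubble
    time.sleep(1)
```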
|
If your syntax &c were corrected (those strings are definitely wrong) you'd be checking for a square, not a circle.
Assuming `point_lat`, `gps_lat`, &c, are all numbers (not strings), and expressed in feet (and assuming `_log` is a typo for `_lon`, standing for "longitude"), you'd need:
```
import math
if math.hypot(point_lat - gps_lat, point_lon - gps_lon) < 10:
print("You are at point 1")
```
`math.hypot` applies Pythagoras' theorem to compute the hypotenuse of the right-angled triangle with the two given sides -- which also happens to be the distance between the two points. "The two points are less than 10 feet apart" defines a **circle**, as desired.
If the latitudes and longitudes are not numbers measured in feet, you'll need to make them such, e.g. by converting from strings to `float` and/or applying the appropriate scale factors.
| 7,815
|
32,884,277
|
The code is:
```
import sys
execfile('test.py')
```
In test.py I have:
```
import zipfile
with zipfile.ZipFile('test.jar', 'r') as z:
z.extractall("C:\testfolder")
```
This code produces:
```
AttributeError ( ZipFile instance has no attribute '__exit__' ) # edited
```
The code from "test.py" works when run from python idle.
I am running python v2.7.10
|
2015/10/01
|
[
"https://Stackoverflow.com/questions/32884277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3438538/"
] |
I wrote my code on Python 2.7, but when I put it on my server, which uses 2.6, I got this error:
`AttributeError: ZipFile instance has no attribute '__exit__'`
To solve this problem I used Sebastian's answer on this post:
[Making Python 2.7 code run with Python 2.6](https://stackoverflow.com/questions/21268470/making-python-2-7-code-run-with-python-2-6/21268586#21268586)
```
import contextlib
def unzip(source, target):
with contextlib.closing(zipfile.ZipFile(source , "r")) as z:
z.extractall(target)
print "Extracted : " + source + " to: " + target
```
As he said:
>
> contextlib.closing does exactly what the missing `__exit__` method on
> the ZipFile would be supposed to do. Namely, call the close method
>
>
>
|
According to the Python documentation, [`ZipFile.extractall()`](https://docs.python.org/2/library/zipfile.html#zipfile.ZipFile.extractall) was added in version 2.6. I expect that you'll find that you are running a different, older (pre 2.6), version of Python than that which idle is using. You can find out which version with this:
```
import sys
print sys.version
```
and the location of the running interpreter can be obtained with
```
print sys.executable
```
The title of your question supports the likelihood that an old version of Python is being executed because the `with` statement/context managers (classes with a `__exit__()` method) were not introduced until 2.6 (well 2.5 if explicitly enabled).
| 7,820
|
41,991,602
|
I have been trying to make a program that runs over a dictionary and makes anagrams from an inputted string. I found most of the code for this problem here [Algorithm to generate anagrams](https://stackoverflow.com/questions/55210/algorithm-to-generate-anagrams), but I need the program to output only lines with at most a given number of words, instead of only using words of a set length (MIN\_WORD\_SIZE).
If the input was: python Anagram.py "Finding Anagrams" 3 dictionary.txt.
The output would be "Gaming fans nadir" because it only uses 3 words at most to create the anagram.
|
2017/02/01
|
[
"https://Stackoverflow.com/questions/41991602",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7503133/"
] |
This is a slight workaround; however, if you only want to *filter* out results that use more words than a specified argument allows, you could make a change in your program's main.
Your program currently says:
```
for word in words.anagram(letters):
print word
```
You could change your program to say:
```
for word in words.anagram(letters):
if len(word.split()) <= int(argv[2]):
print word
```
Not the most elegant answer, but I hope this helps!
|
I don't like to make programs `for` programmers, usually.
To the best of my understanding:
```
import random
word = "scramble"
scramble = list(word)
for i in range(len(scramble)):
char = scramble.pop(i)
scramble.insert(random.randint(0,len(word)),char)
print(''.join(scramble))
```
This will scramble the `word` variable.
Hope this helps!
Thanks :)
| 7,821
|
4,145,331
|
I am trying to call the cmd command "move" from Python.
```
cmd1 = ["move", spath , npath]
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
p = subprocess.Popen(cmd1, startupinfo=startupinfo)
```
While the command works in cmd (I can move files), with this Python code I get:
>
> WindowsError: [Error 2] The system
> cannot find the file specified
>
>
>
spath and npath are absolute paths to folders, so being in another directory should not matter.
**[edit]**
Responding to Tim's answer: then how do I move a folder?
|
2010/11/10
|
[
"https://Stackoverflow.com/questions/4145331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/354399/"
] |
`move` is built into the `cmd` shell, so it's not an executable that you can call this way.
You could use [`shutil.move()`](http://docs.python.org/library/shutil.html#shutil.move), but note that this "forgets" alternate data streams, ACLs, etc.
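For example, a short sketch (the paths are illustrative; `shutil.move()` also moves whole folders, which answers the edit):
```
import shutil

spath = r'C:\data\source_folder'  # illustrative source folder
npath = r'C:\data\target_folder'  # illustrative destination
shutil.move(spath, npath)
```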
|
try to use `cmd1 = ["cmd", "/c", "move", spath, npath]`
| 7,822
|
74,053,822
|
I want to calculate QTD values from different columns based on months in a pandas dataframe.
Code:
```
data = {'month': ['April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December', 'January', 'February', 'March'],
'kpi': ['sales', 'sales quantity', 'sales', 'sales', 'sales', 'sales', 'sales', 'sales quantity', 'sales', 'sales', 'sales', 'sales'],
're': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
're3+9': [10, 20, 30, 40, 50, 60, 70, 80, 90, 10, 10, 20],
're6+6': [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60],
're9+3': [2, 4, 6, 8, 10, 12, 14, 16, 20, 10, 10, 20],
're_o' : [1, 1, 1, 11, 11, 11, 12, 12, 12, 13, 13, 13]
}
# Create DataFrame
df = pd.DataFrame(data)
g = pd.to_datetime(df['month'], format='%B').dt.to_period('Q')
if (df['month'].isin(['April', 'May', 'June'])):
df['Q-Total'] = df.groupby([g,'kpi'])['re'].cumsum()
elif (df['month'].isin(['July', 'August', 'September'])):
df['Q-Total'] = df.groupby([g, 'kpi'])['re3+9'].cumsum()
elif (df['month'].isin(['October', 'November', 'December'])):
df['Q-Total'] = df.groupby([g, 'kpi'])['re6+6'].cumsum()
elif (df['month'].isin(['January', 'February', 'March'])):
df['Q-Total'] = df.groupby([g, 'kpi'])['re9+3'].cumsum()
else:
print("zero")
```
My required output is given below:
```
month kpi re re3+9 re6+6 re9+3 re_o Q-Total
0 April sales 1 10 5 2 1 1
1 May sales quantity 2 20 10 4 1 2
2 June sales 3 30 15 6 1 4
3 July sales 4 40 20 8 11 40
4 August sales 5 50 25 10 11 90
5 September sales 6 60 30 12 11 150
6 October sales 7 70 35 14 12 35
7 November sales quantity 8 80 40 16 12 40
8 December sales 9 90 45 20 12 80
9 January sales 10 10 50 10 13 10
10 February sales 11 10 55 10 13 20
11 March sales 12 20 60 20 13 40
```
There are four columns named re, re3+9, re6+6, re9+3 for taking the cumulative sum values. I want to calculate the cumulative sum based on the conditions below:
1. If the months are April,May and June, cumulative sum will be taken only from column re
2. If the months are July,August and September , cumulative sum will be taken only from re3+9
3. If the months are October,November and December , cumulative sum will be taken only from re6+6
4. If the months are January,February and March,Cumulative sum will be taken only from re9+3
But I get the error below when I run the code:
```
Traceback (most recent call last):
File "/home/a/p/s.py", line 54, in <module>
if (df['month'].isin(['April', 'May', 'June'])):
File "/home/a/anaconda3/envs/p/lib/python3.9/site-packages/pandas/core/generic.py", line 1527, in __nonzero__
raise ValueError(
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```
Can anyone suggest a solution to solve this issue?
|
2022/10/13
|
[
"https://Stackoverflow.com/questions/74053822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16134366/"
] |
Use [`numpy.select`](https://numpy.org/doc/stable/reference/generated/numpy.select.html) for new column and then use [`GroupBy.cumsum`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumsum.html):
```
g = pd.to_datetime(df['month'], format='%B').dt.to_period('Q')
m1 = df['month'].isin(['April', 'May', 'June'])
m2 = df['month'].isin(['July', 'August', 'September'])
m3 = df['month'].isin(['October', 'November', 'December'])
m4 = df['month'].isin(['January', 'February', 'March'])
df['Q-Total'] = np.select([m1, m2, m3, m4],
[df['re'], df['re3+9'], df['re6+6'], df['re9+3']], default=0)
df['Q-Total'] = df.groupby([g,'kpi'])['Q-Total'].cumsum()
print (df)
month kpi re re3+9 re6+6 re9+3 re_o Q-Total
0 April sales 1 10 5 2 1 1
1 May sales quantity 2 20 10 4 1 2
2 June sales 3 30 15 6 1 4
3 July sales 4 40 20 8 11 40
4 August sales 5 50 25 10 11 90
5 September sales 6 60 30 12 11 150
6 October sales 7 70 35 14 12 35
7 November sales quantity 8 80 40 16 12 40
8 December sales 9 90 45 20 12 80
9 January sales 10 10 50 10 13 10
10 February sales 11 10 55 10 13 20
11 March sales 12 20 60 20 13 40
```
|
You can use a dictionary to map the quarter to your columns, then [indexing lookup](https://pandas.pydata.org/docs/user_guide/indexing.html#indexing-lookup) and [`groupby.cumsum`](https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.cumsum.html):
```
quarters = {1: 're9+3', 2: 're', 3: 're3+9', 4: 're6+6'}
col = pd.to_datetime(df['month'], format='%B').dt.quarter.map(quarters)
idx, cols = pd.factorize(col)
df['Q-total'] = (
pd.Series(df.reindex(cols, axis=1).to_numpy()[np.arange(len(df)), idx],
index=df.index)
.groupby([col, df['kpi']]).cumsum()
)
```
output:
```
month kpi re re3+9 re6+6 re9+3 re_o Q-total
0 April sales 1 10 5 2 1 1
1 May sales quantity 2 20 10 4 1 2
2 June sales 3 30 15 6 1 4
3 July sales 4 40 20 8 11 40
4 August sales 5 50 25 10 11 90
5 September sales 6 60 30 12 11 150
6 October sales 7 70 35 14 12 35
7 November sales quantity 8 80 40 16 12 40
8 December sales 9 90 45 20 12 80
9 January sales 10 10 50 10 13 10
10 February sales 11 10 55 10 13 20
11 March sales 12 20 60 20 13 40
```
| 7,823
|
46,756,555
|
I made a dictionary using python
```
dictionary = {'key':'a', 'key':'b'}
print(dictionary)
print (dictionary.get("key"))
```
When I run this code it shows the last value of the dictionary. Is there any way to access the first value of the dictionary if the keys of both elements are the same?
|
2017/10/15
|
[
"https://Stackoverflow.com/questions/46756555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4450400/"
] |
It seems I ran into existing [issue](https://github.com/spring-projects/spring-boot/issues/10647)
So I used the solution provided by [@wilkinsona](https://github.com/wilkinsona) to resolve it. I added `configuration` with `mainClass` and project was successfully built.
```
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.lapots.breed.hero.journey.web.HeroJourneyWebApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
```
|
You can use that approach: you add a configuration to the pom, but it only works at compile time in the IDE. It does not work in the real world; if you want to use a .jar or .war, it will not work.
**You must put the com.lapots.breed.hero.journey package under the java folder.** Then it always works.
| 7,824
|
60,927,188
|
Having recently upgraded a Django project from 2.x to 3.x, I noticed that the `mysql.connector.django` backend (from `mysql-connector-python`) no longer works. The last version of Django that it works with is 2.2.11. It breaks with 3.0. I am using `mysql-connector-python==8.0.19`.
When running `manage.py runserver`, the following error occurs:
```
django.core.exceptions.ImproperlyConfigured: 'mysql.connector.django' isn't an available database backend.
Try using 'django.db.backends.XXX', where XXX is one of:
'mysql', 'oracle', 'postgresql', 'sqlite3'
```
I am aware that this is not an official Django backend but I have to use it on this project for reasons beyond my control.
I am 80% sure this is an issue with the library but I'm just looking to see if there is anything that can be done to resolve it beyond waiting for an update.
UPDATE:
`mysql.connector.django` now works with Django 3+.
|
2020/03/30
|
[
"https://Stackoverflow.com/questions/60927188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6619548/"
] |
For `Django 3.0` and `Django 3.1` I managed to have it working with `mysql-connector-python 8.0.22`. See this <https://dev.mysql.com/doc/relnotes/connector-python/en/news-8-0-22.html>.
|
Connector/Python still supports Python 2.7, which was dropped by Django 3.
We are currently working on adding support for Django 3, stay tuned.
| 7,830
|
11,144,245
|
I realize that this is a known [problem](http://python.6.n6.nabble.com/Internationalization-and-caching-not-working-td84364.html), [problem](http://www.mail-archive.com/django-users@googlegroups.com/msg63780.html), but I still have not found an adequate solution.
I want to use the @cache\_page for a some views in my Django apps, like so:
```
@cache_page(24 * 60 * 60)
def some_view(request):
...
```
The problem is that I am also using i18n with a language switcher to switch languages of each page. So, if I turn on the caching I do not get the results I expect. It seems I get whatever was the last cached page.
I have tried this:
```
@cache_page(24 * 60 * 60)
@vary_on_headers('Content-Language', 'Accept-Language')
def some_view(request):
...
```
**EDIT** ...and this:
```
@cache_page(24 * 60 * 60)
@vary_on_cookie
def some_view(request):
...
```
**END EDIT**
But I get the same results.
Of course, if I remove the caching everything works as expected.
Any help would be MUCH appreciated.
|
2012/06/21
|
[
"https://Stackoverflow.com/questions/11144245",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/791335/"
] |
1. How long is a string? It depends on how the server is configured. And "application" could be a single form or hundreds. Only a test can tell. In general: build a [high performance server](http://www.wissel.net/blog/d6plinks/SHWL-7RB3P5) preferably with 64Bit architecture and lots of RAM. Make that RAM [available for the JVM](http://xpageswiki.com/web/youatnotes/wiki-xpages.nsf/dx/Memory_Usage_and_Performance). If the applications use attachments, use DAOS, put it on a separate disk - and of course make sure you have the latest version of Domino (8.5.3FP1 at time of this writing)
2. There is the [XPages Toolbox](http://www.openntf.org/internal/home.nsf/project.xsp?action=openDocument&name=xpages%20toolbox) that includes a memory and CPU profiler.
3. It depends on the type of application. Clever use of the scopes for caching, Expression Language and beans instead of SSJS. You leak memory whey you forget `.recycle`. Hire an experienced lead developer and [read the book](http://www.ibmpressbooks.com/bookstore/product.asp?isbn=0132486318) also [the other one](http://www.ibmpressbooks.com/bookstore/product.asp?isbn=0132901811) and [two](http://www.ibmpressbooks.com/bookstore/product.asp?isbn=0132943050). Consider threading off longer running code, so users don't need to wait.
4. Depends on your needs. The general lessons of Domino development apply when it comes to db operations, so FTSearch over DBSearch, scope usage over @DBColumn for parameters. EL over SSJS.
5. Typical errors include: all code in the XPages -> use script libraries. Too much @dblookup, @dbcolumn instead of scope. Validation in buttons instead of validators. Violation of decomposition principles. Forgetting to use .recycle(). Designing applications "like old Notes screens" instead of single-page interaction. Too little use of partial refresh. No use of caching. Too little object orientation (creating function graves in script libraries).
6. This is a summary of question 1-5, nothing new to answer
7. When clustering Domino servers for XPages and putting a load balancer in front, the load balancer needs to be configured to keep a session on the same server, so partial refreshes and Ajax calls reach the server that has the component tree rendered for that user.
|
1. It depends on the server setup. I have, for example, an XPages extranet with 12000 registered users spanning approximately 20 XPages applications. It runs on one Windows 2003 server with 4GB RAM and a quad-core CPU. The data amount is about 60GB across these 20 applications. No DAOS, no beans, just SSJS. Performance is excellent. So when I upgrade this installation to 64-bit and DAOS, the applications will scale even more. 64-bit and lots of RAM are the key to serving a lot of users.
2. I haven't done anything around this
3. Make sure to recycle when you do document loops. Use the openntf.org debug toolbar; it will save a lot of time until we have a debugger for XPages.
4. Always keep in mind that whatever you do will be done by several users, so try to cut down the number of lookups or getElementByKey calls. Try to use ViewNavigator when you can.
5. It all depends on how many users use the system concurrently. If you have 10000 - 15000 concurrent users, then you have to look at what the applications do and how many users will use the same application at the same time.
Those are my insights into the question.
| 7,833
|
62,552,693
|
I'm a newbie when it comes to Python. I have some Python code in an Azure SQL notebook which gets run by a scheduled job. The job notebook runs a couple of SQL notebooks. If the 1st notebook errors, I want an exception to be thrown so that the scheduled job shows as failed, and I don't want the subsequent SQL notebook to run.
```
%python
try:
dbutils.notebook.run("/01. SMETS1Mig/" + dbutils.widgets.get("env_parent_directory") + "/02 Processing Curated Staging/02 Build - Parameterised/STA_1A - CS note Issued", 6000, {
"env_ingest_db": dbutils.widgets.get("env_ingest_db")
, "env_stg_db": dbutils.widgets.get("env_stg_db")
, "env_tech_db": dbutils.widgets.get("env_tech_db")
})
except :
print ' Failure in STA_1S'
try:
dbutils.notebook.run("/01. SMETS1Mig/" + dbutils.widgets.get("env_parent_directory") + "/02 Processing Curated Staging/02 Build - Parameterised/Output CS Notes", 6000, {
"env_ingest_db": dbutils.widgets.get("env_ingest_db")
, "env_stg_db": dbutils.widgets.get("env_stg_db")
, "env_tech_db": dbutils.widgets.get("env_tech_db")
})
except :
print ' Error in Output CS Notes'
```
Am I heading in the right direction?
What's the best way to achieve this?
Many Thanks in advance
|
2020/06/24
|
[
"https://Stackoverflow.com/questions/62552693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11989075/"
] |
**Option 1**: Simply don't catch the exception. Just run the first command without the `try` ... `except`, and let the exception bubble up in the normal way. If it is raised (i.e. thrown), then the second command (or anything after it) will not be run. You would then only catch exceptions with the second command.
```
dbutils.notebook.run(...first invocation...) # exceptions not caught - will bubble
try:
dbutils.notebook.run(...second invocation...)
except Exception as exc:
print(' Error in Output CS Notes: {}'.format(exc))
```
**Option 2**: If you want to do some kind of handling of your own, then do catch the exception, but re-`raise` it (or another exception) within the `except` block.
```
try:
dbutils.notebook.run(...first invocation...)
except Exception as exc:
print(' Failure in STA_1S: {}'.format(exc))
raise exc
# or for example: raise RuntimeError
# ... then do as before with second invocation...
```
(You can also use `raise` without any arguments instead of `raise exc`, as it will default to re-raising the original exception.)
|
Sorry - I did not read the question carefully enough.
My original answer is not what you are looking for.
I wrote:
>
> Add a `return` statement:
>
>
>
> ```
> try:
>     # ...
> except Exception as error:
>     print(f'Failure in STA_1S ({error})')
>     return
>
> ```
>
>
But you not only want to leave the function, but actually end its execution with an error code.
What you are looking for is [`sys.exit()`](https://docs.python.org/3/library/sys.html#sys.exit):
>
> Exit from Python. This is implemented by raising the SystemExit
> exception, so cleanup actions specified by finally clauses of try
> statements are honored, and it is possible to intercept the exit
> attempt at an outer level.
>
>
> The optional argument arg can be an integer giving the exit status
> (defaulting to zero), or another type of object. If it is an integer,
> zero is considered “successful termination” and any nonzero value is
> considered “abnormal termination” by shells and the like. Most systems
> require it to be in the range 0–127, and produce undefined results
> otherwise. Some systems have a convention for assigning specific
> meanings to specific exit codes, but these are generally
> underdeveloped; Unix programs generally use 2 for command line syntax
> errors and 1 for all other kind of errors. If another type of object
> is passed, None is equivalent to passing zero, and any other object is
> printed to stderr and results in an exit code of 1. In particular,
> sys.exit("some error message") is a quick way to exit a program when
> an error occurs.
>
>
>
So, here is my revised answer:
```
import sys
try:
# ...
except Exception as error:
sys.exit(f'Failure in STA_1S ({error})')
```
| 7,834
|
11,026,205
|
I'm currently concatenating adjacent cells in excel to repeat common HTML elements and divs - it feels like I've gone down a strange excel path in developing my webpage, and I was wondering if an experienced web designer could let me know how I might accomplish my goals for the site with a more conventional method (aiming to use python and mysql).
1. I have about 40 images in my site. On this page I want to see them all lined up in a grid so I have three divs next to each other on each row, all floating left.
2. Instead of manually typing all the code needed for each row of images, I started concatenating repeating portions of the code with distinct portions of the code. I took the four div classes and separated the code that would need to change for each image (src="XXX" and "XXX").
Example:
```
         Column D           Column E        Column F
Row 1:   <div><img src="    filename.jpg    "></div>
```
The formula to generate my HTML looks like this:
= D1 & E1 & F1
I'm sure it would be easier to create a MySQL database with the filepaths and attributes saved for each of my images, so I could look through the data with a scripting language. Can anyone offer their advice or a quick script for automating the html generation?
|
2012/06/14
|
[
"https://Stackoverflow.com/questions/11026205",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1152532/"
] |
Wow, that sounds really painful.
If all you have is 40 images that you want to generate HTML for, and the rest of your site is static, it may be simplest just to have a single text file with each line containing an image file path. Then, use Python to look at each line, generate the appropriate HTML, and concatenate it.
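For illustration, a minimal sketch of that idea (the file name `images.txt` is an assumption, not from the question):
```
html_parts = []
with open('images.txt') as f:  # assumed: one image path per line
    for line in f:
        path = line.strip()
        if path:
            html_parts.append('<div><img src="%s"></div>' % path)

print('\n'.join(html_parts))
```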
If your site has more complex interaction requirements, then Django could be the way to go. Django is a great Python web app framework, and it supports MySQL, as well as a number of different db backends.
|
You could keep these and only these images in their own directory and then use simple shell scripts to generate that section of static html.
Assuming you already put the files in place maybe like this:
```
cp <all_teh_kitteh_images> images/grid
```
This command will generate html
```
for file in images/grid/*.jpg ; do echo "<div><img src=\"$file\"></div>" ; done
```
Oh, I'm sorry, I missed the Python part of your question. (IMO MySQL is overkill: you have no relations, so don't use a relational database.) Here is the same thing in Python.
```
import glob
for file in glob.glob('images/grid/*.jpg'):
print "<div><img src=\"%s\"></div>" % file
```
| 7,837
|
24,294,015
|
I tried to install psycopg2 for python2.6 but my system also has 2.7 installed.
I did
```
>sudo -E pip install psycopg2
Downloading/unpacking psycopg2
Downloading psycopg2-2.5.3.tar.gz (690kB): 690kB downloaded
Running setup.py (path:/private/var/folders/yw/qn50zv_151bfq2vxhc60l8s5vkymm3/T/pip_build_root/psycopg2/setup.py) egg_info for package psycopg2
Installing collected packages: psycopg2
Running setup.py install for psycopg2
building 'psycopg2._psycopg' extension
cc -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -Qunused-arguments -Qunused-arguments -arch x86_64 -arch i386 -pipe -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.3 (dt dec pq3 ext)" -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -DPG_VERSION_HEX=0x09010B -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -I. -I/usr/local/Cellar/postgresql91/9.1.11/include -I/usr/local/Cellar/postgresql91/9.1.11/include/server -c psycopg/psycopgmodule.c -o build/temp.macosx-10.9-intel-2.7/psycopg/psycopgmodule.o
...
```
You can see that pip is associating psycopg2 with python2.7 during the install process. This pip install completed with a reported success result but when I ran a 2.6 script after that that imports psycopg2, it couldn't find it:
```
import psycopg2
ImportError: No module named psycopg2
```
How do I install psycopg2 for python2.6 when there is also 2.7 installed?
|
2014/06/18
|
[
"https://Stackoverflow.com/questions/24294015",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1312080/"
] |
You can use [shoryuken](https://github.com/phstc/shoryuken/).
It will consume your messages continuously as long as your queue has messages.
```
shoryuken -r your_worker.rb -C shoryuken.yml \
-l log/shoryuken.log -p shoryuken.pid -d
```
|
As you've probably already discovered, there isn't one obvious **right way™** to handle this kind of thing. It depends a lot on what work you do for each job, the size of your app and infrastructure, and your personal preferences on APIs, message-queuing philosophies, and architecture.
That said, I'd probably lean towards option 2 based on your description. Sidekiq and delayed\_job don't speak SQS, and while you could teach them with something like [sidekiq-sqs](https://github.com/jmoses/sidekiq-sqs), it sounds like you might outgrow them pretty quick. Unless you need your Rails environment available to your workers, you'd have better luck separating your queue consumers into distinct applications, which makes it easy to scale horizontally just by starting more processes. It also allows you to further decouple the workers from your Rails app, which can make things easier to deploy and administer.
Option 3 is a non-starter IMO. You'll want to have a daemon running to process jobs as they come in, and if rake has to load your environment on each job, things are going to get sloooow.
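To make the "consumer daemon" idea concrete, here is a minimal sketch of such a worker in Python with boto3, purely for illustration (the queue URL and `process_job` are placeholders, not from the question):
```
import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-jobs'  # placeholder

def process_job(body):
    # placeholder for the real work done per message
    print('processing: %s' % body)

while True:
    resp = sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)  # long polling
    for msg in resp.get('Messages', []):
        process_job(msg['Body'])
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg['ReceiptHandle'])
```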
| 7,838
|
46,582,577
|
I need help. I'm trying to filter out and write to another csv file the data collected after 10769 s in the column elapsed\_seconds, together with the acceleration magnitude. However, I'm getting KeyError: 0...
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data = pd.read_csv(accelDataPath)
data.columns = ['t', 'x', 'y', 'z']
# calculate the magnitude of acceleration
data['m'] = np.sqrt(data['x']**2 + data['y']**2 + data['z']**2)
data['datetime'] = pd.DatetimeIndex(pd.to_datetime(data['t'], unit = 'ms').dt.tz_localize('UTC').dt.tz_convert('US/Eastern'))
data['elapsed_seconds'] = (data['datetime'] - data['datetime'].iloc[0]).dt.total_seconds()
i=0
csv = open("filteredData.csv", "w+")
csv.write("Event at, Magnitude \n")
while (i < len(data[data.elapsed_seconds > 10769])):
csv.write(str(data[data.elapsed_seconds > 10769][i]) + ", " + str(data[data.m][i]) + "\n")
csv.close()
```
Error that I am getting is:
```
Traceback (most recent call last):
File "C:\Users\Desktop\AnalyzingData.py", line 37, in <module>
csv.write(str(data[data.elapsed_seconds > 10769][i]) + ", " + str(data[data.m][i]) + "\n")
File "C:\python\lib\site-packages\pandas\core\frame.py", line 1964, in __getitem__
return self._getitem_column(key)
File "C:\python\lib\site-packages\pandas\core\frame.py", line 1971, in _getitem_column
return self._get_item_cache(key)
File "C:\python\lib\site-packages\pandas\core\generic.py", line 1645, in _get_item_cache
values = self._data.get(item)
File "C:\python\lib\site-packages\pandas\core\internals.py", line 3590, in get
loc = self.items.get_loc(item)
File "C:\python\lib\site-packages\pandas\core\indexes\base.py", line 2444, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas\_libs\index.pyx", line 132, in pandas._libs.index.IndexEngine.get_loc (pandas\_libs\index.c:5280)
File "pandas\_libs\index.pyx", line 154, in pandas._libs.index.IndexEngine.get_loc (pandas\_libs\index.c:5126)
File "pandas\_libs\hashtable_class_helper.pxi", line 1210, in pandas._libs.hashtable.PyObjectHashTable.get_item (pandas\_libs\hashtable.c:20523)
File "pandas\_libs\hashtable_class_helper.pxi", line 1218, in pandas._libs.hashtable.PyObjectHashTable.get_item (pandas\_libs\hashtable.c:20477)
KeyError: 0
```
|
2017/10/05
|
[
"https://Stackoverflow.com/questions/46582577",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2995019/"
] |
change this line
```
csv.write(
str(data[data.elapsed_seconds > 10769][i]) + ", " + str(data[data.m][i]) + "\n"
)
```
To this:
```
csv.write(
str(data[data.elapsed_seconds > 10769].iloc[i]) + ", " + str(data['m'].iloc[i]) + "\n"
)
```
Also, notice that you are not increasing `i` (with `i += 1`) inside the while loop.
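For completeness, a sketch of the corrected loop (reusing `data` from the question): filter once, index positionally with `.iloc`, and advance `i`:
```
subset = data[data.elapsed_seconds > 10769]
with open("filteredData.csv", "w") as out:
    out.write("Event at, Magnitude \n")
    i = 0
    while i < len(subset):
        row = subset.iloc[i]
        out.write(str(row['elapsed_seconds']) + ", " + str(row['m']) + "\n")
        i += 1
```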
---
Or, better, use `df.to_csv` as follows:
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data = pd.read_csv(accelDataPath)
data.columns = ['t', 'x', 'y', 'z']
# calculate the magnitude of acceleration
data['m'] = np.sqrt(data['x']**2 + data['y']**2 + data['z']**2)
data['datetime'] = pd.DatetimeIndex(pd.to_datetime(data['t'], unit = 'ms').dt.tz_localize('UTC').dt.tz_convert('US/Eastern'))
data['elapsed_seconds'] = (data['datetime'] - data['datetime'].iloc[0]).dt.total_seconds()
# write to csv using data.to_csv
data[data.elapsed_seconds > 10769][['elapsed_seconds', 'm']].to_csv("filteredData.csv",
sep=",",
index=False)
```
|
I had the same issue, which was resolved by following this [recommendation](https://github.com/influxdata/influxdb-python/issues/497). In short, instead of `df[0]`, do an explicit `df['columnname']`
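A tiny sketch of the difference, with made-up data:
```
import pandas as pd

df = pd.DataFrame({'elapsed_seconds': [1.0, 2.0], 'm': [9.8, 9.7]})
# df[0] raises KeyError: 0 -- there is no column labeled 0
print(df['elapsed_seconds'])  # an explicit column name works
```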
| 7,839
|
72,429,361
|
Hello, I am a beginner programmer in Python and I am having trouble with this code. It is a rock-paper-scissors game. I am not finished yet, but it is supposed to print "I win" when the user picks Rock and the program does not pick Scissors. However, when the program picks Paper and the user picks Rock, it does not print "I win". I would like some help, thanks. EDIT - the answer was to adjust the indentation of the `else` before "I win"; thank you everybody.
```
import random
ask0 = input("Rock, Paper, or Scissors? ")
list0 = ["Rock", "Paper", "Scissors"]
r0 = (random.choice(list0))
print("I pick " + r0)
if ask0 == r0:
print("Tie")
elif ask0 == ("Rock"):
if r0 == ("Scissors"):
print("You Win")
else:
print("I win")
```
|
2022/05/30
|
[
"https://Stackoverflow.com/questions/72429361",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19223648/"
] |
I had the same issue with "testImplementation 'io.cucumber:cucumber-java8:7.3.3'" (Gradle).
I changed it to "testImplementation 'io.cucumber:cucumber-java8:7.0.0'", and when I ran the test again I got a correct error message about the real problem, which was "Unrecognized field 'programId'" (for example); then I could fix the problem (a missing field in my class).
|
I added the Jackson JDK8 datatype dependency to my pom.xml file and it fixed this problem:
<https://mvnrepository.com/artifact/com.fasterxml.jackson.datatype/jackson-datatype-jdk8>
| 7,840
|
71,979,765
|
If I have python list like
```py
pyList=['x@x.x','y@y.y']
```
And I want to convert it to a JSON array, adding `{}` around every object, so it should look like this:
```py
arrayJson=[{"email":"x@x.x"},{"email":"y@y.y"}]
```
Any idea how to do that?
|
2022/04/23
|
[
"https://Stackoverflow.com/questions/71979765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18918903/"
] |
You can achieve this by using built-in [json](https://docs.python.org/3/library/json.html#json.dumps) module
```
import json
arrayJson = json.dumps([{"email": item} for item in pyList])
```
|
Try to Google this kind of stuff first. :)
```
import json
array = [1, 2, 3]
jsonArray = json.dumps(array)
```
By the way, the result you asked for cannot be achieved with the list you provided.
You need to use Python dictionaries to get JSON objects. The conversion is like below:
```
Python -> JSON
list -> array
dictionary -> object
```
And here is the link to the docs
<https://docs.python.org/3/library/json.html>
| 7,841
|
34,328,759
|
I'd like to put together a command that will print out a string of 32 hexadecimal digits. I've got a Python script that works:
```
python -c 'import random ; print "".join(map(lambda t: format(t, "02X"), [random.randrange(256) for x in range(16)]))'
```
This generates output like:
```
6EF6B30F9E557F948C402C89002C7C8A
```
Which is what I need.
On a Mac, I can even do this:
```
uuidgen | tr -d '-'
```
However, I don't have access to the more sophisticated scripting languages ruby and python, and I won't be on a Mac (so no uuidgen). I need to stick with more bash'ish tools like sed, awk, /dev/random because I'm on a limited platform. Is there a way to do this?
|
2015/12/17
|
[
"https://Stackoverflow.com/questions/34328759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1002430/"
] |
If you are looking for a single command and have openssl installed, see below. This generates 16 random bytes (32 hex symbols) and encodes them in hex (-base64 is also supported).
```
openssl rand -hex 16
```
|
If you want to generate output of **arbitrary length, including even/odd number of characters**:
```sh
cat /dev/urandom | hexdump --no-squeezing -e '/1 "%x"' | head -c 31
```
Or to maximize efficiency over readability/composeability:
```
hexdump --no-squeezing -e '/1 "%x"' -n 15 /dev/urandom
```
| 7,843
|
62,175,053
|
I'm working on Windows 7 64bits with Anaconda 3. On my environment Nifti, I have installed Tensorflow 2.1.0, Keras 2.3.1 and Python 3.7.7.
On Visual Studio Code there is a problem with all of these imports:
```
from tensorflow.python.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Conv2D, Conv2DTranspose, UpSampling2D, MaxPooling2D, Flatten, ZeroPadding2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
```
I get these errors:
```
No name 'python' in module 'tensorflow'
Unable to import 'tensorflow.python.keras.models'
Unable to import 'tensorflow.keras.layers'
Unable to import 'tensorflow.keras.preprocessing.image'
Unable to import 'tensorflow.keras.optimizers'
```
Visual Studio Code is using the same Anaconda environment: `D:\Users\VansFannel\Programs\anaconda3\envs\nifti`. I have checked it via the "Python: Select Interpreter" option in Visual Studio Code.
If I run `python -c 'from tensorflow.python.keras.models import Model'` in a CMD shell with the nifti environment activated, I don't get any error.
If I do with **iPython**:
```
from tensorflow.python.keras.models import Model
```
I don't get any error either.
I have checked `python.pythonpath` settings, and it points to: `D:\Users\VansFannel\Programs\anaconda3\envs\nifti`
And in the bottom left corner I can see:
[](https://i.stack.imgur.com/eruXC.png)
When I open a new Terminal on Visual Studio Code, I get these messages:
>
>
> ```
> Microsoft Windows [Versión 6.1.7601]
> Copyright (c) 2009 Microsoft Corporation. Reservados todos los derechos.
>
> D:\Sources\Repos\University\TFM\PruebasPython\Nifty>D:/Usuarios/VansFannel/Programs/anaconda3/Scripts/activate
>
> (base) D:\Sources\Repos\University\TFM\PruebasPython\Nifty>conda activate nifti
>
> (nifti) D:\Sources\Repos\University\TFM\PruebasPython\Nifty>
>
> ```
>
>
If I run the code in Visual Studio Code with `Ctrl + F5`, **it runs** without any errors, although it displays the errors on the `Problems` tab.
With pyCharm, I don't get any errors.
How can I fix this problem?
|
2020/06/03
|
[
"https://Stackoverflow.com/questions/62175053",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/68571/"
] |
I tried your code, and my suggestion is to switch from pylint to another linter. Maybe you can give flake8 a try:

```
"python.linting.pylintEnabled": false,
"python.linting.flake8Enabled": true,
```

This problem occurs because pylint can't search the path of the Anaconda environment. Since 'tensorflow' can only be installed through conda, you have to choose the environment created by Anaconda, but pylint doesn't follow the change of environment to update its search path, so it reports an import error.
|
**If you're using Anaconda Virtual Environment**
1. Close all **Open Terminals** in VS Code
2. Open a new terminal
3. Write the path to your activate folder of Anaconda in Terminal
>
> Example: `E:/Softwares/AnacondaFolder/Scripts/activate`
>
>
>
This should now show **(base)** written at the start of your folder path
4. Now, conda activate
>
> Example: `conda activate Nifti`
>
>
>
This should now show **(Nifti)** written at the start of your folder path
Now, if you import something, VS Code will recognize it.
| 7,853
|
14,737,499
|
I followed the top solution to Flattening an irregular list of lists in python ([Flatten (an irregular) list of lists](https://stackoverflow.com/questions/2158395/flatten-an-irregular-list-of-lists-in-python)) using the following code:
```
import collections

def flatten(l):
for el in l:
if isinstance(el, collections.Iterable) and not isinstance(el, basestring):
for sub in flatten(el):
yield sub
else:
yield el
L = [[[1, 2, 3], [4, 5]], 6]
L=flatten(L)
print L
```
And got the following output:
"generator object flatten at 0x100494460"
I'm not sure which packages I need to import or what syntax I need to change to get this to work for me.
|
2013/02/06
|
[
"https://Stackoverflow.com/questions/14737499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1009215/"
] |
Add a `data-select` attribute to those links:
```
<a href="#contact" data-select="mother@mail.com">Send a mail to mother!</a>
<a href="#contact" data-select="father@mail.com">Send a mail to father!</a>
<a href="#contact" data-select="sister@mail.com">Send a mail to sister!</a>
<a href="#contact" data-select="brother@mail.com">Send a mail to brother!</a>
```
Then use the value of the clicked link to set the value of the `select` element:
```js
var $select = $('#recipient');
$('a[href="#contact"]').click(function () {
$select.val( $(this).data('select') );
});
```
Here's the fiddle: <http://jsfiddle.net/Dw6Yv/>
---
If you don't want to add those `data-select` attributes to your markup, you can use this:
```js
var $select = $('#recipient'),
$links = $('a[href="#contact"]');
$links.click(function () {
$select.prop('selectedIndex', $links.index(this) );
});
```
Here's the fiddle: <http://jsfiddle.net/Bxz24/>
Just keep in mind that this will require your links to be in the exact same order as the `select` options.
|
If the order is always the same you can do this:
```
$("a").click(function(){
$("#recipient").prop("selectedIndex", $(this).index());
});
```
Otherwise do this by defining the index on the link:
```
<a href="#contact" data-index="0">Send a mail to mother!</a>
<a href="#contact" data-index="1">Send a mail to father!</a>
<a href="#contact" data-index="2">Send a mail to sister!</a>
<a href="#contact" data-index="3">Send a mail to brother!</a>
<form id="contact">
<select id="recipient">
<option value="mother@mail.com">Mother</option>
<option value="father@mail.com">Father</option>
<option value="sister@mail.com">Sister</option>
<option value="brother@mail.com">Brother</option>
</select>
</form>
$("a").click(function(){
$("#recipient").prop("selectedIndex", $(this).data("index"));
});
```
| 7,854
|
50,925,488
|
So I have some problems with my dictionaries in Python. For example, I have dictionaries like below:
```
d1 = {123456: xyz, 892019: kjl, 102930491: [plm, kop]}
d2 = {xyz: 987, kjl: 0902, plm: 019240, kop: 09829}
```
And I would like to have nested dictionary that looks something like that.
```
d={123456 :{xyz:987}, 892019:{kjl:0902}, 102930491:{plm:019240,kop:09829}}
```
is this possible? I was searching for nested dictionaries but nothing works for me.
|
2018/06/19
|
[
"https://Stackoverflow.com/questions/50925488",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9961204/"
] |
If you have two DNS names that can be used as an active/active or active/passive pair for your API endpoints, you can add them to a Traffic Manager profile and set the routing method you want to use. As indicated in an earlier answer, use only the DNS name, not the protocol identifier (http/https), when you add an endpoint to a Traffic Manager profile.
|
Traffic Manager only wants the DNS name (FQDN) for external endpoints, not the protocol. So drop the http: or https: from your API Management address and it will be accepted as an external endpoint.
Or is your problem not with adding the endpoint, but with the health endpoint monitoring? That can happen because the endpoint for the API Management gateway returns a 404 by default, as it does not have a publicly exposed default page.
| 7,857
|
45,199,083
|
I am new to python and I had no difficulty with one example of learning try and except blocks:
```
try:
2 + "s"
except TypeError:
print "There was a type error!"
```
Which outputs what one would expect:
```
There was a type error!
```
However, when trying to catch a syntax error like this:
```
try:
print 'Hello
except SyntaxError:
print "There was a syntax error!"
finally:
print "Finally, this was printed"
```
I would ironically get the EOL syntax error. I was trying this a few times in the jupyter notebook environment and only when I moved over to a terminal in VIM did it make sense to me that the compiler was interpreting the except and finally code blocks as the rest of the incomplete string.
My question is how would one go about syntax error handling in this format? Or is there a more efficient (pythonic?) way of going about this?
It might not be something that one really comes across but it would be interesting to know if there was a clean workaround.
Thanks!
|
2017/07/19
|
[
"https://Stackoverflow.com/questions/45199083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5941556/"
] |
The reason you cannot use a try/except block to capture SyntaxErrors is that these errors happen before your code executes.
High level steps of Python code execution
1. Python interpreter translates the Python code into executable instructions. (Syntax Error Raised)
2. Instructions are executed. (Try/Except block executed)
Since the error happens during step 1 you can not use a try/except to intercept them since it is only executed in step 2.
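If you really want to catch a `SyntaxError`, the broken code has to reach step 1 at runtime, for example by keeping it in a string and compiling it yourself. A minimal sketch:
```
code = "print('Hello"  # deliberately broken source kept in a string

try:
    compile(code, '<string>', 'exec')  # compilation happens here, at runtime
except SyntaxError:
    print("There was a syntax error!")
finally:
    print("Finally, this was printed")
```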
|
The answer is a piece of cake:
The `SyntaxError` nullifies the `except` and `finally` statements because they end up inside the unterminated string.
| 7,858
|
44,503,913
|
This is literally day 1 of python for me. I've coded in VBA, Java, and Swift in the past, but I am having a particularly hard time following guides online for coding a pdf scraper. Since I have no idea what I am doing, I keep running into a wall every time I want to test out some of the code I've found online.
**Basic Info**
* Windows 7 64bit
* python 3.6.0
* Spyder3
* I have many of the pdf related code packages (PyPDF2, pdfminer, pdfquery, pdfwrw, etc)
**Goals**
To create something in Python that allows me to convert PDFs from a folder into an Excel file (ideally) OR a text file (from which I will use VBA to convert).
**Issues**
Every time I try some sample code from guides I've found online, I run into syntax errors on the lines where I am calling the PDF that I want to test the code on. Some guide links and error examples are below. Should I be putting my test.pdf into the same folder as the .py file?
* [How to scrape tables in thousands of PDF files?](https://stackoverflow.com/questions/25125178/how-to-scrape-tables-in-thousands-of-pdf-files)
+ I got an invalid syntax error due to "for" on the last line
* PDFMiner guide ([Link](https://stackoverflow.com/questions/36902496/python-pdfminer-pdf-to-csv))
```html
runfile('C:/Users/U587208/Desktop/pdffolder/pdfminer.py', wdir='C:/Users/U587208/Desktop/pdffolder')
File "C:/Users/U587208/Desktop/pdffolder/pdfminer.py", line 79
print pdf_to_csv('test.pdf', separator, threshold)
^
SyntaxError: invalid syntax
```
|
2017/06/12
|
[
"https://Stackoverflow.com/questions/44503913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8149767/"
] |
It seems that the tutorials you are following use Python 2. There are usually few noticeable differences; the biggest is that in Python 3, print became a function, so
```
print()
```
I would recommend either changing your version of Python or finding a tutorial for Python 3. Hope this helps.
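For instance, the difference that triggers the tutorial's error (shown with a plain string, since `pdf_to_csv` is defined in the tutorial):
```
# Python 2 (what the tutorial uses): print "hello"
print("hello")  # Python 3: print is a function, so parentheses are required
```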
|
Here is an example of how to extract information from a PDF:
[Pdfminer python 3.5](https://stackoverflow.com/questions/39854841/pdfminer-python-3-5/40877143#40877143)
But it does not solve the problem of the tables you want to export to Excel. Commercial products are probably better at that...
| 7,859
|
61,365,111
|
Friends,
yesterday I used the Python code below to successfully retrieve some comments on YouTube videos:
```
!pip install --upgrade google-api-python-client
import os
import googleapiclient.discovery
DEVELOPER_KEY = "my_key"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
youtube = googleapiclient.discovery.build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, developerKey=DEVELOPER_KEY)
youtube
```
It seems that the build function is suddenly not working. I have even refreshed the API, but in Google Colab I keep receiving the following error message:
```
UnknownApiNameOrVersion Traceback (most recent call last)
<ipython-input-21-064a9ae417b9> in <module>()
13
14
---> 15 youtube = googleapiclient.discovery.build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, developerKey=DEVELOPER_KEY)
16 youtube
17
1 frames
/usr/local/lib/python3.6/dist-packages/googleapiclient/discovery.py in build(serviceName, version, http, discoveryServiceUrl, developerKey, model, requestBuilder, credentials, cache_discovery, cache, client_options)
241 raise e
242
--> 243 raise UnknownApiNameOrVersion("name: %s version: %s" % (serviceName, version))
244
245
UnknownApiNameOrVersion: name: youtube version: V3
```
If anyone could help: I'm using this type of authentication because I don't know how to put the credentials file in Google Drive and open it in Colab. But it worked yesterday:
[Results for yesterday´s run](https://i.stack.imgur.com/DKZOg.png)
Thank you very much in advance. And sorry for anything; I'm new to the community.
Regards
|
2020/04/22
|
[
"https://Stackoverflow.com/questions/61365111",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13380927/"
] |
The problem is on the server side as discussed [here](https://github.com/googleapis/google-api-python-client/issues/882). Until the server problem is fixed, this solution may help (as suggested by [@busunkim96](https://github.com/googleapis/google-api-python-client/issues/882#issuecomment-618514530)):
First, download this json file: <https://www.googleapis.com/discovery/v1/apis/youtube/v3/rest>
Then:
```py
import json
from googleapiclient import discovery
# Path to the json file you downloaded:
path_json = '/path/to/file/rest'
with open(path_json) as f:
service = json.load(f)
# Replace with your actual API key:
api_key = 'your API key'
yt = discovery.build_from_document(service,
developerKey=api_key)
# Make a request to see whether this works:
request = yt.search().list(part='snippet',
channelId='UCYO_jab_esuFRV4b17AJtAw',
publishedAfter='2020-02-01T00:00:00.000Z',
publishedBefore='2020-04-23T00:00:00.000Z',
order='date',
type='video',
maxResults=50)
response = request.execute()
```
|
I was able to resolve this issue by putting `static_discovery=False` into the build command.
Examples:
**Previous Code**
`self.youtube = googleapiclient.discovery.build(API_SERVICE_NAME, API_VERSION, credentials=creds`
**New Code**
`self.youtube = googleapiclient.discovery.build(API_SERVICE_NAME, API_VERSION, credentials=creds, static_discovery=False)`
For some reason this issue only arose when I compiled my program using GitHub Actions.
| 7,861
|
47,043,606
|
I created and saved a simple NN in TensorFlow:
```
import tensorflow as tf
import numpy as np
x = tf.placeholder(tf.float32, [1, 1],name='input_placeholder')
y = tf.placeholder(tf.float32, [1, 1],name='input_placeholder')
W = tf.get_variable('W', [1, 1])
layer = tf.matmul(x, W, name='layer')
loss = tf.subtract(y,layer)
train_step = tf.train.AdagradOptimizer(0.1).minimize(loss, name='train_step')
all_saver = tf.train.Saver()
sess = tf.Session()
sess.run(tf.global_variables_initializer())
x_test = np.zeros((1, 1))
y_test = np.zeros((1, 1))
some_output = sess.run([train_step],feed_dict = {x:x_test,y:y_test})
save_path = r'C:\Temp\tf_exp\save_folder\test'
all_saver.save(sess,save_path)
```
Then I took all files in `C:\Temp\tf_exp\save_folder\` and moved them (exactly moved not copied) to `C:\Temp\tf_exp\restore_folder`. The files that I moved are:
```
checkpoint
test.data-00000-of-00001
test.index
test.meta
```
Then I tried to restore nn from new location:
```
meta_path = r'C:\Temp\tf_exp\restore_folder\test.meta'
checkpoint_path = r'C:\Temp\tf_exp\restore_folder\\'
print(checkpoint_path)
new_all_saver = tf.train.import_meta_graph(meta_path)
sess=tf.Session()
new_all_saver.restore(sess, tf.train.latest_checkpoint(checkpoint_path))
graph = tf.get_default_graph()
layer= graph.get_tensor_by_name('layer:0')
x=graph.get_tensor_by_name('input_placeholder:0')
```
Here is error that restore code generated:
```
C:\Temp\tf_exp\restore_folder\\
ERROR:tensorflow:Couldn't match files for checkpoint C:\Temp\tf_exp\save_folder\test
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-9af4e683fc4b> in <module>()
5 new_all_saver = tf.train.import_meta_graph(meta_path)
6 sess=tf.Session()
----> 7 new_all_saver.restore(sess, tf.train.latest_checkpoint(checkpoint_path))
8 graph = tf.get_default_graph()
9 layer= graph.get_tensor_by_name('layer:0')
~\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\training\saver.py in restore(self, sess, save_path)
1555 return
1556 if save_path is None:
-> 1557 raise ValueError("Can't load save_path when it is None.")
1558 logging.info("Restoring parameters from %s", save_path)
1559 sess.run(self.saver_def.restore_op_name,
ValueError: Can't load save_path when it is None.
```
How can I avoid it? What is the proper way of moving files around?
**Update:**
As I search for the answer, it looks like using relative paths is the way to go. But I am not sure how to use them. Should I change Python's current working directory to where I save the model data?
|
2017/10/31
|
[
"https://Stackoverflow.com/questions/47043606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1700890/"
] |
Just add `save_relative_paths=True` when creating `tf.train.Saver()`:
```
# original code: all_saver = tf.train.Saver()
all_saver = tf.train.Saver(save_relative_paths=True)
```
Please refer to [official doc](https://www.tensorflow.org/api_docs/python/tf/train/Saver) for more details.
|
You could try to restore by:
```
with tf.Session() as sess:
    saver = tf.train.import_meta_graph("/path/to/test.meta")
    saver.restore(sess, "path/to/checkpoints/test")
```
In this case, because you named the checkpoint "test", you got 3 files:
```
test.data-00000-of-00001
test.index
test.meta
```
Therefore, when you restore, you need to pass the path to the checkpoint folder + "**/test**". The system will automatically load the corresponding data and index files.
| 7,862
|
45,385,832
|
I downloaded the nitroshare.tar.gz file as I want to install it on my Kali Linux.
I followed the instructions on their GitHub page to install it and downloaded all the packages that are required for the installation.
When I use the cmake command it shows me this error:
```
Unknown cmake command qt5_wrap_ui
```
Here is the `CMakeLists.txt` file:
```
cmake_minimum_required(VERSION 3.7)
configure_file(config.h.in "${CMAKE_CURRENT_BINARY_DIR}/config.h")
set(SRC
application/aboutdialog.cpp
application/application.cpp
application/splashdialog.cpp
bundle/bundle.cpp
device/device.cpp
device/devicedialog.cpp
device/devicelistener.cpp
device/devicemodel.cpp
icon/icon.cpp
icon/trayicon.cpp
settings/settings.cpp
settings/settingsdialog.cpp
transfer/transfer.cpp
transfer/transfermodel.cpp
transfer/transferreceiver.cpp
transfer/transfersender.cpp
transfer/transferserver.cpp
transfer/transferwindow.cpp
util/json.cpp
util/platform.cpp
main.cpp
)
if(WIN32)
set(SRC ${SRC} data/resource.rc)
endif()
if(APPLE)
set(SRC ${SRC}
data/icon/nitroshare.icns
transfer/transferwindow.mm
)
set_source_files_properties("data/icon/nitroshare.icns" PROPERTIES
MACOSX_PACKAGE_LOCATION Resources
)
endif()
if(QHttpEngine_FOUND)
set(SRC ${SRC}
api/apihandler.cpp
api/apiserver.cpp
)
endif()
if(APPINDICATOR_FOUND)
set(SRC ${SRC}
icon/indicatoricon.cpp
)
endif()
qt5_wrap_ui(UI
application/aboutdialog.ui
application/splashdialog.ui
device/devicedialog.ui
settings/settingsdialog.ui
transfer/transferwindow.ui
)
# Update the TS files from the source directory
file(GLOB TS data/ts/*.ts)
add_custom_target(ts
COMMAND "${Qt5_LUPDATE_EXECUTABLE}"
-silent
-locations relative
"${CMAKE_CURRENT_SOURCE_DIR}"
-ts ${TS}
)
# Create a directory for the compiled QM files
file(MAKE_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/data/qm")
# Create commands for each of the translation files
set(QM_XML)
set(QM)
foreach(_ts_path ${TS})
get_filename_component(_ts_name "${_ts_path}" NAME)
get_filename_component(_ts_name_we "${_ts_path}" NAME_WE)
set(_qm_name "${_ts_name_we}.qm")
set(_qm_path "${CMAKE_CURRENT_BINARY_DIR}/data/qm/${_qm_name}")
set(QM_XML "${QM_XML}<file>qm/${_qm_name}</file>")
list(APPEND QM "${_qm_path}")
add_custom_command(OUTPUT "${_qm_path}"
COMMAND "${Qt5_LRELEASE_EXECUTABLE}"
-compress
-removeidentical
-silent
"${_ts_path}"
-qm "${_qm_path}"
COMMENT "Generating ${_ts_name}"
)
endforeach()
# Create a target for compiling all of the translation files
add_custom_target(qm DEPENDS ${QM})
# Configure the i18n resource file
set(QRC_QM "${CMAKE_CURRENT_BINARY_DIR}/data/resource_qm.qrc")
configure_file(data/resource_qm.qrc.in "${QRC_QM}")
qt5_add_resources(QRC
data/resource.qrc
"${QRC_QM}"
)
add_executable(nitroshare WIN32 MACOSX_BUNDLE ${SRC} ${UI} ${QRC})
target_compile_features(nitroshare PRIVATE
cxx_lambdas
cxx_nullptr
cxx_strong_enums
cxx_uniform_initialization
)
# In order to support Retina, a special tag is required in Info.plist.
if(APPLE)
set_target_properties(nitroshare PROPERTIES
MACOSX_BUNDLE_INFO_PLIST "${CMAKE_CURRENT_SOURCE_DIR}/Info.plist.in"
MACOSX_BUNDLE_ICON_FILE "nitroshare.icns"
MACOSX_BUNDLE_GUI_IDENTIFIER "com.NathanOsman.NitroShare"
MACOSX_BUNDLE_BUNDLE_NAME "NitroShare"
)
target_link_libraries(nitroshare "-framework ApplicationServices")
endif()
target_link_libraries(nitroshare Qt5::Widgets Qt5::Network Qt5::Svg)
if(Qt5WinExtras_FOUND)
target_link_libraries(nitroshare Qt5::WinExtras)
endif()
if(Qt5MacExtras_FOUND)
target_link_libraries(nitroshare Qt5::MacExtras)
endif()
if(QHttpEngine_FOUND)
target_link_libraries(nitroshare QHttpEngine)
endif()
if(APPINDICATOR_FOUND)
target_include_directories(nitroshare PRIVATE ${APPINDICATOR_INCLUDE_DIRS})
target_link_libraries(nitroshare ${APPINDICATOR_LIBRARIES})
endif()
include(GNUInstallDirs)
install(TARGETS nitroshare
RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR}
BUNDLE DESTINATION .
)
if(WIN32)
# If windeployqt is available, use it immediately after the build completes to
# ensure that the dependencies are available.
include(DeployQt)
if(WINDEPLOYQT_EXECUTABLE)
windeployqt(nitroshare)
endif()
# If QHttpEngine was used, include it in the installer
if(QHttpEngine_FOUND)
add_custom_command(TARGET nitroshare POST_BUILD
COMMAND "${CMAKE_COMMAND}" -E
copy_if_different "$<TARGET_FILE:QHttpEngine>" "$<TARGET_FILE_DIR:nitroshare>"
COMMENT "Copying QHttpEngine..."
)
endif()
# If Inno Setup is available, provide a target for building an EXE installer.
find_package(InnoSetup)
if(INNOSETUP_EXECUTABLE)
configure_file(dist/setup.iss.in "${CMAKE_CURRENT_BINARY_DIR}/setup.iss")
add_custom_target(exe
COMMAND "${INNOSETUP_EXECUTABLE}"
/Q
/DTARGET_FILE_NAME="$<TARGET_FILE_NAME:nitroshare>"
"${CMAKE_CURRENT_BINARY_DIR}/setup.iss"
DEPENDS nitroshare
COMMENT "Building installer..."
)
endif()
endif()
if(APPLE)
# If QHttpEngine was used, copy the dynlib over and correct its RPATH
if(QHttpEngine_FOUND)
add_custom_command(TARGET nitroshare POST_BUILD
COMMAND "${CMAKE_COMMAND}" -E
make_directory "$<TARGET_FILE_DIR:nitroshare>/../Frameworks"
COMMAND "${CMAKE_COMMAND}" -E
copy_if_different
"$<TARGET_FILE:QHttpEngine>"
"$<TARGET_FILE_DIR:nitroshare>/../Frameworks/$<TARGET_SONAME_FILE_NAME:QHttpEngine>"
COMMAND install_name_tool
-change
"$<TARGET_SONAME_FILE_NAME:QHttpEngine>"
"@rpath/$<TARGET_SONAME_FILE_NAME:QHttpEngine>"
"$<TARGET_FILE:nitroshare>"
COMMENT "Copying QHttpEngine and correcting RPATH..."
)
endif()
# If macdeployqt is available, use it immediately after the build completes to
# copy the Qt frameworks into the application bundle.
include(DeployQt)
if(MACDEPLOYQT_EXECUTABLE)
macdeployqt(nitroshare)
endif()
# Create a target for building a DMG that contains the application bundle and a
# symlink to the /Applications folder.
set(sym "${CMAKE_BINARY_DIR}/out/Applications")
set(dmg "${CMAKE_BINARY_DIR}/nitroshare-${PROJECT_VERSION}-osx.dmg")
add_custom_target(dmg
COMMAND rm -f "${sym}" "${dmg}"
COMMAND ln -s /Applications "${sym}"
COMMAND hdiutil create
-srcfolder "${CMAKE_BINARY_DIR}/out"
-volname "${PROJECT_NAME}"
-fs HFS+
-size 30m
"${dmg}"
DEPENDS nitroshare
COMMENT "Building disk image..."
)
endif()
# On Linux, include the icons, manpage, .desktop file, and file manager extensions when installing.
if(${CMAKE_SYSTEM_NAME} MATCHES "Linux")
install(DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/dist/icons"
DESTINATION "${CMAKE_INSTALL_DATAROOTDIR}"
)
install(FILES "${CMAKE_CURRENT_SOURCE_DIR}/dist/nitroshare.1"
DESTINATION "${CMAKE_INSTALL_MANDIR}/man1"
)
install(FILES "${CMAKE_CURRENT_SOURCE_DIR}/dist/nitroshare.desktop"
DESTINATION "${CMAKE_INSTALL_DATAROOTDIR}/applications"
)
# Configure and install Python extensions for Nautilus-derived file managers
foreach(_file_manager Nautilus Nemo Caja)
string(TOLOWER ${_file_manager} _file_manager_lower)
set(_py_filename "${CMAKE_CURRENT_BINARY_DIR}/dist/nitroshare_${_file_manager_lower}.py")
configure_file("${CMAKE_CURRENT_SOURCE_DIR}/dist/nitroshare.py.in" "${_py_filename}")
install(FILES "${_py_filename}"
DESTINATION "${CMAKE_INSTALL_DATAROOTDIR}/${_file_manager_lower}-python/extensions"
RENAME nitroshare.py
)
endforeach()
endif()
```
|
2017/07/29
|
[
"https://Stackoverflow.com/questions/45385832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8282220/"
] |
CMake doesn't know the function `qt5_wrap_ui` because you did not import Qt5 or any of the functions it defines.
Before calling `qt5_wrap_ui`, add this:
```
find_package(Qt5 COMPONENTS Widgets REQUIRED)
```
|
I used this line in my CMakeLists.txt file:
`find_package(Qt5Widgets)`
I found this on the Qt5 documentation page.
| 7,864
|
16,300,464
|
I have two classes in a python source file.
```
class A:
def func(self):
pass
class B:
def func(self):
pass
def run(self):
self.func()
```
When my cursor is on class B's `self.func()` line and I press `CTRL`+`]`, it goes to class A's func method. But instead I would like it to go to B's func method.
|
2013/04/30
|
[
"https://Stackoverflow.com/questions/16300464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/323000/"
] |
The `<C-]>` command jumps to the *first* tag match, but it also takes a `[count]` to jump to another one.
Alternatively, you can use the `g<C-]>` command, which (like the `:tjump` Ex command) will list all matches and query you for where you want to jump to (when there are multiple matches).
|
Have a look at [Jedi-Vim](https://github.com/davidhalter/jedi-vim). It defines a new “go to definition” command, that will properly handle those situations.
| 7,865
|
3,040,716
|
I have a list of regexes in python, and a string. Is there an elegant way to check if the at least one regex in the list matches the string? By elegant, I mean something better than simply looping through all of the regexes and checking them against the string and stopping if a match is found.
Basically, I had this code:
```
list = ['something','another','thing','hello']
string = 'hi'
if string in list:
pass # do something
else:
pass # do something else
```
Now I would like to have some regular expressions in the list, rather than just strings, and I am wondering if there is an elegant solution to check for a match to replace `if string in list:`.
Thanks in advance.
|
2010/06/14
|
[
"https://Stackoverflow.com/questions/3040716",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/363078/"
] |
```
import re
regexes = [
"foo.*",
"bar.*",
"qu*x"
]
# Make a regex that matches if any of our regexes match.
combined = "(" + ")|(".join(regexes) + ")"
if re.match(combined, mystring):
print "Some regex matched!"
```
|
```
import re
regexes = [
# your regexes here
re.compile('hi'),
# re.compile(...),
# re.compile(...),
# re.compile(...),
]
mystring = 'hi'
if any(regex.match(mystring) for regex in regexes):
print 'Some regex matched!'
```
| 7,866
|
71,649,019
|
I've gone through multiple examples of data classification; however, either I didn't get them or they are not applicable in my case.
I have a list of values:
```
values = [-130,-110,-90,-80,-60,-40]
```
All I need to do is classify an input integer value into the appropriate "bin".
E.g., `input=-97` should get `index 1` (between -110 and -90), while `input=-140` should get 0.
I don't want to do it in `if...elif...else` style since the values could be dynamic.
What would be the most pythonic way to do it? I guess no pandas/numpy is necessary.
Regards
|
2022/03/28
|
[
"https://Stackoverflow.com/questions/71649019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11191256/"
] |
One of your `classes`-props is a `boolean`. You cannot push a `boolean` (true/false) to `className`.
You could `console.log(classes)`; then you will see which prop causes the warning.
|
It means at least one of the className values is a boolean instead of a string. We cannot say anything more with this piece of code.
| 7,876
|
47,508,790
|
I have the following JSON saved in a text file called test.xlsx.txt. The JSON is as follows:
```
{"RECONCILIATION": {0: "Successful"}, "ACCOUNT": {0: u"21599000"}, "DESCRIPTION": {0: u"USD to be accrued. "}, "PRODUCT": {0: "7500.0"}, "VALUE": {0: "7500.0"}, "AMOUNT": {0: "7500.0"}, "FORMULA": {0: "3 * 2500 "}}
```
The following is my python code:
```
import json

f = open(path_to_analysis_results,'r')
message = f.read()
datastore = json.loads(str(message))
print datastore
f.close()
```
With the json.loads, I get the error "*ValueError: Expecting property name: line 1 column 21 (char 20)*". I have tried with json.load, json.dump and json.dumps, with all of them giving various errors. All I want to do is to be able to extract the key and the corresponding value and write to an Excel file. I have figured out how to write data to an Excel file, but am stuck with parsing this json.
```
RECONCILIATION : Successful
ACCOUNT : 21599000
DESCRIPTION : USD to be accrued.
PRODUCT : 7500.0
VALUE : 7500.0
AMOUNT : 7500.0
FORMULA : 3 * 2500
```
I would like the data to be in the above format to be able to write them to an Excel sheet.
|
2017/11/27
|
[
"https://Stackoverflow.com/questions/47508790",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3163920/"
] |
Your txt file does not contain valid JSON.
For starters, keys must be strings, not numbers.
The `u"..."` notation is not valid either.
You should fix your JSON first (maybe run it through a linter such as <https://jsonlint.com/> to make sure it's valid).
|
As Mike mentioned, your text file is not a valid JSON. It should be like:
```
{"RECONCILIATION": {"0": "Successful"}, "ACCOUNT": {"0": "21599000"}, "DESCRIPTION": {"0": "USD to be accrued. "}, "PRODUCT": {"0": "7500.0"}, "VALUE": {"0": "7500.0"}, "AMOUNT": {"0": "7500.0"}, "FORMULA": {"0": "3 * 2500 "}}
```
Note: keys are within double quotes, as JSON requires. And your code should be (without **str()**):
```
import json
f = open(path_to_analysis_results,'r')
message = f.read()
print(message) # print message before, just to check it.
datastore = json.loads(message) # note: str() is not required. Message is already a string
print (datastore)
f.close()
```
| 7,879
|
31,303,728
|
Using pandas 0.16.2 on python 2.7, OSX.
I read a data-frame from a csv file like this:
```
import pandas as pd
data = pd.read_csv("my_csv_file.csv",sep='\t', skiprows=(0), header=(0))
```
The output of `data.dtypes` is:
```
name object
weight float64
ethnicity object
dtype: object
```
I was expecting string types for name, and ethnicity. But I found reasons here on SO on why they're "object" in newer pandas versions.
Now, I want to select rows based on ethnicity, for example:
```
data[data['ethnicity']=='Asian']
Out[3]:
Empty DataFrame
Columns: [name, weight, ethnicity]
Index: []
```
I get the same result with `data[data.ethnicity=='Asian']` or `data[data['ethnicity']=="Asian"]`.
But when I try the following:
```
data[data['ethnicity'].str.contains('Asian')].head(3)
```
I get the results I want.
However, I do not want to use "contains"; I would like to check for direct equality.
Please note that `data[data['ethnicity'].str=='Asian']` raises an error.
Am I doing something wrong? How to do this correctly?
|
2015/07/08
|
[
"https://Stackoverflow.com/questions/31303728",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/228177/"
] |
There is probably whitespace in your strings, for example,
```
data = pd.DataFrame({'ethnicity':[' Asian', ' Asian']})
data.loc[data['ethnicity'].str.contains('Asian'), 'ethnicity'].tolist()
# [' Asian', ' Asian']
print(data[data['ethnicity'].str.contains('Asian')])
```
yields
```
ethnicity
0 Asian
1 Asian
```
To strip the leading or trailing whitespace off the strings, you could use
```
data['ethnicity'] = data['ethnicity'].str.strip()
```
after which,
```
data.loc[data['ethnicity'] == 'Asian']
```
yields
```
ethnicity
0 Asian
1 Asian
```
|
You might try this:
```
data[data['ethnicity'].str.strip()=='Asian']
```
| 7,880
|
56,191,697
|
What is the `__weakrefoffset__` attribute for? What does the integer value signify?
```
>>> int.__weakrefoffset__
0
>>> class A:
... pass
...
>>> A.__weakrefoffset__
24
>>> type.__weakrefoffset__
368
>>> class B:
... __slots__ = ()
...
>>> B.__weakrefoffset__
0
```
All types seem to have this attribute, but there's no mention of it in the [docs](https://docs.python.org/3/search.html?q=__weakrefoffset__) nor in [PEP 205](https://www.python.org/dev/peps/pep-0205/).
|
2019/05/17
|
[
"https://Stackoverflow.com/questions/56191697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/674039/"
] |
In CPython, the layout of a simple type like `float` is three fields: its type pointer, its reference count, and its value (here, a `double`). For `list`, the value is three variables (the pointer to the separately-allocated array, its capacity, and the used size). These types do not support attributes or weak references to save space.
If a Python class inherits from one of these, it’s critical that it be able to support attributes, and yet there is no fixed offset at which to place the `__dict__` pointer (without wasting memory by putting it at a large unused offset). So the dictionary is stored wherever there is room, and its offset in bytes is [recorded in the type](https://stackoverflow.com/a/46591145/8586227). For base classes where the size is variable (like `tuple`, which includes all its pointers directly), there is special support for storing the `__dict__` pointer past the end of the variable-size section (*e.g.*, `type("",(tuple,),{}).__dictoffset__` is -8).
The situation with weak references is [exactly analogous](https://docs.python.org/3/extending/newtypes.html#weakref-support), except that there is no support for (subclasses of) variable-sized types. A `__weakrefoffset__` of 0 (which is conveniently the default for C static variables) indicates no support, since of course the object’s type is known to always be at the beginning of its layout.
|
It is referenced in [PEP 205](https://www.python.org/dev/peps/pep-0205/) here:
>
> Many built-in types will participate in the weak-reference management, and any extension type can elect to do so. **The type structure will contain an additional field which provides an offset into the instance structure which contains a list of weak reference structures.** If the value of the field is <= 0, the object does not participate. In this case, `weakref.ref()`, `<weakdict>.__setitem__()` and `.setdefault()`, and item assignment will raise `TypeError`. If the value of the field is > 0, a new weak reference can be generated and added to the list.
>
>
>
And you can see exactly how that works in the source code [here](https://github.com/python/cpython/blob/master/Objects/weakrefobject.c).
Check out the other answer for more why it exists.
| 7,881
|
47,013,083
|
I am trying to establish a WebSocket connection to a server and enter receive mode. Once the client starts receiving the data, it immediately closes the connection with the below exception:
```
webSoc_Received = await websocket.recv()
File "/root/envname/lib/python3.6/site-packages/websockets/protocol.py", line 319, in recv
raise ConnectionClosed(self.close_code, self.close_reason)
websockets.exceptions.ConnectionClosed: WebSocket connection is closed: code = 1007, no reason.
```
Client-side code snippet:
```
import asyncio
import websockets
async def connect_ws():
print("websockets.client module defines a simple WebSocket client API::::::")
async with websockets.client.connect(full_url,extra_headers=headers_conn1) as websocket:
print ("starting")
webSoc_Received = await websocket.recv()
print ("Ending")
Decode_data = zlib.decompress(webSoc_Received)
print(Decode_data)
asyncio.get_event_loop().run_until_complete(connect_ws())
```
Any thoughts on this?
|
2017/10/30
|
[
"https://Stackoverflow.com/questions/47013083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3913710/"
] |
You can use the hidden property or the `*ngIf` directive:
```html
<app-form></app-form>
<app-results *ngIf="isOn"></app-results>
<app-contact-primary *ngIf="!isOn"></app-contact-primary>
<app-contact-second *ngIf="!isOn"></app-contact-second>
<button pButton label="Contact" (click)="isOn= false">Contact</button>
<button pButton label="Results" (click)="isOn= true">Results</button>
```
|
Just make the variable true and false
```html
<button pButton label="Contact" (click)="isOn = true">Contact</button>
<button pButton label="Results" (click)="isOn = false">Results</button>
```
| 7,882
|
73,003,060
|
I made a simple example of taking a screenshot.
I debugged this, but there was an error.
Code:
```
import pyautogui
import PIL
pyautogui.screenshot('screenshot.png')
```
Error:
```
The Pillow package is required to use this function.
```
I've already installed Pillow and the version is 9.2.0 (latest version).
I'm using python 3.9.12
My Python version is compatible with the Pillow version.
I tried `pip install pillow`, `pip install pillow --upgrade`, and so on.
But it hasn't been fixed.
```
Requirement already satisfied: Pillow in d:\anaconda3\lib\site-packages (9.2.0)
```
Is there a way to fix it?
|
2022/07/16
|
[
"https://Stackoverflow.com/questions/73003060",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19560998/"
] |
I've just reinstalled everything from scratch and it's fixed!
|
Try this; it works perfectly on my side.
```
import pyautogui
myScreenshot = pyautogui.screenshot()
myScreenshot.save(r'D:\\image.png')
```
[](https://i.stack.imgur.com/7aGpM.png)
| 7,884
|
57,813,557
|
I would like to multiply every element in a numpy array by a constant raised to the power of the index of the array element without a for loop. I am using python 2.7.
I am new to this; I could use a for loop, but I am trying not to, for no real reason.
This for loop would solve the problem
```py
x = 3
for i in range(test_array.size):
test_array[i] = test_array[i] * x**i
```
|
2019/09/05
|
[
"https://Stackoverflow.com/questions/57813557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12027672/"
] |
What you want is something like this:
```
data = [
[{'Status': 'active', 'id': '0f1fb86da9c7ee380'}],
[{'Status': 'active', 'id': '0d6b330e4960c3382'}, {'Status': 'active', 'id': '033cfb634e595ccfa'}],
[{'Status': 'active', 'id': '0457f623cbb9f7c95'}],
[{'Status': 'active', 'id': '01b69eb6a3048f749'}, {'Status': 'active', 'id': '0f7ce44a9a5fc82f5'}, {'Status': 'active', 'id': '05417e161acf3ec5d'}],
[{'Status': 'active', 'id': '033cfb634e595ccfa'}, {'Status': 'active', 'id': '01eab32f9808acf19'}],
]
new_data = []
for l in data:
current_ids = []
for d in l:
current_ids.append(d["id"])
new_data.append(current_ids)
new_data
```
output:
```
[['0f1fb86da9c7ee380'],
['0d6b330e4960c3382', '033cfb634e595ccfa'],
['0457f623cbb9f7c95'],
['01b69eb6a3048f749', '0f7ce44a9a5fc82f5', '05417e161acf3ec5d'],
['033cfb634e595ccfa', '01eab32f9808acf19']]
```
|
You can also use a list comprehension to achieve this.
```
output = [[item["id"] for item in items] for items in data]
```
| 7,885
|
1,653,460
|
What would be the best way to handle lightweight crash recovery for my program?
I have a Python program that runs a number of test cases and the results are stored in a dictionary which serves as a cache. If I could save (and then restore) each item that is added to the dictionary, I could simply run the program again and the caching would provide suitable crash recovery.
1. You may assume that the keys and values in the dictionary are easily convertible to strings ie. using either str or the [pickle module](http://docs.python.org/library/pickle.html).
2. I want this to be completely cross platform - well at least as cross platform as Python is
3. I don't want to simply write out each value to a file and load it back in; my program might crash while I am writing the file
4. **UPDATE**: This is intended to be a lightweight module so a DBMS is out of the question.
5. **UPDATE**: Alex is correct in that I don't actually *need* to protect against crashes while writing out, but there are circumstances where I would like to be able to manually terminate it in a recoverable state.
6. **UPDATE** Added a highly limited solution using standard input below
|
2009/10/31
|
[
"https://Stackoverflow.com/questions/1653460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/165495/"
] |
There's no good way to guard against "your program crashing **while** writing a checkpoint to a file", but why should you worry so much about **that**?! What ELSE is your program doing at that time BESIDES "saving checkpoint to a file", that could easily cause it to crash?!
It's hard to beat `pickle` (or `cPickle`) for portability of serialization in Python, but, that's just about "turning your keys and values to strings". For saving key-value pairs (once stringified), few approaches are safer than just appending to a file (**don't** pickle to files if your crashes are far, far more frequent than normal, as you suggest they are).
If your environment is incredibly crash-prone for whatever reason (*very* cheap HW?-), just make sure you close the file (and fflush if the OS is also crash-prone;-), then reopen it for append. This way, worst that can happen is that the very latest append will be incomplete (due to a crash in the middle of things) -- then you just catch the exception raised by unpickling that incomplete record and redo only the things that weren't saved (because they weren't completed due to a crash, OR because they were completed but not fully saved due to a crash, comes to much the same thing in the end).
If you have the option of checkpointing to a database engine (instead of just doing so to files), consider it seriously! The DB engine will keep transaction logs and ensure ACID properties, making your application-side programming much easier IF you can count on that!-)
|
The pickle module supports serializing objects to a file (and loading from file):
<http://docs.python.org/library/pickle.html>
| 7,888
|
34,460,369
|
I had a bunch of bash scripts in a directory that I "backed up" doing `$ tail -n +1 -- *.sh`
The output of that tail is something like:
```
==> do_stuff.sh <==
#! /bin/bash
cd ~/my_dir
source ~/my_dir/bin/activate
python scripts/do_stuff.py
==> do_more_stuff.sh <==
#! /bin/bash
cd ~/my_dir
python scripts/do_more_stuff.py
```
These are all fairly simple scripts with 2-10 lines.
Given the output of that `tail`, I want to recreate all of the above files with the same content.
That is, I'm looking for a command that can ingest the above text and create `do_stuff.sh` and `do_more_stuff.sh` with the appropriate content.
This is more of a one-off task so I don't really need anything robust and I believe there are no big edge cases given files are simple (e.g none of the files actually contain `==>` in them).
I started with trying to come up with a matching regex and it will probably look something like this `(==>.*\.sh <==)(.*)(==>.*\.sh <==)`, but I'm stuck into actually getting it to capture filename, content and output to file.
Any ideas?
|
2015/12/25
|
[
"https://Stackoverflow.com/questions/34460369",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/845169/"
] |
Presume your backup file is named backup.txt
```
perl -ne "if (/==> (\S+) <==/){open OUT,'>',$1;next}print OUT $_" backup.txt
```
The version above is for Windows.
The fixed version on \*nix:
```
perl -ne 'if (/==> (\S+) <==/){open OUT,">",$1;next}print OUT $_' backup.txt
```
|
```
#!/bin/bash
while read -r line; do
if [[ $line =~ ^==\>[[:space:]](.*)[[:space:]]\<==$ ]]; then
out="${BASH_REMATCH[1]}"
continue
fi
printf "%s\n" "$line" >> "$out"
done < backup.txt
```
Drawback: extra blank line at the end of every created file except the last one.
| 7,895
|
14,075,337
|
I have a csv file with date, time., price, mag, signal.
62035 rows; there are 42 times of day associated with each unique date in the file.
For each date, when there is an 'S' in the signal column, append the corresponding price at the time the 'S' occurred. Below is my attempt.
```
from pandas import *
from numpy import *
from io import *
from os import *
from sys import *
DF1 = read_csv('___.csv')
idf=DF1.set_index(['date','time','price'],inplace=True)
sStore=[]
for i in idf.index[i][0]:
sStore.append([idf.index[j][2] for j in idf[j][1] if idf['signal']=='S'])
sStore.head()
```
```
Traceback (most recent call last)
<ipython-input-7-8769220929e4> in <module>()
1 sStore=[]
2
----> 3 for time in idf.index[i][0]:
4
5 sStore.append([idf.index[j][2] for j in idf[j][1] if idf['signal']=='S'])
NameError: name 'i' is not defined
```
I do not understand why the i index is not permitted here. Thanks.
I also think it's strange that:
`idf.index.levels[0]` will show the dates "not parsed", as they are in the file but out of order, despite `parse_dates=True` being passed as an argument in `set_index`.
I bring this up since I was thinking of side swiping the problem with something like:
```
for i in idf.index.levels[0]:
sStore.append([idf.index[j][2] for j in idf.index.levels[1] if idf['signal']=='S'])
sStore.head()
```
**My edit 12/30/2012 based on DSM's comment below:**
I would like to use your idea to get the P&L, as I commented below. Where if S!=B, for any given date, we difference using the closing time, 1620.
```
v=[df["signal"]=="S"]
t=[df["time"]=="1620"]
u=[df["signal"]!="S"]
df["price"][[v and (u and t)]]
```
That is, "give me the price at 1620; (even when it doesn't give a "sell signal", S) so that I can diff. with the "extra B's"--for the special case where B>S. This ignores the symmetric concern (where S>B) but for now I want to understand this logical issue.
On traceback, this expression gives:
```
ValueError: boolean index array should have 1 dimension
```
Note that in order to invoke `df["time"]` **I do not use set_index here**. Trying the union operator | gives:
```
TypeError: unsupported operand type(s) for |: 'list' and 'list'
```
**Looking at Max Fellows's approach**,
@Max Fellows
The point is to close out the positions at the end of the day; so we need to capture to price at the close to "unload" all those B, S which were accumulated; but didn't net each other out.
If I say:
```
filterFunc1 = lambda row: row["signal"] == "S" and ([row["signal"] != "S"][row["price"]=="1620"])
filterFunc2 =lambda row: ([row["price"]=="1620"][row["signal"] != "S"])
filterFunc=filterFunc1 and filterFunc2
filteredData = itertools.ifilter(filterFunc, reader)
```
On traceback:
```
IndexError: list index out of range
```
|
2012/12/28
|
[
"https://Stackoverflow.com/questions/14075337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1374969/"
] |
Try something like this:
```
for i in range(len(idf.index)):
value = idf.index[i][0]
```
The same goes for the iteration with the `j` index variable. As has been pointed out, you can't reference the iteration index in the expression to be iterated; besides, you need to perform a very specific iteration (traversing over a column in a matrix), and Python's default iterators won't work "out of the box" for this, so custom index handling is needed here.
|
It's because `i` is not yet defined, just like the error message says.
In this line:
```
for i in idf.index[i][0]:
```
You are telling the Python interpreter to iterate over all the values yielded by the list returning from the expression `idf.index[i][0]` but you have not yet defined what `i` is (although you are attempting to set each item in the list to the variable `i` as well).
The way the Python `for ... in ...` loop works is that it takes the right most component and asks for the `next` item from the iterator. It then assigns the value yielded by the call to the variable name provided on the left hand side.
| 7,896
|
70,352,477
|
I am wondering if it is possible to do grouping of lists in python in a simple way without importing any libraries.
An example of input could be:
```py
a="0 0 1 2 1"
```
and output
```
[(0,0),(1,1),(2)]
```
I am thinking something like the implementation of itertools groupby
```py
class groupby:
# [k for k, g in groupby('AAAABBBCCDAABBB')] --> A B C D A B
# [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D
def __init__(self, iterable, key=None):
if key is None:
key = lambda x: x
self.keyfunc = key
self.it = iter(iterable)
self.tgtkey = self.currkey = self.currvalue = object()
def __iter__(self):
return self
def __next__(self):
self.id = object()
while self.currkey == self.tgtkey:
self.currvalue = next(self.it) # Exit on StopIteration
self.currkey = self.keyfunc(self.currvalue)
self.tgtkey = self.currkey
return (self.currkey, self._grouper(self.tgtkey, self.id))
def _grouper(self, tgtkey, id):
while self.id is id and self.currkey == tgtkey:
yield self.currvalue
try:
self.currvalue = next(self.it)
except StopIteration:
return
self.currkey = self.keyfunc(self.currvalue)
```
but shorter
|
2021/12/14
|
[
"https://Stackoverflow.com/questions/70352477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12868928/"
] |
I just realized how to do it.
```py
{k:[v for v in a.split() if v == k] for k in set(a.split())}
```
or
```py
[tuple(v for v in a.split() if v == k) for k in set(a.split())]
```
However, this solution cannot be used on dictionaries because dicts are not hashable, so maybe that's one of the reasons why the itertools implementation is so much more complicated.
|
How about using `a.split(' ')` and then a for loop to do that?
| 7,901
|
63,315,233
|
If I have two matrices a and b, is there any function with which I can find the matrix x that, when dot-multiplied by a, makes b? Looking for Python solutions, for matrices in the form of numpy arrays.
|
2020/08/08
|
[
"https://Stackoverflow.com/questions/63315233",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12818944/"
] |
This problem of finding X such that `A*X=B` is equivalent to searching for the "inverse of A", i.e. a matrix such that `X = Ainverse * B`.
For information, in math `Ainverse` is written `A^(-1)` ("A to the power -1", but you can say "A inverse" instead).
In numpy, this is a builtin function to find the inverse of a matrix `a`:
```
import numpy as np
ainv = np.linalg.inv(a)
```
see for instance [this tutorial](https://scriptverse.academy/tutorials/python-matrix-inverse.html) for explanations.
You need to be aware that some matrices are not "invertible", most obvious examples (roughly) are:
* matrices that are not square
* matrices that represent a projection
numpy can still approximate a solution in certain cases.
|
if `A` is a full rank, square matrix
```
import numpy as np
from numpy.linalg import inv
X = inv(A) @ B
```
if not, then such a matrix does not exist, but we can approximate it
```
import numpy as np
from numpy.linalg import inv
X = inv(A.T @ A) @ A.T @ B
```
| 7,903
|
3,292,643
|
Which is the most pythonic way to convert a list of tuples to string?
I have:
```
[(1,2), (3,4)]
```
and I want:
```
"(1,2), (3,4)"
```
My solution to this has been:
```
l=[(1,2),(3,4)]
s=""
for t in l:
s += "(%s,%s)," % t
s = s[:-1]
```
Is there a more pythonic way to do this?
|
2010/07/20
|
[
"https://Stackoverflow.com/questions/3292643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170912/"
] |
How about:
```
>>> tups = [(1, 2), (3, 4)]
>>> ', '.join(map(str, tups))
'(1, 2), (3, 4)'
```
|
The most pythonic solution is
```
tuples = [(1, 2), (3, 4)]
tuple_strings = ['(%s, %s)' % t for t in tuples]
result = ', '.join(tuple_strings)
```
| 7,904
|
14,897,426
|
My XML file test.xml contains the following tags
```
<?xml version="1.0" encoding="ISO-8859-1"?>
<AppName>
<author>Subho Halder</author>
<description> Description</description>
<date>2012-11-06</date>
<out>Output 1</out>
<out>Output 2</out>
<out>Output 3</out>
</AppName>
```
I want to count the number of times the `<out>` tag has occurred.
This is my python code so far which I have written:
```
from xml.dom.minidom import parseString
file = open('test.xml','r')
data = file.read()
file.close()
dom = parseString(data)
if (len(dom.getElementsByTagName('author'))!=0):
xmlTag = dom.getElementsByTagName('author')[0].toxml()
author = xmlTag.replace('<author>','').replace('</author>','')
print author
```
Can someone help me out here?
|
2013/02/15
|
[
"https://Stackoverflow.com/questions/14897426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/122840/"
] |
Try [`len(dom.getElementsByTagName('out'))`](http://docs.python.org/2/library/xml.dom.html#document-objects)
```
from xml.dom.minidom import parseString
file = open('test.xml','r')
data = file.read()
file.close()
dom = parseString(data)
print len(dom.getElementsByTagName('out'))
```
gives
```
3
```
|
I would recommend using lxml
```
import lxml.etree
doc = lxml.etree.parse('test.xml')
count = doc.xpath('count(//out)')
```
You can look up more information on XPATH [here](http://infohost.nmt.edu/tcc/help/pubs/pylxml/web/xpath.html).
| 7,914
|
56,911,541
|
I am new to python/flask so need your help.
I have a multiselect dropdown, like below.
I have a multiselect in my HTML file like this:
```
<select multiple id="mymultiselect" name="mymultiselect">
<option value="1">India</option>
<option value="2">USA</option>
<option value="3">Japanthing</option>
</select>
```
As I have multiple buttons in the form tag, and each points to a different route, I am using the GET method instead of POST.
But in the Python file I am not able to get the value from `request.args.get('mymultiselect')`; it's getting None.
I tried `request.args.getlist('mymultiselect')` as well, but no luck.
Could you please help me with the same?
|
2019/07/06
|
[
"https://Stackoverflow.com/questions/56911541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11746629/"
] |
One way would be to create a pipe in the parent process. Each child process closes the write end of the pipe, then calls `read` (which blocks). When the parent process is ready, it closes both ends of its pipe. This makes all the children return from `read` and they can now close their read end of the pipe and proceed with their code.
|
For situations like this, I sometimes use [atomic](https://en.cppreference.com/w/c/atomic) operations. You can use atomic flags to notify sub-processes, but that creates busy-waiting, so you should use it only in very special cases.
Another method is to create an event with something like `pthread_cond_broadcast()` and `pthread_cond_timedwait()`. All sub-processes wait on your condition. When you are ready to go, signal it from the parent process.
| 7,916
|
45,734,960
|
I want to write my own small mailserver application in python with **aiosmtpd**
a) for educational purposes, to better understand mailservers
b) to realize my own features
So my question is: what is missing (besides aiosmtpd) for a **Mail Transfer Agent** that can send and receive emails to/from other full MTAs (gmail.com, yahoo.com, ...)?
I'm guessing:
1.) Of course a ***domain*** and static ip
2.) Valid ***certificate*** for this domain
...should be doable with Lets Encrypt
3.) ***Encryption***
...should be doable with SSL/Context/Starttls... with aiosmtpd itself
4.) ***Resolving*** MX DNS entries for outgoing emails!?
...should be doable with python library dnspython
5.) ***Error*** handling for SMTP communication errors, error replies from other MTAs, bouncing!?
6.) ***Queue*** for handling inbound and pending outbund emails!?
Are there any other ***"essential"*** features missing?
Of course i know, there are a lot more "advanced" features for a mailserver like spam checking, malware checking, certificate validation, blacklisting, rules, mailboxes and more...
Thanks for all hints!
---
EDIT:
Let me clarify what is in my mind:
I want to write a mailserver for a club. Its main purpose will be a mailing-list-server. There will be different lists for different groups of the club.
Lets say my domain is *myclub.org* then there will be for example *youth@myclub.org*, *trainer@myclub.org* and so on.
**Only members** will be allowed to use this mailserver and only the members will receive emails from this mailserver. No one else will be allowed to send emails to this mailserver nor will receive emails from it. The members email-addresses and their group(s) are stored in a database.
In the future i want to integrate some other useful features, for example:
* Auto-reminders
* A chatbot, where members can control services and request informations by email
What i don't need:
* User Mailboxes
* POP/IMAP access
* Webinterface
**Open relay issue**:
* I want to reject any [FROM] email address that is not in the members database during SMTP negotiation.
* I want to check the sending mailservers for a valid certificate.
* The number of emails/member/day will be limited.
* I'm not sure, if i really need spam detection for the incoming emails?
**Losing emails issue**:
I think I will need a "lightweight" retry mechanism. However, if an outgoing email can't be delivered after some retries, it will be dropped and only the administrator will be notified, not the sender. The members should not be bothered by email delivery issues. Is there any **Python library** that can **generate RFC 3464 compliant** error reply emails?
**Reboot issue**:
I'm not sure if i really need persistent storage for emails, that are not yet sent? In my use case, all the outgoing emails should be delivered usually within a few seconds (if no delivery problem occurs). Before a (planned) reboot i can check for an empty send queue.
|
2017/08/17
|
[
"https://Stackoverflow.com/questions/45734960",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6002171/"
] |
The most important thing about running your own SMTP server is that you **must not be an open relay**. That means you must not accept messages from strangers and relay them to any destination on the internet, since that would enable spammers to send spam through your SMTP server -- which would quickly get you blocked.
Thus, your server should
* relay from *authenticated* users/senders to remote destinations, or
* relay from strangers to *your own domains*.
Since your question talks about resolving MX records for outgoing email, I'm assuming you want your server to accept emails from authenticated users. Thus you need to consider how your users will authenticate themselves to the server. aiosmtpd currently has an [open pull request](https://github.com/aio-libs/aiosmtpd/pull/66) providing a basic SMTP AUTH implementation; you may use that, or you may implement your own (by subclassing `aiosmtpd.smtp.SMTP` and implementing the `smtp_AUTH()` method).
---
The second-most important thing about running your own SMTP server is that you **must not lose emails without notifying the sender**. When you accept an email from an authenticated user to be relayed to an external destination, you should let the user know (by sending an [RFC 3464 Delivery Status Notification](https://www.rfc-editor.org/rfc/rfc3464) via email) if the message is delayed or if it is not delivered at all.
You should not drop the email immediately if the remote destination fails to receive it; you should try again later and repeatedly try until you deem that you have tried for long enough. Postfix, for instance, waits 10 minutes before trying to deliver the email after the first delivery attempt fails, and then it waits 20 minutes if the second attempt fails, and so on until the message has been attempted delivered for a couple days.
You should also take care to allow the host running your mail server to be rebooted, meaning you should store queued messages on disk. For this you might be able to use the [mailbox module](https://docs.python.org/3/library/mailbox.html).
---
Of course, I haven't covered every little detail, but I think the above two points are the most important, and you didn't seem to cover them in your question.
|
You may consider the following features:
* Message threading
* Support for Delivery status
* Support for POP and IMAP protocols
* Supports for protocols such as RFC 2821 SMTP and RFC 2033 LMTP email message transport
* Support Multiple message tagging
* Support for PGP/MIME (RFC2015)
* Support list-reply
* Lets each user manage their own mail lists
* Supports control of message headers during composition
* Support for address groups
* Prevention of mailing list loops
* Junk mail control
| 7,917
|
34,147,515
|
When playing around with the Python interpreter, I stumbled upon this conflicting case regarding the `is` operator:
If the evaluation takes place in the function it returns `True`, if it is done outside it returns `False`.
```
>>> def func():
... a = 1000
... b = 1000
... return a is b
...
>>> a = 1000
>>> b = 1000
>>> a is b, func()
(False, True)
```
Since the `is` operator evaluates the `id()`'s for the objects involved, this means that `a` and `b` point to the same `int` instance when declared inside of function `func` but, on the contrary, they point to a different object when outside of it.
Why is this so?
---
**Note**: I am aware of the difference between identity (`is`) and equality (`==`) operations as described in [Understanding Python's "is" operator](https://stackoverflow.com/questions/13650293/understanding-pythons-is-operator). In addition, I'm also aware about the caching that is being performed by python for the integers in range `[-5, 256]` as described in ["is" operator behaves unexpectedly with integers](https://stackoverflow.com/questions/306313/pythons-is-operator-behaves-unexpectedly-with-integers).
This **isn't the case here** since the numbers are outside that range and **I do** want to evaluate identity and **not** equality.
|
2015/12/08
|
[
"https://Stackoverflow.com/questions/34147515",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4952130/"
] |
tl;dr:
------
As the [reference manual](https://docs.python.org/3.5/reference/executionmodel.html#structure-of-a-program) states:
>
> A block is a piece of Python program text that is executed as a unit.
> The following are blocks: a module, a function body, and a class definition.
> **Each command typed interactively is a block.**
>
>
>
This is why, in the case of a function, you have a **single** code block which contains a **single** object for the numeric literal
`1000`, so `id(a) == id(b)` will yield `True`.
In the second case, you have **two distinct code objects** each with their own different object for the literal `1000` so `id(a) != id(b)`.
Take note that this behavior doesn't manifest with `int` literals only, you'll get similar results with, for example, `float` literals (see [here](https://stackoverflow.com/questions/38834770/is-operator-gives-unexpected-results-on-floats/38835101#38835101)).
Of course, comparing objects (except for explicit `is None` tests ) should always be done with the equality operator `==` and *not* `is`.
*Everything stated here applies to the most popular implementation of Python, CPython. Other implementations might differ so no assumptions should be made when using them.*
---
Longer Answer:
--------------
To get a little clearer view and additionally verify this *seemingly odd* behaviour we can look directly in the [`code`](https://docs.python.org/3.5/reference/datamodel.html#the-standard-type-hierarchy) objects for each of these cases using the [`dis`](https://docs.python.org/3.5/library/dis.html) module.
**For the function `func`**:
Along with all other attributes, function objects also have a `__code__` attribute that allows you to peek into the compiled bytecode for that function. Using [`dis.code_info`](https://docs.python.org/3.5/library/dis.html#dis.code_info) we can get a nice pretty view of all stored attributes in a code object for a given function:
```
>>> print(dis.code_info(func))
Name: func
Filename: <stdin>
Argument count: 0
Kw-only arguments: 0
Number of locals: 2
Stack size: 2
Flags: OPTIMIZED, NEWLOCALS, NOFREE
Constants:
0: None
1: 1000
Variable names:
0: a
1: b
```
We're only interested in the `Constants` entry for function `func`. In it, we can see that we have two values, `None` (always present) and `1000`. We only have a **single** int instance that represents the constant `1000`. This is the value that `a` and `b` are going to be assigned to when the function is invoked.
Accessing this value is easy via `func.__code__.co_consts[1]` and so, another way to view our `a is b` evaluation in the function would be like so:
```
>>> id(func.__code__.co_consts[1]) == id(func.__code__.co_consts[1])
```
Which, of course, will evaluate to `True` because we're referring to the same object.
**For each interactive command:**
As noted previously, each interactive command is interpreted as a single code block: parsed, compiled and evaluated independently.
We can get the code objects for each command via the [`compile`](https://docs.python.org/3.5/library/functions.html#compile) built-in:
```
>>> com1 = compile("a=1000", filename="", mode="single")
>>> com2 = compile("b=1000", filename="", mode="single")
```
For each assignment statement, we will get a similar looking code object which looks like the following:
```
>>> print(dis.code_info(com1))
Name: <module>
Filename:
Argument count: 0
Kw-only arguments: 0
Number of locals: 0
Stack size: 1
Flags: NOFREE
Constants:
0: 1000
1: None
Names:
0: a
```
The same command for `com2` looks the same but *has a fundamental difference*: each of the code objects `com1` and `com2` have different int instances representing the literal `1000`. This is why, in this case, when we do `a is b` via the `co_consts` argument, we actually get:
```
>>> id(com1.co_consts[0]) == id(com2.co_consts[0])
False
```
Which agrees with what we actually got.
*Different code objects, different contents.*
---
**Note:** I was somewhat curious as to how exactly this happens in the source code and after digging through it I believe I finally found it.
During the compilation phase, the [**`co_consts`**](https://github.com/python/cpython/blob/master/Python/compile.c#L595) attribute is represented by a dictionary object. In [`compile.c`](https://github.com/python/cpython/blob/master/Python/compile.c) we can actually see the initialization:
```
/* snippet for brevity */
u->u_lineno = 0;
u->u_col_offset = 0;
u->u_lineno_set = 0;
u->u_consts = PyDict_New();
/* snippet for brevity */
```
During compilation this is checked for already existing constants. See [@Raymond Hettinger's answer below](https://stackoverflow.com/a/39325641/4952130) for a bit more on this.
---
### Caveats:
* Chained statements will evaluate to an identity check of `True`
It should be more clear now why exactly the following evaluates to `True`:
```
>>> a = 1000; b = 1000;
>>> a is b
```
In this case, by chaining the two assignment commands together we tell the interpreter to compile these **together**. As in the case for the function object, only one object for the literal `1000` will be created resulting in a `True` value when evaluated.
* Execution on a module level yields `True` again:
As previously mentioned, the reference manual states that:
>
> ... The following are blocks: **a module** ...
>
>
>
So the same premise applies: we will have a single code object (for the module) and so, as a result, single values stored for each different literal.
* The same **doesn't** apply for **mutable** objects:
Meaning that unless we explicitly initialize to the same mutable object (for example with `a = b = []`), the identity of the objects will never be equal, for example:
```
a = []; b = []
a is b # always evaluates to False
```
Again, in [the documentation](https://docs.python.org/3/reference/datamodel.html#objects-values-and-types), this is specified:
>
> after a = 1; b = 1, a and b may or may not refer to the same object with the value one, depending on the implementation, but after c = []; d = [], c and d are guaranteed to refer to two different, unique, newly created empty lists.
>
>
>
|
At the interactive prompt, entries are [compiled in a *single* mode](https://docs.python.org/3/library/functions.html#compile) which processes one complete statement at a time. The compiler itself (in [Python/compile.c](https://hg.python.org/cpython/file/tip/Python/compile.c)) tracks the constants in a dictionary called [*u\_consts*](https://hg.python.org/cpython/file/tip/Python/compile.c#l114) that maps each constant object to its index.
In the [*compiler\_add\_o()*](https://hg.python.org/cpython/file/tip/Python/compile.c#l1099) function, you see that before adding a new constant (and incrementing the index), the dict is checked to see whether the constant object and index already exist. If so, they are reused.
In short, that means that repeated constants in one statement (such as in your function definition) are folded into one singleton. In contrast, your `a = 1000` and `b = 1000` are two separate statements, so no folding takes place.
FWIW, this is all just a CPython implementation detail (i.e. not guaranteed by the language). This is why the references given here are to the C source code rather than the language specification which makes no guarantees on the subject.
Hope you enjoyed this insight into how CPython works under the hood :-)
| 7,920
|
23,819,117
|
I use python 3.4 and I can format strings in two ways:
```
print("%d %d" %(1, 2))
```
and
```
print("{:d} {:d}".format(1, 2))
```
In the [documentation](https://docs.python.org/2/library/string.html) they show examples only using 'format'. Does it mean that using '%' is not good, or does it not matter which version to use?
|
2014/05/23
|
[
"https://Stackoverflow.com/questions/23819117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654650/"
] |
Quoting from the [official documentation](https://docs.python.org/2/library/stdtypes.html#str.format),
>
> This method of string formatting is the new standard in Python 3, and should be preferred to the % formatting described in String Formatting Operations in new code.
>
>
>
So, `format` is the recommended way to do, going forward.
|
In addition to the recommendations on the official site, the format() method is more flexible, powerful, and readable than the '%' operator.
For example:
```
>>> '{2}, {1}, {0}'.format(*'abc')
'c, b, a'
>>> coord = {'latitude': '37.24N', 'longitude': '-115.81W'}
>>> 'Coordinates: {latitude}, {longitude}'.format(**coord)
'Coordinates: 37.24N, -115.81W'
>>> "Units destroyed: {players[0]}".format(players = [1, 2, 3])
'Units destroyed: 1'
```
and more, and more, and more... It's hard to do something similar with the '%' operator.
| 7,921
|
61,417,765
|
Disclaimer. I am new to python and trying to learn. I have a list of dictionaries containing address information I would like to iterate over and then pass into a function as arguments.
`print(data)`
`[{'firstName': 'John', 'lastName': 'Smith', 'address': '123 Lane', 'country': 'United States', 'state': 'TX', 'city': 'Springfield', 'zip': '12345'}, {'firstName': 'Mary', 'lastName': 'Smith', 'address': '321 Lanet', 'country': 'United States', 'state': 'Washington', 'city': 'Springfield', 'zip': '54321'}]`
I iterate over the list and attempt to pass in the values, but the values are passed as a list instead of individually. I'm not sure how to correct this. I'm still grasping arguments and keyword arguments. Any help and guidance is appreciated.
```
from usps import USPSApi, Address
input_name = [li['lastName'] for li in data]
input_address = [li['address'] for li in data]
input_city = [li['city'] for li in data]
input_state = [li['state'] for li in data]
input_zip = [li['zip'] for li in data]
input_country = [li['country'] for li in data]
address = Address(
name = input_name,
address_1= input_address,
city= input_city,
state=input_state,
zipcode=input_zip
)
usps = USPSApi('------', test=True)
validation = usps.validate_address(address)
data_results = validation.result
print(data_results)
```
|
2020/04/24
|
[
"https://Stackoverflow.com/questions/61417765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3950634/"
] |
You already have all the logic to process a *single* data point, just expand it to *multiple* data points as shown below using a loop.
```
from usps import USPSApi, Address
for item in data:
kwargs = dict()
kwargs['name'] = item['lastName']
kwargs['address_1'] = item['address']
kwargs['city'] = item['city']
kwargs['state'] = item['state']
kwargs['zipcode'] = item['zip']
address = Address(**kwargs)
usps = USPSApi('------', test=True)
validation = usps.validate_address(address)
data_results = validation.result
print(data_results)
```
Without syntactic sugar it becomes
```
for item in data:
name = item['lastName']
address_ = item['address']
city = item['city']
state = item['state']
zip_ = item['zip']
address = Address(
name=name,
address_1=address_,
city=city,
state=state,
zipcode=zip_)
```
|
If the intention is to get the keys' values into variables:
```
d_data = [{'firstName': 'John', 'lastName': 'Smith', 'address': '123 Lane', 'country': 'United States', 'state': 'TX', 'city': 'Springfield', 'zip': '12345'}, {'firstName': 'Mary', 'lastName': 'Smith', 'address': '321 Lanet', 'country': 'United States', 'state': 'Washington', 'city': 'Springfield', 'zip': '54321'}]
input_name = d_data[0]['lastName']
input_address = d_data[0]['address']
input_city = d_data[0]['city']
input_state = d_data[0]['state']
input_zip = d_data[0]['zip']
input_country = d_data[0]['country']
print(input_name)
print(input_address)
print(input_city)
print(input_zip)
print(input_country)
```
**OUTPUT:**
```
Smith
123 Lane
Springfield
12345
United States
```
| 7,922
|
55,239,297
|
I have never done deployment in my career as a software programmer.
I was given this domain name on Namecheap along with the server disk. Now I've designed a Django app and am trying to deploy it on the server, but I ran into the problems stated below.
```
[ E 2019-03-19 06:23:19.7356 598863/T2n age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /home/username/IOT: The application process exited prematurely.
App 644163 output: File "/home/username/IOT/passenger_wsgi.py", line 1, in <module>
App 644163 output: File "/home/username/virtualenv/IOT/3.7/lib64/python3.7/imp.py", line 171, in load_source
```
**Edited:** The software supporting WSGI here is Phusion Passenger; you can read more at www.phusionpassenger.com
this is my passenger\_wsgi.py:
```
from myproject.wsgi import application
```
I had tried several tutorials:
1. <https://www.youtube.com/watch?v=ffqMZ5IcmSY&ab_channel=iFastNetLtd.InternetServices>
2. <https://smartlazycoding.com/django-tutorial/deploy-a-django-website-to-a2-hosting>
3. <https://hostpresto.com/community/tutorials/how-to-setup-a-python-django-website-on-hostpresto/>
4. <https://www.helloworldhost.com/knowledgebase/9/Deploy-Django-App-on-cPanel-HelloWorldHost.html>
5. [how to install django on cpanel](https://stackoverflow.com/questions/15625151/how-to-install-django-on-cpanel)
I would very much appreciate it if you could help.
|
2019/03/19
|
[
"https://Stackoverflow.com/questions/55239297",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10865416/"
] |
When you set up a Python app in cPanel, you specify the folder where you have set up the app. That will be the folder that contains your passenger\_wsgi.py file. You have to upload your Django project to that same folder. To make sure you have uploaded to the right directory, just do this simple check: your `manage.py` and `passenger_wsgi.py` should be in the same folder. Now edit your `passenger_wsgi.py` and replace everything with the following code:
```
from myapp.wsgi import application
```
After this, do not forget to restart the python app. I have written a step by step guide for deploying django app on shared hosting using cpanel. Check it [here](https://blog.umersoftwares.com/2019/09/django-shared-hosting.html).
|
The answer is very simple: my server uses a program named Passenger; see the official website for more information: <https://www.phusionpassenger.com/>
Now, the error was very simple: **Passenger can't find my application**. All I did was move my project and app folders to the same level as passenger\_wsgi.py, and it works like a charm.
| 7,923
|
62,903,138
|
Here is my attempt:
```
int* globalvar = new int[8];
void cpp_init(){
for (int i = 0; i < 8; i++)
globalvar[i] = 0;
}
void writeAtIndex(int index, int value){
globalvar[index] = value;
}
int accessIndex(int index){
return globalvar[index];
}
BOOST_PYTHON_MODULE(MpUtils){
def("cpp_init", &cpp_init);
def("writeAtIndex", &writeAtIndex);
def("accessIndex", &accessIndex);
}
```
and in the python file
```
def do_stuff(index):
writeAtIndex(index, randint(1, 100))
time.sleep(index/10)
print([accessIndex(i) for i in range(8)])
if __name__ == '__main__':
freeze_support()
processes = []
cpp_init()
for i in range(0, 10):
processes.append( Process( target=do_stuff, args=(i,) ) )
for process in processes:
process.start()
for process in processes:
process.join()
```
And the output is this:
```
[48, 0, 0, 0, 0, 0, 0, 0]
[0, 23, 0, 0, 0, 0, 0, 0]
[0, 0, 88, 0, 0, 0, 0, 0]
[0, 0, 0, 9, 0, 0, 0, 0]
[0, 0, 0, 0, 18, 0, 0, 0]
[0, 0, 0, 0, 0, 59, 0, 0]
[0, 0, 0, 0, 0, 0, 12, 0]
[0, 0, 0, 0, 0, 0, 0, 26]
```
Could someone explain why this is not working? I tried printing globalvar and it's always the same value. There shouldn't be any race conditions, as waiting 0.1 to 0.8 seconds should be more than enough time for the computer to write something. Shouldn't the C++ side be accessing the pointer's memory location directly?
Thanks
|
2020/07/14
|
[
"https://Stackoverflow.com/questions/62903138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11627201/"
] |
Processes can generally access only their own memory space.
You can possibly use the [shared\_memory](https://docs.python.org/3/library/multiprocessing.shared_memory.html#multiprocessing.shared_memory.SharedMemory) module of multiprocessing to share the same array across processes. See example in the linked page.
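A minimal sketch with `shared_memory` (Python 3.8+), keeping the array on the Python side instead of inside the C++ extension; the names here are made up:
```
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory
import numpy as np

def do_stuff(shm_name, index):
    shm = SharedMemory(name=shm_name)                      # attach to the existing block
    arr = np.ndarray((8,), dtype=np.int64, buffer=shm.buf)
    arr[index] = index + 1                                 # visible to every process
    shm.close()

if __name__ == '__main__':
    shm = SharedMemory(create=True, size=8 * 8)            # 8 int64 slots
    arr = np.ndarray((8,), dtype=np.int64, buffer=shm.buf)
    arr[:] = 0
    procs = [Process(target=do_stuff, args=(shm.name, i)) for i in range(8)]
    for p in procs: p.start()
    for p in procs: p.join()
    print(arr[:])                                          # e.g. [1 2 3 4 5 6 7 8]
    shm.close()
    shm.unlink()
```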
|
The following loop will create a list of processes with consecutive first `args`, from `0` to `9`. (First process has `0`, second one `1` and so on.)
```
for i in range(0, 10):
processes.append( Process( target=do_stuff, args=(i,) ) )
```
Each process writes a single number, so each call to `writeAtIndex` sets only one element:
```
def do_stuff(index):
writeAtIndex(index, randint(1, 100))
```
Function `writeAtIndex` takes one `int`, not a list:
```
void writeAtIndex(int index, int value)
```
That's why each line of your output has a value in exactly one position.
And as said by @Voo and in the [docs](https://docs.python.org/3/library/multiprocessing.html):
>
> The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads.
>
>
>
>
> Process objects represent activity that is run in a separate process.
>
>
>
So there is a `globalvar` for each process.
If you want to share data between threads you can use the [threading](https://docs.python.org/3/library/threading.html#module-threading) module. There are details you should read though:
>
> due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing or concurrent.futures.ProcessPoolExecutor.
>
>
>
| 7,924
|
33,697,379
|
Now this could be me being **very** stupid here but please see the below code.
I am trying to work out what percentage of my carb goal I've already consumed at the point I run the script. I get the totals and store them in `carbsConsumed` and `carbsGoal`. `carbsPercent` then calculates the percentage consumed. However, `carbsPercent` returns 0 each time. Any thoughts?
```
#!/usr/bin/env python2.7
import myfitnesspal
from datetime import datetime
username = 'someusername'
password = 'somepassword'
date = datetime.now()
client = myfitnesspal.Client(username, password)
day = client.get_date(date.year, date.month, date.day)
#day = client.get_date(2015,11,12)
carbs = 'carbohydrates'
carbsConsumed = day.totals[carbs]
carbsGoal = day.goals[carbs]
carbsPercent = (carbsConsumed / carbsGoal) * 100
print 'Carbs consumed: ' + str(carbsConsumed)
print 'Carbs goal: ' + str(carbsGoal)
print 'Percentage consumed: ' + str(carbsPercent)
```
|
2015/11/13
|
[
"https://Stackoverflow.com/questions/33697379",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4461239/"
] |
Try this:
```
carbsPercent = (float(carbsConsumed) / carbsGoal) * 100
```
The problem is that in Python 2.7, the default division mode is integer division, so 1000/1200 = 0. The way you force Python to change that is to cast at least one operand IN THE DIVISION operation to a float.
|
For easily portable code, in `python2`, see <https://stackoverflow.com/a/10768737/610569>:
```
from __future__ import division
carbsPercent = (carbsConsumed / carbsGoal) * 100
```
E.g.
```
$ python
>>> from __future__ import division
>>> 6 / 5
1.2
$ python3
>>> 6 / 5
1.2
```
| 7,925
|
46,520,136
|
Is there any way to run a Python function and a Java function in parallel in one app, and get the result of each function for further processing?
|
2017/10/02
|
[
"https://Stackoverflow.com/questions/46520136",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6700881/"
] |
There are at least three ways to achieve that.
a) You could use `java.lang.Runtime` (as in `Runtime.getRuntime().exec(...)`) to launch an external process (from Java side), the external process being your Python script.
b) You could do the same as a), just using Python as launcher.
c) You could use some Python-Java binding, and from Java side use a separate `Thread` to run your Python code.
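A minimal sketch of option b), assuming a compiled Java class `Worker` whose `main()` prints its result to stdout (the class name is hypothetical):
```
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_java():
    # blocks until the JVM exits; its stdout is the "return value"
    out = subprocess.run(["java", "Worker"], capture_output=True, text=True, check=True)
    return out.stdout.strip()

def run_python():
    return sum(range(1_000_000))      # any local Python work

with ThreadPoolExecutor() as pool:
    java_future = pool.submit(run_java)
    py_result = run_python()          # runs while the JVM works in parallel
    java_result = java_future.result()

print(py_result, java_result)         # combine both results here
```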
|
You should probably look at **[jython](https://wiki.python.org/jython/WhyJython)**. It supports both `Java` and `Python`.
| 7,926
|
40,239,866
|
I'm trying to list the containers under an azure account using the python sdk - why do I get the following?
```
>>> azure.storage.blob.baseblobservice.BaseBlobService(account_name='x', account_key='x').list_containers()
>>> <azure.storage.models.ListGenerator at 0x7f7cf935fa58>
```
Surely the above is a call to the function and not a reference to the function itself.
|
2016/10/25
|
[
"https://Stackoverflow.com/questions/40239866",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1693831/"
] |
You get that because, according to the [source code](https://github.com/Azure/azure-storage-python/blob/54e60c8c029f41d2573233a70021c4abf77ce67c/azure-storage-blob/azure/storage/blob/baseblobservice.py#L609), it returns `ListGenerator(resp, self._list_containers, (), kwargs)`.
You can access what you want as follows:
python2:
```
>>> from azure.storage.blob.baseblobservice import BaseBlobService
>>> blob_service = BaseBlobService(account_name='x', account_key='x')
>>> containers = blob_service.list_containers()
>>> for c in containers:
print c.name
```
python3
```
>>> from azure.storage.blob.baseblobservice import BaseBlobService
>>> blob_service = BaseBlobService(account_name='x', account_key='x')
>>> containers = blob_service.list_containers()
>>> for c in containers:
print(c.name)
```
|
for python 3 and more recent distribution of `azure` libraries, you can do:
```
from azure.storage.blob import BlockBlobService
block_blob_service = BlockBlobService(account_name=account_name, account_key=account_key)
containers = block_blob_service.list_containers()
for c in containers:
print(c.name)
```
| 7,927
|
67,401,326
|
I have a JSON file as below. Using Python, I want to fetch the record where source\_name = 'Abc'. How can I achieve it?
JSON file: file.json
```
[{
"source_name" :"Abc",
"target_table" : "TABLE1",
"schema": "col1 string, col2 string"
},
{
"source_name" :"xyz",
"target_table" : "TABLE2",
"schema": "col3 string, col4 string"
}
]
```
Final output:
```
{
"source_name" :"Abc",
"target_table" : "TABLE1",
"schema": "col1 string, col2 string"
}
```
|
2021/05/05
|
[
"https://Stackoverflow.com/questions/67401326",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4050708/"
] |
You can use a list comprehension, which is considered more pythonic since the initial data is a list of dictionaries.
```
import json

with open('file.json') as json_file:
    data = json.load(json_file)

# using a list comprehension
result = [element for element in data if element['source_name'] == "Abc"]

# without using a list comprehension
for element in data:
    if element['source_name'] == "Abc":
        print(element)
```
|
Try this.
```
data = [{
"source_name" :"Abc",
"target_table" : "TABLE1",
"schema": "col1 string, col2 string"
},
{
"source_name" :"xyz",
"target_table" : "TABLE2",
"schema": "col3 string, col4 string"
}
]
l = len(data)
n = 0
while n < l:
    if data[n]["source_name"] == "Abc":
        print(data[n])
    n += 1
```
final output
```
{
'source_name': 'Abc',
'target_table': 'TABLE1',
'schema': 'col1 string, col2 string'
}
```
| 7,930
|
4,128,462
|
I just started working in an environment that uses multiple programming languages, doesn't have source control and doesn't have automated deploys.
I am interested in recommending VSS for source control and using CruiseControl.net for automated deploys but I have only used CC.NET with ASP.NET applications. Is it possible to use CC.NET to deploy python, php, ASP.NET and ??? apps all from the same instance?
|
2010/11/08
|
[
"https://Stackoverflow.com/questions/4128462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/226897/"
] |
Yes you can.
CC.NET is written in .NET but can handle any project. Your project language does not matter; you can still use Batch, PowerShell, NAnt or MSBuild scripts. You may also use CruiseControl or Hudson, as you like.
As for the source control provider, I would prefer svn (or even git), but that's more a matter of habit: from my point of view VSS is too tied to VS, and I don't like the lock-on-checkout-by-default behaviour.
|
VSS is [unsafe for any source](http://www.developsense.com/testing/VSSDefects.html) and is damn near useless outside visual studio. And cruise control is painful to learn and make work at best. Your heart is in the right place, but you probably want slightly different technical solutions. For SCM, you probably want either subversion (cf [VisualSVN Server](http://www.visualsvn.com/server/)) or Mercurial ([on IIS7](http://www.jeremyskinner.co.uk/mercurial-on-iis7/)). For continuious integration, I'd look at TeamCity first or Hudson second. Either of which is vastly superior to CCNet in terms of ease of use.
| 7,931
|
39,615,436
|
When using the LDA model, I get different topics each time and I want to replicate the same set. I have searched for similar questions on Google, such as [this](https://groups.google.com/forum/#!topic/gensim/s1EiOUsqT8s).
I fixed the seed as shown in the article with `numpy.random.seed(1000)`, but it doesn't work. I read `ldamodel.py` and found the code below:
```
def get_random_state(seed):
"""
Turn seed into a np.random.RandomState instance.
Method originally from maciejkula/glove-python, and written by @joshloyal
"""
if seed is None or seed is numpy.random:
return numpy.random.mtrand._rand
if isinstance(seed, (numbers.Integral, numpy.integer)):
return numpy.random.RandomState(seed)
if isinstance(seed, numpy.random.RandomState):
return seed
raise ValueError('%r cannot be used to seed a numpy.random.RandomState'
' instance' % seed)
```
So I use the code:
```
lda = models.LdaModel(
corpus_tfidf,
id2word=dic,
num_topics=2,
random_state=numpy.random.RandomState(10)
)
```
But it's still not working.
|
2016/09/21
|
[
"https://Stackoverflow.com/questions/39615436",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6858032/"
] |
First: there is no problem.
For international sites Unicode is often the preferred choice, as it can combine all possible scripts on one page (if the fonts can display them).
For a Chinese-only site, say combined with plain ASCII, you can display the page in a Chinese encoding.
Now the XML can be in UTF-8, a Unicode encoding that uses multiple bytes for Chinese. For Chinese another encoding might be more compact, but the ASCII tags in XML and HTML still make UTF-8 sufficiently efficient.
Java internally holds Strings as chars in Unicode (UTF-16 actually), so there is no problem there either.
Only the output encoding of the JSP needs to be chosen. The JSP itself can also be in UTF-8 with hard-coded Chinese strings, and still specify another encoding.
The following would be for Mainland China, resp. Taiwan/Hong Kong/Macau.
```
<%@ page contentType="text/html; charset=GB18030" %>
<%@ page contentType="text/html; charset=Big5" %>
```
Check that the installed Java has the charset.
|
Use dom4j and rewrite the XML file:
```
org.dom4j.io.SAXReader reader=new SAXReader();
org.dom4j.Document doc=reader.read(new File(yourFilePath));
org.dom4j.io.OutputFormat format=new OutputFormat();
format.setEncoding("utf-8");
org.dom4j.io.XMLWriter writer=new XMLWriter(new FileOutputStream(path),format);
writer.write(doc);
writer.close();
```
| 7,932
|
12,201,498
|
I am a beginner Python learner. I am trying to create a basic dictionary quiz where random meanings of words appear and the user has to input the correct word. I used the following method, but the randomness doesn't work: I always get the first word first, and when the last word finishes, I get infinite 'none' until I kill it. Using Python 3.2.
```
from random import choice
print("Welcome , let's get started")
input()
def word():
print('Humiliate')
a = input(':')
while a == 'abasement':
break
else:
word()
# --------------------------------------------------------- #
def word1():
print('Swelling')
a = input(':')
while a == 'billowing':
break
else:
word()
# ------------------------------------------------------------ #
wooo = [word(),word1()]
while 1==1:
print(choice(wooo))
```
Is there any faster way of doing this and getting real randomness? I tried classes, but they seem harder than this. Also, is there any way I can make Python not care about whether the input is a capital letter or not?
|
2012/08/30
|
[
"https://Stackoverflow.com/questions/12201498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1570162/"
] |
```
#!/usr/bin/env perl
use strict;
use warnings;
my $data = <<END;
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 152.225.244.1
DHCP Server . . . . . . . . . . . : 10.204.40.57
DNS Servers . . . . . . . . . . . : 10.204.127.11
10.207.2.50
10.200.10.6
Primary WINS Server . . . . . . . : 10.207.40.145
Secondary WINS Server . . . . . . : 10.232.40.38
Lease Obtained. . . . . . . . . . : Tuesday, August 28, 2012 6:45:12 AM
Lease Expires . . . . . . . . . . : Sunday, September 02, 2012 6:45:12 A
END
my @ips = ();
if ($data =~ /^DNS Servers[\s\.:]+((\d{2}\.\d{3}\.\d{1,3}\.\d{1,3}\s*)+)/m) {
@ips = split(/\s+/, $1);
print "$_\n" foreach(@ips);
}
```
|
I would use [`unpack`](http://perldoc.perl.org/functions/unpack.html) instead of regular expressions for parsing column-based data:
```
#!/usr/bin/env perl
use strict;
use warnings;
while (<DATA>) {
my ($ip) = unpack 'x36 A*';
print "$ip\n";
}
__DATA__
DNS Servers . . . . . . . . . . . : 10.204.127.11
10.207.2.50
10.200.10.6
Primary WINS Server . . . . . . . : 10.207.40.145
Secondary WINS Server . . . . . . : 10.232.40.38
```
You may have to adjust the number `36` to the actual number of characters that should be skipped.
| 7,933
|
58,801,994
|
I have images with borders like the one below. Can I use OpenCV or Python to remove the borders like this in images?
I used the following code to crop, but it didn't work.
```
copy = Image.fromarray(img_final_bin)
try:
bg = Image.new(copy.mode, copy.size, copy.getpixel((0, 0)))
except:
return None
diff = ImageChops.difference(copy, bg)
diff = ImageChops.add(diff, diff, 2.0, -100)
bbox = diff.getbbox()
if bbox:
return np.array(copy.crop(bbox))
```
|
2019/11/11
|
[
"https://Stackoverflow.com/questions/58801994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9186998/"
] |
Here's an approach using thresholding + contour filtering. The idea is to threshold to obtain a binary image. From here we find contours and filter using a maximum area threshold. We draw all contours that pass this filter onto a blank mask then perform bitwise operations to remove the border. Here's the result with the removed border
[](https://i.stack.imgur.com/ThU1A.png)
```
import cv2
import numpy as np
image = cv2.imread('1.png')
mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
area = cv2.contourArea(c)
if area < 10000:
cv2.drawContours(mask, [c], -1, (255,255,255), -1)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
result = cv2.bitwise_and(image,image,mask=mask)
result[mask==0] = (255,255,255)
cv2.imwrite('result.png', result)
cv2.waitKey()
```
|
You could also use Werk24's API for reading Technical Drawings: [www.werk24.io](https://werk24.io)
```py
import asyncio
from werk24 import W24TechreadClient, W24AskCanvasThumbnail

# async with must run inside a coroutine
async def read_drawing(document_bytes):
    async with W24TechreadClient.make_from_env() as session:
        return await session.read_drawing(document_bytes, [W24AskCanvasThumbnail()])

# response = asyncio.run(read_drawing(document_bytes))
```
It also allows you to extract the measures etc. See:
[Fully read](https://i.stack.imgur.com/ny2Gi.jpg)
I work at Werk24.
| 7,937
|
70,972,961
|
I have this list of objects which have p1, and p2 parameter (and some other stuff).
```
Job("J1", p1=8, p2=3)
Job("J2", p1=7, p2=11)
Job("J3", p1=5, p2=12)
Job("J4", p1=10, p2=5)
...
Job("J222",p1=22,p2=3)
```
With python, I can easily get max value of p2 by
```
(max(job.p2 for job in instance.jobs))
```
but how do I get the value of p1, where p2 is the biggest?
Seems simple, but I can't figure it out. Could you guys help me?
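(For what it's worth, the usual idiom is `max` with a `key` function, which returns the whole object so `p1` can be read off it; a minimal sketch:)
```
best = max(instance.jobs, key=lambda job: job.p2)
print(best.p1)
```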
|
2022/02/03
|
[
"https://Stackoverflow.com/questions/70972961",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18110528/"
] |
You can do it like this:
```html
<style>
.input-wrapper {
display: flex;
position: relative;
}
.input-wrapper .icon {
position: absolute;
top: 50%;
transform: translateY(-50%);
padding: 0 10px;
}
.input-wrapper input {
padding: 0 0 0 25px;
height: 50px;
}
</style>
<div class="input-wrapper">
<i class="icon">2</i>
<input type="text" class="form-control ps-5" placeholder="Name" name="s" required="">
</div>
```
|
There are two styling options you can use: block and flex.
Block will not be responsive, while flex will be responsive. I hope this solves your issue.
| 7,938
|
70,777,985
|
I'm trying to create a pandas multiIndexed dataframe that is a summary of the unique values in each column.
Is there an easier way to have this information summarized besides creating this dataframe?
Either way, it would be nice to know how to complete this code challenge. Thanks for your help! Here is the toy dataframe and the solution I attempted using a for loop with a dictionary and a value\_counts dataframe. Not sure if it's possible to incorporate MultiIndex.from\_frame or .from\_product here somehow...
Original Dataframe:
```
data = pd.DataFrame({'A': ['case', 'case', 'case', 'case', 'case'],
'B': [2001, 2002, 2003, 2004, 2005],
'C': ['F', 'M', 'F', 'F', 'M'],
'D': [0, 0, 0, 1, 0],
'E': [1, 0, 1, 0, 1],
'F': [1, 1, 0, 0, 0]})
A B C D E F
0 case 2001 F 0 1 1
1 case 2002 M 0 0 1
2 case 2003 F 0 1 0
3 case 2004 F 1 0 0
4 case 2005 M 1 1 0
```
Desired outcome:
```
unique percent
A case 100
B 2001 20
2002 20
2003 20
2004 20
2005 20
C F 60
M 40
D 0 80
1 20
E 0 40
1 60
F 0 60
1 40
```
My failed for loop attempt:
```
def unique_values(df):
values = {}
columns = []
df = pd.DataFrame(values, columns=columns)
for col in data:
df2 = data[col].value_counts(normalize=True)*100
values = values.update(df2.to_dict)
columns = columns.append(col*len(df2))
return df
unique_values(data)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-84-a341284fb859> in <module>
11
12
---> 13 unique_values(data)
<ipython-input-84-a341284fb859> in unique_values(df)
5 for col in data:
6 df2 = data[col].value_counts(normalize=True)*100
----> 7 values = values.update(df2.to_dict)
8 columns = columns.append(col*len(df2))
9 return df
TypeError: 'method' object is not iterable
```
Let me know if there's something obvious I'm missing! Still relatively new to EDA and pandas, any pointers appreciated.
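(For reference, a minimal sketch of one way to build that summary from the toy `data` above:)
```
summary = pd.concat(
    {col: data[col].value_counts(normalize=True).mul(100) for col in data},
    names=['column', 'unique']        # outer level = column name, inner = unique value
).rename('percent').to_frame()
print(summary)
```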
|
2022/01/19
|
[
"https://Stackoverflow.com/questions/70777985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17119804/"
] |
Well, there are many ways to handle errors; it really depends on what you want to do in case of an error.
In your current setup, the solution is straightforward: **first, `NotificationException` should extend `RuntimeException`**, thus, in case of an HTTP error, `.block()` will throw a `NotificationException`. It is a good practice to add it in the signature of the method, accompanied with a Javadoc entry.
In another method, you just need to catch the exception and do what you want with it.
```java
/**
* @param notification
* @throws NotificationException in case of a HTTP error
*/
public void sendNotification(String notification) throws NotificationException {
final WebClient webClient = WebClient.builder()
.defaultHeader(CONTENT_TYPE, APPLICATION_JSON_VALUE)
.build();
webClient.post()
.uri("http://localhost:9000/api")
.body(BodyInserters.fromValue(notification))
.retrieve()
.onStatus(HttpStatus::isError, clientResponse -> Mono.error(NotificationException::new))
.toBodilessEntity()
.block();
log.info("Notification delivered successfully");
}
public void someOtherMethod() {
try {
sendNotification("test");
} catch (NotificationException e) {
// Treat exception
}
}
```
In a more reactive style, you could return a `Mono` and use [`onErrorResume()`](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html#onErrorResume-java.lang.Class-java.util.function.Function-).
```java
public Mono<Void> sendNotification(String notification) {
final WebClient webClient = WebClient.builder()
.defaultHeader(CONTENT_TYPE, APPLICATION_JSON_VALUE)
.build();
return webClient.post()
.uri("http://localhost:9000/api")
.body(BodyInserters.fromValue(notification))
.retrieve()
.onStatus(HttpStatus::isError, clientResponse -> Mono.error(NotificationException::new))
.bodyToMono(Void.class);
}
public void someOtherMethod() {
sendNotification("test")
.onErrorResume(NotificationException.class, ex -> {
log.error(ex.getMessage());
return Mono.empty();
})
.doOnSuccess(unused -> log.info("Notification delivered successfully"))
.block();
}
```
|
Using imperative/blocking style you can surround it with a try-catch:
```java
try {
webClient.post()
.uri("http://localhost:9000/api")
.body(BodyInserters.fromValue(notification))
.retrieve()
.onStatus(HttpStatus::isError, clientResponse -> Mono.error(NotificationException::new))
.toBodilessEntity()
.block();
} catch(NotificationException e) {...}
```
A reactive solution would be to use the `onErrorResume` operator like this:
```java
webClient.post()
.uri("http://localhost:9000/api")
.body(BodyInserters.fromValue(notification))
.retrieve()
.onErrorResume(e -> someOtherMethod())
.toBodilessEntity();
```
Here, the reactive method `someOtherMethod()` will be executed in case of any error.
| 7,939
|
25,737,589
|
I run a django app via gunicorn, supervisor and nginx as reverse proxy and struggle to make my gunicorn access log show the actual ip instead of 127.0.0.1:
Log entries look like this at the moment:
```
127.0.0.1 - - [09/Sep/2014:15:46:52] "GET /admin/ HTTP/1.0" ...
```
supervisord.conf
```
[program:gunicorn]
command=/opt/middleware/bin/gunicorn --chdir /opt/middleware -c /opt/middleware/gunicorn_conf.py middleware.wsgi:application
stdout_logfile=/var/log/middleware/gunicorn.log
```
gunicorn\_conf.py
```
#!python
from os import environ
from gevent import monkey
import multiprocessing
monkey.patch_all()
bind = "0.0.0.0:9000"
x_forwarded_for_header = "X-Real-IP"
policy_server = False
worker_class = "socketio.sgunicorn.GeventSocketIOWorker"
accesslog = '-'
```
my nginx module conf
```
server {
listen 80;
root /opt/middleware;
index index.html index.htm;
client_max_body_size 200M;
server_name _;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect off;
real_ip_header X-Real-IP;
}
}
```
I tried all sorts of combinations in the location {} block, but can't see that it makes any difference. Any hint appreciated.
|
2014/09/09
|
[
"https://Stackoverflow.com/questions/25737589",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/640916/"
] |
The problem is that you need to configure [`gunicorn`'s logging](http://docs.gunicorn.org/en/0.17.0/configure.html#logging), because it will (by default) not display any custom headers.
From the documentation, we find out that the default access log format is controlled by [`access_log_format`](http://docs.gunicorn.org/en/0.17.0/configure.html#access-log-format) and is set to the following:
```
"%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
```
where:
* `h` is the remote address
* `l` is `-` (not used)
* `u` is `-` (not used, reserved)
* `t` is the time stamp
* `r` is the status line
* `s` is the status of the request
* `b` is length of response
* `f` is referrer
* `a` is user agent
You can also customize it with the following extra variables that are not used by default:
* `T` - request time (in seconds)
* `D` - request time (in microseconds)
* `p` - the process id
* `{Header}i` - request header (custom)
* `{Response}o` - response header (custom)
To `gunicorn`, all requests are coming from nginx so it will display that as the remote IP. To get it to log any custom headers (what you are sending from `nginx`) you'll need to adjust this parameter and add the appropriate variables, in your case you would set it to the following:
```
%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s" "%({X-Real-IP}i)s"
```
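For example, in `gunicorn_conf.py` this could look like (a sketch; it just appends the header to the default format):
```
accesslog = '-'
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s" "%({X-Real-IP}i)s"'
```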
|
Note that headers containing `-` should be referred to here by replacing `-` with `_`, thus `X-Forwarded-For` becomes `%({X_Forwarded_For}i)s`.
| 7,940
|
16,640,610
|
So I tried to use the parser generator [waxeye](http://waxeye.org/), but when I try to use the tutorial example of a program in Python using the generated parser, I get this error:
```
AttributeError: 'module' object has no attribute 'Parser'
```
Here's part of code its reference to:
```
import waxeye
import parser
p = parser.Parser()
```
The last line causes the error. I put the parser generated by waxeye in the same directory as the `__init__.py`; it's parser.py.
Anyone have any idea what's wrong with my code?
---
**Edit 20-05-2013:**
Beggining of the parser.py file:
```
from waxeye import Edge, State, FA, WaxeyeParser
class Parser (WaxeyeParser):
```
|
2013/05/19
|
[
"https://Stackoverflow.com/questions/16640610",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1334531/"
] |
Can you try this code and then please tell me the result?
Be sure to check all file names.
```
@Override
protected void onCreate(Bundle savedInstanceState) {
//.......
DatabaseHelper dbHelper = new DatabaseHelper(this);
dbHelper.createDatabase();
dbHelper.openDatabase();
// do stuff
Cursor data =dbHelper.Sample_use_of_helper();
dbHelper.close();
}
class DatabaseHelper extends SQLiteOpenHelper {
private static String DB_PATH = "/data/data/com.mycomp.myapp/databases/";
private static String DB_NAME = "database.db";
private SQLiteDatabase myDataBase;
private final Context myContext;
public DatabaseHelper (Context context) {
super(context, DB_NAME, null, 1);
this.myContext = context;
}
public void createDatabase() throws IOException {
boolean vtVarMi = isDatabaseExist();
if (!vtVarMi) {
this.getReadableDatabase();
try {
copyDataBase();
} catch (IOException e) {
throw new Error("Error copying database");
}
}
}
private void copyDataBase() throws IOException {
// Open your local db as the input stream
InputStream myInput = myContext.getAssets().open(DB_NAME);
// Path to the just created empty db
String outFileName = DB_PATH + DB_NAME;
// Open the empty db as the output stream
OutputStream myOutput = new FileOutputStream(outFileName);
// transfer bytes from the inputfile to the outputfile
byte[] buffer = new byte[1024];
int length;
while ((length = myInput.read(buffer)) > 0) {
myOutput.write(buffer, 0, length);
}
// Close the streams
myOutput.flush();
myOutput.close();
myInput.close();
}
private boolean isDatabaseExist() {
SQLiteDatabase kontrol = null;
try {
String myPath = DB_PATH + DB_NAME;
kontrol = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READONLY);
} catch (SQLiteException e) {
kontrol = null;
}
if (kontrol != null) {
kontrol.close();
}
return kontrol != null ? true : false;
}
public void openDatabase() throws SQLException {
// Open the database
String myPath = DB_PATH + DB_NAME;
myDataBase = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READWRITE);
}
public Cursor Sample_use_of_helper() {
return myDataBase.query("TABLE_NAME", null, null, null, null, null, null);
}
@Override
public synchronized void close() {
if (myDataBase != null)
myDataBase.close();
super.close();
}
@Override
public void onCreate(SQLiteDatabase db) {
}
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
}
}
```
|
Remove the **.db** extension from both **DB\_NAME** and **.open("database.db")**.
It looks ugly, but the same thing is currently working for me.
Before installing a new APK, please do ***Clear Data*** for the earlier installed APK.
And if it still doesn't work, please mention your database size.
| 7,941
|
48,630,957
|
I'm using subprocess in Python and passing an EC2 instance id at runtime. When I give the instance id, it is taken as a literal string, and an error is thrown saying the instance id is not valid. Please don't suggest the boto3 package, as my company has restricted its use. My question is how I can send the character+integer+special-character value at runtime.
Pgm:
```
#!/bin/python
import subprocess
import sys
def describe_one(instanceid):
subprocess.call(["aws", "ec2", "describe-instances", "--instance-ids", '"instanceid"'])
if __name__ == "__main__":
instanceid=sys.argv[1]
describe_one(instanceid)
```
Output
```
# ./sample.py i-061e41edcc2afc01
```
>
> An error occurred (InvalidInstanceID.Malformed) when calling the DescribeInstances operation: Invalid id: ""instanceid""
>
>
>
|
2018/02/05
|
[
"https://Stackoverflow.com/questions/48630957",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9318698/"
] |
Are you intending to pass the literal string `"instanceid"`?
Surely you mean to use its value?
```
subprocess.call(["aws", "ec2", "describe-instances", "--instance-ids", f'"{instanceid}"'])
```
or
```
subprocess.call(["aws", "ec2", "describe-instances", "--instance-ids", '"{}"'.format(instanceid)])
```
if not using Python3.6+
|
This has nothing to do with "passing integer or character types", whatever that means.
You are passing the literal value `"instanceid"`, complete with quotes, rather than the value of the instanceid argument to your script. You don't need both sets of quotes.
(And really, you should take up this silly "restriction" about using boto3 with your boss. You can't be expected to do your job without the correct tools.)
| 7,942
|
17,436,045
|
I'm sure this is something easy to do for someone with programming skills (unlike me). I am playing around with the Google Sites API. Basically, I want to be able to batch-create a bunch of pages, instead of having to do them one by one using the slow web form, which is a pain.
I have installed all the necessary files, read the [documentation](https://developers.google.com/google-apps/sites/docs/1.0/developers_guide_python), and successfully ran [this](https://code.google.com/p/gdata-python-client/source/browse/samples/sites/sites_example.py) sample file. As a matter of fact, this sample file already has the python code for creating a page in Google Sites:
```
elif choice == 4:
print "\nFetching content feed of '%s'...\n" % self.client.site
feed = self.client.GetContentFeed()
try:
selection = self.GetChoiceSelection(
feed, 'Select a parent to upload to (or hit ENTER for none): ')
except ValueError:
selection = None
page_title = raw_input('Enter a page title: ')
parent = None
if selection is not None:
parent = feed.entry[selection - 1]
new_entry = self.client.CreatePage(
'webpage', page_title, '<b>Your html content</b>',
parent=parent)
if new_entry.GetAlternateLink():
print 'Created. View it at: %s' % new_entry.GetAlternateLink().href
```
I understand the creation of a page revolves around page\_title and new\_entry and CreatePage. However, instead of creating one page at a time, I want to create many.
I've done some research, and I gather I need something like
```
page_titles = input("Enter a list of page titles separated by commas: ").split(",")
```
to gather a list of page titles (like page1, page2, page3, etc. -- I plan to use a text editor or spreadsheet to generate a long list of comma separated names).
Now I am trying to figure out how to get that string and "feed" it to new\_entry so that it creates a separate page for each value in the string. I can't figure out how to do that. Can anyone help, please?
In case it helps, this is what the Google API needs to create a page:
```
entry = client.CreatePage('webpage', 'New WebPage Title', html='<b>HTML content</b>')
print 'Created. View it at: %s' % entry.GetAlternateLink().href
```
Thanks.
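(For reference, a minimal sketch of the loop described above, reusing only the calls already shown and assuming `client` is the authenticated client from the sample:)
```
page_titles = raw_input("Enter a list of page titles separated by commas: ").split(",")

for title in page_titles:
    entry = client.CreatePage('webpage', title.strip(), html='<b>HTML content</b>')
    print 'Created. View it at: %s' % entry.GetAlternateLink().href
```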
|
2013/07/02
|
[
"https://Stackoverflow.com/questions/17436045",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2544281/"
] |
Variable 'initialization' will always trigger an event. From the last Verilog standard([IEEE 1364-2005](http://standards.ieee.org/findstds/standard/1364-2005.html)):
>
> If a variable declaration assignment is used (see 6.2.1), the variable
> shall take this value as if the assignment occurred in a blocking
> assignment in an initial construct.
>
>
>
Also be aware
>
> If the same variable is assigned different values both in an initial
> block and in a variable declaration assignment, the order of the
> evaluation is undefined.
>
>
>
|
A good reference for order of events is this paper:
<http://www.sunburst-design.com/papers/CummingsSNUG2000SJ_NBA_rev1_2.pdf>
page 6 has a diagram of the order of events as far as evaluation of variables and triggering goes.
| 7,943
|
28,379,373
|
In python is there any way at all to get a function to use a variable and return it without passing it in as an argument?
|
2015/02/07
|
[
"https://Stackoverflow.com/questions/28379373",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3983128/"
] |
What you ask **is** possible. If a function doesn't re-bind (e.g, assign to) a name, but uses it, that name is taken as a *global* one, e.g:
```
def f():
print(foo)
foo = 23
f()
```
will do what you ask. However, it's a dubious idea already: why not just `def f(foo):` and call `f(23)`, so much more direct and clearer?!
If the function needs to bind the name, it's even more dubious, even though the `global` statement allows it...:
```
def f():
global foo
print(foo)
foo += 1
foo = 23
f()
print(foo)
```
Here, the **much** better alternative would be:
```
def f(foo):
print(foo)
foo += 1
return foo
foo = f(23)
print(foo)
```
Same effect - much cleaner, better-maintainable, preferable structure.
**Why** are you looking for the inferior (though feasible) approach, when a better one is easier and cleaner...?!
|
As @Alex Martelli pointed out, you *can*... but *should* you? That's the more relevant question, IMO.
I understand the *appeal* of what you're asking. It seems like it should *just be* good form because otherwise you'd have to pass the variables everywhere, which of course seems wasteful.
I won't presume to know how far along you are in your Python-Fu (I can only guess), but back when I was more of a beginner there were times when I had the same question you do now. Using `global` felt ugly (everyone said *so*), yet I had this irresistible urge to do exactly what you're asking. In the end, I went with passing variables in to functions.
**Why?**
Because it's a code smell. You may not see it, but there are good reasons to avoid it in the long run.
*However, there is a name for what you're asking*:
### Classes
What you need are *instance* variables. Technically, it might just be a step away from your current situation. So, *you might be wondering*: Why such a fuss about it? After all, instance variables are widely accepted and global variables *seem* similar in nature. So what's the big problem with using global variables in regular functions?
The problem is mostly *conceptual*. Classes are designed for grouping related behaviour (functions) and state (player position variables) and have a wide array of functionality that goes along with it (which wouldn't otherwise be possible with only functions). Classes are also ***semantically significant***, in that they signal a clear intention in your mind and to future readers about the way the program is organized and the way it operates.
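A minimal sketch of the idea (all names here are made up):
```
class Player:
    def __init__(self):
        self.x = 0            # state lives on the instance, not in a global

    def move(self, dx):       # behaviour that reads and writes that state
        self.x += dx
        return self.x

p = Player()
p.move(5)                     # no globals, no long argument lists
```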
I won't go into much further detail other than to say this: You should probably reconsider your strategy because the road it leads to inevitably is this:
* 1) Increased coupling in your codebase
* 2) Undecipherable flow of execution
* 3) Debugging bonanza
* 4) **Spaghetti** code.
Right now, it may seem like an exaggeration. The point is you'd greatly benefit to learn how to organize your program in a way that makes sense, ***conceptually***, instead of using what currently requires the least amount of code or what seems intuitive to you *right now*. You may be working on a small application for all I know and with one only ***1 module***, so it might resemble the use of a single class.
The *worst* thing that can happen is you'll adopt bad coding practices. In the long run, this might hurt you.
| 7,944
|
46,318,530
|
Hello all I'm very new to python, so just bear with me.
I have a sample json file in this format.
```
[{
"5":"5",
"0":"0",
"1":"1"},{
"14":"14",
"11":"11",
"15":"15"},{
"25":"25",
"23":"23",
"22":"22"}]
```
I would like to convert the JSON file to CSV in a specific way. All the values in one map, i.e., between `{` and `}`, in the JSON should be converted to a CSV file (say FILE\_A.csv).
For the above example,
FILE\_A.csv,
```
DAILY,COUNT
5,5
0,0
1,1
```
FILE\_B.csv,
```
WEEKLY,COUNT
14,14
11,11
15,15
```
FILE\_C.csv,
```
MONTHLY,COUNT
25,25
23,23
22,22
```
There will only be 3 maps in the json.
**Can anyone suggest the best way to convert the 3 maps in the json to 3 different csv files of the above structure?**
Thanks in advance.
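(For reference, a minimal sketch of one way to do this, assuming the three maps arrive in DAILY/WEEKLY/MONTHLY order and the file is named sample.json:)
```
import csv
import json

targets = [('FILE_A.csv', 'DAILY'), ('FILE_B.csv', 'WEEKLY'), ('FILE_C.csv', 'MONTHLY')]

with open('sample.json') as f:
    maps = json.load(f)

for (filename, header), mapping in zip(targets, maps):
    with open(filename, 'w', newline='') as out:
        writer = csv.writer(out)
        writer.writerow([header, 'COUNT'])
        for key, value in mapping.items():
            writer.writerow([key, value])
```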
|
2017/09/20
|
[
"https://Stackoverflow.com/questions/46318530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7923318/"
] |
`bytes(iterator)` creates a bytes object from an iterator using the internal C-API function `_PyBytes_FromIterator`, which uses the special `_PyBytes_Writer` protocol. Internally it uses a buffer, which is resized when it overflows using the rule:
```
bufsize += bufsize / OVERALLOCATE_FACTOR
```
For Linux OVERALLOCATE\_FACTOR=4, for Windows OVERALLOCATE\_FACTOR=2.
In effect, this process looks like writing to a file in RAM. At the end, the contents of the buffer are returned.
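A toy illustration of that growth rule (pure Python; the real writer works in C, and the starting size here is made up):
```
bufsize = 512                      # hypothetical starting size
OVERALLOCATE_FACTOR = 4            # Linux value, per the source
resizes = 0
while bufsize < 33 * 1024 * 1024:  # grow toward ~33 MB
    bufsize += bufsize // OVERALLOCATE_FACTOR
    resizes += 1
print(resizes, bufsize)            # only a few dozen resizes in total
```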
|
This is probably more a remark or discussion starter than an answer, but I think it's better to format it like this.
I'm just hooking in because I find this topic very interesting as well.
I would recommend pasting the real call and a generator mock, since IMHO the generator expression example does not fit your question very well, and the example code you pasted does not work.
Normally you have a generator like this (in your case calling the module instead of generating numbers, of course):
**Update: deleted the explicit byte creatinon.**
```
def genbytes():
for i in range(100):
yield i**2
```
You would the probably call it with something like this:
```
for newbyte in genbytes():
print(newbyte)
print(id(newbyte))
input("press key...")
```
leads to:
>
> Do I have to assume that for every element the generator yields, there
> is a new bytes object created
>
>
>
I would say absolutely yes. That's the way `yield` works, and it's what we can see above: the bytes value always has a new id.
Normally this shouldn't be a problem, since you want to consume the bytes one by one and then collect them into something like the `bytearray` you suggested, using `bytearray.append`.
But even if new bytes objects are produced, I think this at least does not produce the 33 MB of input at once and return it.
I'll add this excerpt from [PEP 289](https://www.python.org/dev/peps/pep-0289/) where the equivalence of a generator expression and a generator in "function" style is pointed out:
>
> The semantics of a generator expression are equivalent to creating an
> anonymous generator function and calling it. For example:
>
>
>
```
g = (x**2 for x in range(10))
print g.next()
```
is equivalent to:
```
def __gen(exp):
for x in exp:
yield x**2
g = __gen(iter(range(10)))
print g.next()
```
So `bytes(gen)` also calls `gen.next()` (which yields ints in this case, but probably bytes in your real case) while iterating over the `gen`.
| 7,945