| qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string) |
|---|---|---|---|---|---|
34,461,003
|
I'm new to programming and am learning Python with the book *Learn Python the Hard Way*. I'm at exercise 36, where we're asked to write our own simple game.
<http://learnpythonthehardway.org/book/ex36.html>
The problem is: when I'm in the hallway (or more precisely, in 'choices') and I type 'gate', the game responds as if I had said "man" or "oldman", etc.
What am I doing wrong?
EDIT: I shouldn't have written 'if "oldman" or "man" in choice'; instead, I should have put 'in choice' after every option.
However, the next problem: the man never stands up and curses me, he just keeps frowning. Why does the game not proceed to that elif?
```
experiences = []

def choices():
    while True:
        choice = raw_input("What do you wish to do? ").lower()
        if "oldman" in choice or "man" in choice or "ask" in choice:
            print oldman
        elif "gate" in choice or "fight" in choice or "try" in choice and not "pissedoff" in experiences:
            print "The old man frowns. \"I said, you shall not pass through this gate until you prove yourself worthy!\""
            experiences.append("pissedoff")
        elif "gate" in choice or "fight" in choice or "try" in choice and "pissedoff" in experiences and not "reallypissed" in experiences:
            print "The old man is fed up. \"I told you not to pass this gate until you are worthy! Try me again and you will die.\""
            experiences.append("reallypissed")
        elif "gate" in choice or "fight" in choice or "try" in choice and "reallypissed" in experiences:
            print "The old man stands up and curses you. You die"
            exit()
        elif "small" in choice:
            print "You go into the small room"
            firstroom()
        else:
            print "Come again?"
```
**EDIT: FIXED IT!!**
```
def choices():
    while True:
        choice = raw_input("What do you wish to do? ").lower()
        if choice in ['oldman', 'man', 'ask']:
            print oldman
        elif choice in ['gate', 'fight', 'try'] and not 'pissedoff' in experiences:
            print "The old man frowns. \"I said, you shall not pass through this gate until you prove yourself worthy!\""
            experiences.append("pissedoff")
        elif choice in ['gate', 'fight', 'try'] and not 'reallypissed' in experiences:
            print "The old man is fed up. \"I told you not to pass this gate until you are worthy! Try me again and you will die.\""
            experiences.append("reallypissed")
        elif choice in ['gate', 'fight', 'try'] and 'reallypissed' in experiences:
            print "The old man stands up and curses you. You die"
            exit()
        elif "small" in choice:
            print "You go into the small room"
            firstroom()
        else:
            print "Come again?"
```
Thanks for all your help :).
|
2015/12/25
|
[
"https://Stackoverflow.com/questions/34461003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5713452/"
] |
> The problem is, when I'm in the hallway (or more precisely, in
> 'choices') and I write 'gate' the game responds as if I said "man" or
> "oldman" etc.
Explanation
-----------
What you wrote was being seen by Python as:
```
if ("oldman") or ("man") or ("ask" in choice):
```
This evaluates to True if **any** of those 3 conditions has a ["truthy" value](https://docs.python.org/3/library/stdtypes.html#truth-value-testing). A non-empty string like "oldman" or "man" evaluates to True, which explains why your code behaved the way it did.
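A quick interactive session (a minimal sketch) makes the difference visible:
```
>>> choice = "gate"
>>> bool("oldman" or "man" or "ask" in choice)  # "oldman" is a non-empty string, hence truthy
True
>>> "oldman" in choice or "man" in choice or "ask" in choice  # what was intended
False
```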
Test if a choice is in a list of choices:
-----------------------------------------
You're looking for
```
if choice in ['oldman', 'man', 'ask']:
```
Or, at least:
```
if 'oldman' in choice or 'man' in choice or 'ask' in choice:
```
|
Look carefully at the evaluation of your conditions:
```
if "oldman" or "man" or "ask" in choice:
```
This will first evaluate `"oldman"`, which is a non-empty string and therefore truthy, so the `if` condition will evaluate to `True`.
I think what you intended is:
```
if "oldman" in choice or "man" in choice or "ask" in choice:
```
Alternatively you could use:
```
if choice in ["oldman" , "man", "ask" ]:
```
The one caveat is that this method looks for an exact match, not a substring match.
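A short illustration of that caveat:
```
>>> "ask the old man" in ['oldman', 'man', 'ask']  # exact match against list items
False
>>> "man" in "ask the old man"                     # substring match
True
```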
|
49,978,967
|
I have a nested list like this:
```
[['Asan', '20180418', 'A', '17935.00'],
['Asan', '20180417', 'B', '17948.00'],
['Asan', '20180416', 'C', '17979.00'],
['Asan', '20180416', 'D', '17816.00'],
.
.
.
.
['Asan', '20180415', 'N', '18027.00']]
```
I need to do some calculations on the 3rd element of every sublist together with the one before it, for example:
```
if (A/B) > 0.05: ....
if (B/C) > 0.05: ....
if (C/D) > 0.05: ....
.
.
if (N-1/N) == 1: ....
```
and if the condition is not met, I pop that list.
How can I approach this, and what is the most pythonic and fastest way to do it?
EDIT:
Let me clarify:
A and B and so on are FLOATS; I just named them A, B, ... for better understanding.
The if statements are supposed to determine how much greater A is than the previous float B.
|
2018/04/23
|
[
"https://Stackoverflow.com/questions/49978967",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8077586/"
] |
[`zip`](https://docs.python.org/3.4/library/functions.html#zip) makes things easy here for pair-wise comparison:
```
l = [['Asan', '20180418', 'A', '17935.00'],
     ['Asan', '20180417', 'B', '17948.00'],
     ['Asan', '20180416', 'C', '17979.00'],
     ['Asan', '20180416', 'C', '200.00'],
     ['Asan', '20180416', 'D', '17816.00']]

new_list = []
for x, y in zip(l, l[1:]):
    if x[2] == y[2]:
        new_list.extend([x, y])

print(new_list)
# [['Asan', '20180416', 'C', '17979.00'],
#  ['Asan', '20180416', 'C', '200.00']]
```
**EDIT**:
If `'A'`, `'B'`, `'C'`, ... are `float`s, then to check the condition you require, you can replace the comparison in the above example and do whatever you need when that condition is satisfied.
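For instance, a minimal sketch of that replacement, assuming the floats are the strings at index 3 and using the ratio test from the question (the orientation of the ratio is an assumption):
```
new_list = []
for x, y in zip(l, l[1:]):
    if float(y[3]) / float(x[3]) > 0.05:  # the question's ratio condition
        new_list.append(y)
```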
|
Do you mean something like this?
```
l = [['Asan', '20180418', 'A', '17935.00'],
     ['Asan', '20180417', 'B', '17948.00'],
     ['Asan', '20180416', 'C', '17979.00'],
     ['Asan', '20180416', 'C', '200.00'],
     ['Asan', '20180416', 'D', '17816.00']]

last_seen = l[0]
new_list = []
for r in l[1:]:
    if r[2] == last_seen[2]:
        new_list.append(last_seen)
        new_list.append(r)
    last_seen = r

print(new_list)
```
Which gives:
```
[['Asan', '20180416', 'C', '17979.00'], ['Asan', '20180416', 'C', '200.00']]
```
|
49,978,967
|
I have a nested list like this:
```
[['Asan', '20180418', 'A', '17935.00'],
['Asan', '20180417', 'B', '17948.00'],
['Asan', '20180416', 'C', '17979.00'],
['Asan', '20180416', 'D', '17816.00'],
.
.
.
.
['Asan', '20180415', 'N', '18027.00']]
```
I need to do some calculations on the 3rd element of every sublist together with the one before it, for example:
```
if (A/B) > 0.05: ....
if (B/C) > 0.05: ....
if (C/D) > 0.05: ....
.
.
if (N-1/N) == 1: ....
```
and if the condition is not met, I pop that list.
How can I approach this, and what is the most pythonic and fastest way to do it?
EDIT:
Let me clarify:
A and B and so on are FLOATS; I just named them A, B, ... for better understanding.
The if statements are supposed to determine how much greater A is than the previous float B.
|
2018/04/23
|
[
"https://Stackoverflow.com/questions/49978967",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8077586/"
] |
Do you mean something like this?
```
l = [['Asan', '20180418', 'A', '17935.00'],
     ['Asan', '20180417', 'B', '17948.00'],
     ['Asan', '20180416', 'C', '17979.00'],
     ['Asan', '20180416', 'C', '200.00'],
     ['Asan', '20180416', 'D', '17816.00']]

last_seen = l[0]
new_list = []
for r in l[1:]:
    if r[2] == last_seen[2]:
        new_list.append(last_seen)
        new_list.append(r)
    last_seen = r

print(new_list)
```
Which gives:
```
[['Asan', '20180416', 'C', '17979.00'], ['Asan', '20180416', 'C', '200.00']]
```
|
Try something like this:
```
for i in range(len(l) - 1):
    if l[i][2] <Operator> l[i + 1][2] <Condition> exp/value:
        # do something here
```
|
49,978,967
|
I have a nested list like this:
```
[['Asan', '20180418', 'A', '17935.00'],
['Asan', '20180417', 'B', '17948.00'],
['Asan', '20180416', 'C', '17979.00'],
['Asan', '20180416', 'D', '17816.00'],
.
.
.
.
['Asan', '20180415', 'N', '18027.00']]
```
I need to do some calculations on the 3rd element of every sublist together with the one before it, for example:
```
if (A/B) > 0.05: ....
if (B/C) > 0.05: ....
if (C/D) > 0.05: ....
.
.
if (N-1/N) == 1: ....
```
and if the condition is not met, I pop that list.
How can I approach this, and what is the most pythonic and fastest way to do it?
EDIT:
Let me clarify:
A and B and so on are FLOATS; I just named them A, B, ... for better understanding.
The if statements are supposed to determine how much greater A is than the previous float B.
|
2018/04/23
|
[
"https://Stackoverflow.com/questions/49978967",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8077586/"
] |
[`zip`](https://docs.python.org/3.4/library/functions.html#zip) makes things easy here for pair-wise comparison:
```
l = [['Asan', '20180418', 'A', '17935.00'],
     ['Asan', '20180417', 'B', '17948.00'],
     ['Asan', '20180416', 'C', '17979.00'],
     ['Asan', '20180416', 'C', '200.00'],
     ['Asan', '20180416', 'D', '17816.00']]

new_list = []
for x, y in zip(l, l[1:]):
    if x[2] == y[2]:
        new_list.extend([x, y])

print(new_list)
# [['Asan', '20180416', 'C', '17979.00'],
#  ['Asan', '20180416', 'C', '200.00']]
```
**EDIT**:
If `'A'`, `'B'`, `'C'`, ... are `float`s, then to check the condition you require, you can replace the comparison in the above example and do whatever you need when that condition is satisfied.
|
Since you need to test your condition for every element in the top-level list, you will need to iterate over it.
Since the condition cannot be computed for the first element (there is no 'before' element), we can start at index 1 instead of 0.
The 3rd element in the sublists can be referenced by its index.
We can get the sublist before the one we are currently looking at by decreasing the index.
Because of the loop it will not be very fast, but the most pythonic way would be a one-line list comprehension.
```
new_list = [l[i] for i in range(1, len(l)) if float(l[i][3])/float(l[i-1][3]) > 0.05]
#new_list = [['Asan', '20180417', 'B', '17948.00'], ['Asan', '20180416', 'C', '17979.00'], ['Asan', '20180416', 'D', '17816.00']]
```
For readability, I would go with a multi-line for loop.
```
new_list = []
for i in range(1, len(l)):
    if float(l[i][3]) / float(l[i-1][3]) > 0.05:
        new_list.append(l[i])
```
|
49,978,967
|
I have a nested list like this:
```
[['Asan', '20180418', 'A', '17935.00'],
['Asan', '20180417', 'B', '17948.00'],
['Asan', '20180416', 'C', '17979.00'],
['Asan', '20180416', 'D', '17816.00'],
.
.
.
.
['Asan', '20180415', 'N', '18027.00']]
```
I need to do some calculations on the 3rd element of every sublist together with the one before it, for example:
```
if (A/B) > 0.05: ....
if (B/C) > 0.05: ....
if (C/D) > 0.05: ....
.
.
if (N-1/N) == 1: ....
```
and if the condition is not met, I pop that list.
How can I approach this, and what is the most pythonic and fastest way to do it?
EDIT:
Let me clarify:
A and B and so on are FLOATS; I just named them A, B, ... for better understanding.
The if statements are supposed to determine how much greater A is than the previous float B.
|
2018/04/23
|
[
"https://Stackoverflow.com/questions/49978967",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8077586/"
] |
Since you need to test your condition for every element in the top-level list, you will need to iterate over it.
Since the condition cannot be computed for the first element (there is no 'before' element), we can start at index 1 instead of 0.
The 3rd element in the sublists can be referenced by its index.
We can get the sublist before the one we are currently looking at by decreasing the index.
Because of the loop it will not be very fast, but the most pythonic way would be a one-line list comprehension.
```
new_list = [l[i] for i in range(1, len(l)) if float(l[i][3])/float(l[i-1][3]) > 0.05]
#new_list = [['Asan', '20180417', 'B', '17948.00'], ['Asan', '20180416', 'C', '17979.00'], ['Asan', '20180416', 'D', '17816.00']]
```
For readability, I would go with a multi-line for loop.
```
new_list = []
for i in range(1, len(l)):
    if float(l[i][3]) / float(l[i-1][3]) > 0.05:
        new_list.append(l[i])
```
|
Try something like this:
```
for i in range(len(l) - 1):
    if l[i][2] <Operator> l[i + 1][2] <Condition> exp/value:
        # do something here
```
|
49,978,967
|
I have a nested list like this:
```
[['Asan', '20180418', 'A', '17935.00'],
['Asan', '20180417', 'B', '17948.00'],
['Asan', '20180416', 'C', '17979.00'],
['Asan', '20180416', 'D', '17816.00'],
.
.
.
.
['Asan', '20180415', 'N', '18027.00']]
```
I need to do some calculations on the 3rd element of every sublist together with the one before it, for example:
```
if (A/B) > 0.05: ....
if (B/C) > 0.05: ....
if (C/D) > 0.05: ....
.
.
if (N-1/N) == 1: ....
```
and if the condition is not met, I pop that list.
How can I approach this, and what is the most pythonic and fastest way to do it?
EDIT:
Let me clarify:
A and B and so on are FLOATS; I just named them A, B, ... for better understanding.
The if statements are supposed to determine how much greater A is than the previous float B.
|
2018/04/23
|
[
"https://Stackoverflow.com/questions/49978967",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8077586/"
] |
[`zip`](https://docs.python.org/3.4/library/functions.html#zip) makes things easy here for pair-wise comparison:
```
l = [['Asan', '20180418', 'A', '17935.00'],
     ['Asan', '20180417', 'B', '17948.00'],
     ['Asan', '20180416', 'C', '17979.00'],
     ['Asan', '20180416', 'C', '200.00'],
     ['Asan', '20180416', 'D', '17816.00']]

new_list = []
for x, y in zip(l, l[1:]):
    if x[2] == y[2]:
        new_list.extend([x, y])

print(new_list)
# [['Asan', '20180416', 'C', '17979.00'],
#  ['Asan', '20180416', 'C', '200.00']]
```
**EDIT**:
If `'A'`, `'B'`, `'C'`, ... are `float`s, then to check the condition you require, you can replace the comparison in the above example and do whatever you need when that condition is satisfied.
|
Try something like this:
```
for i in range(len(l) - 1):
    if l[i][2] <Operator> l[i + 1][2] <Condition> exp/value:
        # do something here
```
|
9,874,042
|
I'm trying to complete 100 model runs on my 8-processor 64-bit Windows 7 machine. I'd like to run 7 instances of the model concurrently to decrease my total run time (approx. 9.5 min per model run). I've looked at several threads pertaining to the Multiprocessing module of Python, but am still missing something.
[Using the multiprocessing module](https://stackoverflow.com/questions/7780478/using-the-multiprocessing-module)
[How to spawn parallel child processes on a multi-processor system?](https://stackoverflow.com/questions/884650/python-spawn-parallel-child-processes-on-a-multi-processor-system-use-multipr)
[Python Multiprocessing queue](https://stackoverflow.com/questions/5506227/python-multiprocessing-queue)
**My Process:**
I have 100 different parameter sets I'd like to run through SEAWAT/MODFLOW to compare the results. I have pre-built the model input files for each model run and stored them in their own directories. What I'd like to be able to do is have 7 models running at a time until all realizations have been completed. There needn't be communication between processes or display of results. So far I have only been able to spawn the models sequentially:
```
import os,subprocess
import multiprocessing as mp

ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'

files = []
for f in os.listdir(ws + r'\fieldgen\reals'):
    if f.endswith('.npy'):
        files.append(f)

## def work(cmd):
##     return subprocess.call(cmd, shell=False)

def run(f,def_param=ws):
    real = f.split('_')[2].split('.')[0]
    print 'Realization %s' % real

    mf2k = r'c:\modflow\mf2k.1_19\bin\mf2k.exe '
    mf2k5 = r'c:\modflow\MF2005_1_8\bin\mf2005.exe '
    seawatV4 = r'c:\modflow\swt_v4_00_04\exe\swt_v4.exe '
    seawatV4x64 = r'c:\modflow\swt_v4_00_04\exe\swt_v4x64.exe '

    exe = seawatV4x64
    swt_nam = ws + r'\reals\real%s\ss\ss.nam_swt' % real

    os.system( exe + swt_nam )

if __name__ == '__main__':
    p = mp.Pool(processes=mp.cpu_count()-1)  # leave 1 processor available for system and other processes
    tasks = range(len(files))
    results = []
    for f in files:
        r = p.map_async(run(f), tasks, callback=results.append)
```
I changed the `if __name__ == '__main__':` block to the following, in hopes it would fix the lack of parallelism I feel is being imparted on the above script by the `for` loop. However, the model fails to even run (no Python error):
```
if __name__ == '__main__':
    p = mp.Pool(processes=mp.cpu_count()-1)  # leave 1 processor available for system and other processes
    p.map_async(run,((files[f],) for f in range(len(files))))
```
Any and all help is greatly appreciated!
**EDIT 3/26/2012 13:31 EST**
Using the "Manual Pool" method in @J.F. Sebastian's answer below I get parallel execution of my external .exe. Model realizations are called up in batches of 8 at a time, but it doesn't wait for those 8 runs to complete before calling up the next batch and so on:
```
from __future__ import print_function
import os,subprocess,sys
import multiprocessing as mp
from Queue import Queue
from threading import Thread

def run(f,ws):
    real = f.split('_')[-1].split('.')[0]
    print('Realization %s' % real)
    seawatV4x64 = r'c:\modflow\swt_v4_00_04\exe\swt_v4x64.exe '
    swt_nam = ws + r'\reals\real%s\ss\ss.nam_swt' % real
    subprocess.check_call([seawatV4x64, swt_nam])

def worker(queue):
    """Process files from the queue."""
    for args in iter(queue.get, None):
        try:
            run(*args)
        except Exception as e:  # catch exceptions to avoid exiting the
                                # thread prematurely
            print('%r failed: %s' % (args, e,), file=sys.stderr)

def main():
    # populate files
    ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
    wdir = os.path.join(ws, r'fieldgen\reals')

    q = Queue()
    for f in os.listdir(wdir):
        if f.endswith('.npy'):
            q.put_nowait((os.path.join(wdir, f), ws))

    # start threads
    threads = [Thread(target=worker, args=(q,)) for _ in range(8)]
    for t in threads:
        t.daemon = True  # threads die if the program dies
        t.start()
    for _ in threads: q.put_nowait(None)  # signal no more files
    for t in threads: t.join()            # wait for completion

if __name__ == '__main__':
    mp.freeze_support()  # optional if the program is not frozen
    main()
```
No error traceback is available. The `run()` function performs its duty when called on a single model realization file, just as it does with multiple files. The only difference is that with multiple files it is called `len(files)` times, though each of the instances immediately closes; only one model run is allowed to finish, at which time the script exits gracefully (exit code 0).
Adding some print statements to `main()` reveals some information about active thread counts as well as thread status (note that this is a test on only 8 of the realization files, to make the screenshot more manageable; theoretically all 8 files should be run concurrently, however the behavior continues where they are spawned and immediately die, except for one):
```
def main():
    # populate files
    ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
    wdir = os.path.join(ws, r'fieldgen\test')

    q = Queue()
    for f in os.listdir(wdir):
        if f.endswith('.npy'):
            q.put_nowait((os.path.join(wdir, f), ws))

    # start threads
    threads = [Thread(target=worker, args=(q,)) for _ in range(mp.cpu_count())]
    for t in threads:
        t.daemon = True  # threads die if the program dies
        t.start()
    print('Active Count a', threading.activeCount())
    for _ in threads:
        print(_)
        q.put_nowait(None)  # signal no more files
    for t in threads:
        print(t)
        t.join()  # wait for completion
    print('Active Count b', threading.activeCount())
```

**Note:** the line which reads "`D:\\Data\\Users...`" is the error information thrown when I manually stop the model from running to completion. Once I stop the model running, the remaining thread status lines get reported and the script exits.
**EDIT 3/26/2012 16:24 EST**
SEAWAT does allow concurrent execution, as I've done this in the past, spawning instances manually using iPython and launching from each model file folder. This time around, I'm launching all model runs from a single location, namely the directory where my script resides. It looks like the culprit may be the way SEAWAT saves some of its output. When SEAWAT is run, it immediately creates files pertaining to the model run. One of these files is not being saved to the directory in which the model realization is located, but to the top directory where the script is located. This prevents any subsequent threads from saving the same file name in the same location (which they all want to do, since these filenames are generic and non-specific to each realization). The SEAWAT windows were not staying open long enough for me to read or even see that there was an error message; I only realized this when I went back and tried to run the code using iPython, which directly displays the printout from SEAWAT instead of opening a new window to run the program.
I am accepting @J.F. Sebastian's answer as it is likely that once I resolve this model-executable issue, the threading code he has provided will get me where I need to be.
**FINAL CODE**
Added the `cwd` argument to `subprocess.check_call` to start each instance of SEAWAT in its own directory. Very key.
```
from __future__ import print_function
import os,subprocess,sys
import multiprocessing as mp
from Queue import Queue
from threading import Thread
import threading

def run(f,ws):
    real = f.split('_')[-1].split('.')[0]
    print('Realization %s' % real)
    seawatV4x64 = r'c:\modflow\swt_v4_00_04\exe\swt_v4x64.exe '
    cwd = ws + r'\reals\real%s\ss' % real
    swt_nam = ws + r'\reals\real%s\ss\ss.nam_swt' % real
    subprocess.check_call([seawatV4x64, swt_nam], cwd=cwd)

def worker(queue):
    """Process files from the queue."""
    for args in iter(queue.get, None):
        try:
            run(*args)
        except Exception as e:  # catch exceptions to avoid exiting the
                                # thread prematurely
            print('%r failed: %s' % (args, e,), file=sys.stderr)

def main():
    # populate files
    ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
    wdir = os.path.join(ws, r'fieldgen\reals')

    q = Queue()
    for f in os.listdir(wdir):
        if f.endswith('.npy'):
            q.put_nowait((os.path.join(wdir, f), ws))

    # start threads
    threads = [Thread(target=worker, args=(q,)) for _ in range(mp.cpu_count()-1)]
    for t in threads:
        t.daemon = True  # threads die if the program dies
        t.start()
    for _ in threads: q.put_nowait(None)  # signal no more files
    for t in threads: t.join()            # wait for completion

if __name__ == '__main__':
    mp.freeze_support()  # optional if the program is not frozen
    main()
```
|
2012/03/26
|
[
"https://Stackoverflow.com/questions/9874042",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/698809/"
] |
I don't see any computations in the Python code. If you just need to execute several external programs in parallel, it is sufficient to use `subprocess` to run the programs and the `threading` module to maintain a constant number of processes running, but the simplest code uses `multiprocessing.Pool`:
```
#!/usr/bin/env python
import os
import multiprocessing as mp

def run(filename_def_param):
    filename, def_param = filename_def_param  # unpack arguments
    ...  # call external program on `filename`

def safe_run(*args, **kwargs):
    """Call run(), catch exceptions."""
    try: run(*args, **kwargs)
    except Exception as e:
        print("error: %s run(*%r, **%r)" % (e, args, kwargs))

def main():
    # populate files
    ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
    workdir = os.path.join(ws, r'fieldgen\reals')
    files = ((os.path.join(workdir, f), ws)
             for f in os.listdir(workdir) if f.endswith('.npy'))

    # start processes
    pool = mp.Pool()  # use all available CPUs
    pool.map(safe_run, files)

if __name__=="__main__":
    mp.freeze_support()  # optional if the program is not frozen
    main()
```
If there are many files then `pool.map()` could be replaced by `for _ in pool.imap_unordered(safe_run, files): pass`.
There is also `multiprocessing.dummy.Pool`, which provides the same interface as `multiprocessing.Pool` but uses threads instead of processes; that might be more appropriate in this case.
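A minimal sketch of that thread-based variant, reusing `safe_run` and `files` from the example above:
```
from multiprocessing.dummy import Pool  # same interface as multiprocessing.Pool, backed by threads

pool = Pool(8)  # 8 worker threads
pool.map(safe_run, files)
pool.close()
pool.join()
```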
You don't need to keep some CPUs free. Just use a command that starts your executables with a low priority (on Linux it is the `nice` program).
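For example, a sketch of how the external call could be wrapped on Linux (`exe` and `swt_nam` are the names from the earlier snippets):
```
import subprocess

# prepend `nice` so the external program runs at low priority (Linux)
subprocess.check_call(['nice', '-n', '19', exe, swt_nam])
```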
### [`ThreadPoolExecutor` example](http://docs.python.org/dev/library/concurrent.futures.html#threadpoolexecutor-example)
[`concurrent.futures.ThreadPoolExecutor`](http://docs.python.org/dev/library/concurrent.futures.html) would be both simple and sufficient but it requires [3rd-party dependency on Python 2.x](http://pypi.python.org/pypi/futures/) (it is in the stdlib since Python 3.2).
```
#!/usr/bin/env python
import os
import concurrent.futures

def run(filename, def_param):
    ...  # call external program on `filename`

# populate files
ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
wdir = os.path.join(ws, r'fieldgen\reals')
files = (os.path.join(wdir, f) for f in os.listdir(wdir) if f.endswith('.npy'))

# start threads
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
    future_to_file = dict((executor.submit(run, f, ws), f) for f in files)
    for future in concurrent.futures.as_completed(future_to_file):
        f = future_to_file[future]
        if future.exception() is not None:
            print('%r generated an exception: %s' % (f, future.exception()))
        # run() doesn't return anything so `future.result()` is always `None`
```
Or if we ignore exceptions raised by `run()`:
```
from itertools import repeat
...  # the same

# start threads
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
    executor.map(run, files, repeat(ws))
    # run() doesn't return anything so `map()` results can be ignored
```
### `subprocess` + `threading` (manual pool) solution
```
#!/usr/bin/env python
from __future__ import print_function
import os
import subprocess
import sys
from Queue import Queue
from threading import Thread

def run(filename, def_param):
    ...  # define exe, swt_nam
    subprocess.check_call([exe, swt_nam])  # run external program

def worker(queue):
    """Process files from the queue."""
    for args in iter(queue.get, None):
        try:
            run(*args)
        except Exception as e:  # catch exceptions to avoid exiting the
                                # thread prematurely
            print('%r failed: %s' % (args, e,), file=sys.stderr)

# start threads
q = Queue()
threads = [Thread(target=worker, args=(q,)) for _ in range(8)]
for t in threads:
    t.daemon = True  # threads die if the program dies
    t.start()

# populate files
ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a'
wdir = os.path.join(ws, r'fieldgen\reals')
for f in os.listdir(wdir):
    if f.endswith('.npy'):
        q.put_nowait((os.path.join(wdir, f), ws))

for _ in threads: q.put_nowait(None)  # signal no more files
for t in threads: t.join()            # wait for completion
```
|
Here is my way to maintain a minimum number (x) of threads in memory. It's a combination of the threading and multiprocessing modules. It may be unusual compared to the techniques respected fellow members have explained above, but it may be worth considering. For the sake of explanation, I am taking a scenario of crawling a minimum of 5 websites at a time.
So here it is:
```
# importing dependencies.
from multiprocessing import Process
from threading import Thread
import threading

# Crawler function
def crawler(domain):
    # define crawler technique here.
    output.write(scrapeddata + "\n")
    pass
```
Next is the threadController function. This function controls the flow of threads into main memory. It keeps activating threads to maintain the threadNum "minimum" limit, i.e. 5. It also won't exit until all active threads (activeCount) have finished up.
It maintains a minimum of threadNum (5) startProcess function threads (these threads will eventually start the Processes from the processList while joining them with a timeout of 60 seconds). After starting threadController, there would be 2 threads which are not included in the above limit of 5, i.e. the main thread and the threadController thread itself. That's why threading.activeCount() != 2 has been used.
```
def threadController():
    print "Thread count before child thread starts is:-", threading.activeCount(), len(processList)
    # starting the first thread. This will make the activeCount=3
    Thread(target = startProcess).start()
    # loop while the process list is not empty OR active threads have not finished up.
    while len(processList) != 0 or threading.activeCount() != 2:
        if (threading.activeCount() < (threadNum + 2) and  # if the count of active threads is less than the minimum AND
                len(processList) != 0):                    # processList is not empty
            Thread(target = startProcess).start()  # this line starts the startProcess function as a separate thread **
```
The startProcess function, as a separate thread, starts Processes from the processList. The purpose of this function (** started as a different thread) is that it becomes a parent thread for the Processes. So when it joins them with a timeout of 60 seconds, this stops the startProcess thread from moving ahead, but it won't stop threadController from performing. This way, threadController will work as required.
```
def startProcess():
    pr = processList.pop(0)
    pr.start()
    pr.join(60.00)  # joining the thread with a timeout of 60 seconds as a float.

if __name__ == '__main__':
    # a file holding a list of domains
    domains = open("Domains.txt", "r").read().split("\n")
    output = open("test.txt", "a")

    processList = []  # process list
    threadNum = 5     # number of thread-initiated processes to be run at one time

    # making the process list
    for r in range(0, len(domains), 1):
        domain = domains[r].strip()
        p = Process(target = crawler, args = (domain,))
        processList.append(p)  # making a list of performer processes.

    # starting the threadController as a separate thread.
    mt = Thread(target = threadController)
    mt.start()
    mt.join()  # won't let go next until the threadController thread finishes.

    output.close()
    print "Done"
```
Besides maintaining a minimum number of threads in memory, my aim was also to have something that could avoid stuck threads or processes in memory. I did this using the timeout function.
My apologies for any typing mistakes.
I hope this construction helps anyone in this world.
Regards,
Vikas Gautam
|
34,889,737
|
I am a new programmer who just learned Python, so I made a small program just for fun:
```
year = int(input("Year Of Birth: "), 10)
if year < 2016:
    print("You passed! You may continue!")
else:
    break
```
I searched this up and tried using `break`, but I got an error saying `break` is outside of a loop. Can anyone give a simple solution? Thanks.
|
2016/01/20
|
[
"https://Stackoverflow.com/questions/34889737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4584838/"
] |
`Break` is used in a `loop`. For example:
```
i = 3
while True:
    i = i + 1  # increments i by 1
    print(i)
    if i == 5:
        break
print("Done!")
```
Normally, `while True` will execute indefinitely because `True` is always equal to `True`. (This is equivalent to something like `1==1`). The `break` statement tells Python to stop executing the loop right then and there. Then, it will continue executing code where the while loop left off.
*The output of this program will be:*
```
4
5
Done!
```
First, `i=3`. Then, in the `while` loop, `i` becomes `4`, and `4` is outputted to the console. Since `i` is not equal to `5`, it does not `break`.
Now, `i=4`. `i` increments to `5` and is printed to the console. This time, `i==5`, so it `break`s. Then, it prints `Done!` because that line is after the `while` loop we just broke out of.
|
`break` is used inside a loop (something like `while` or `for`) to get out of that loop, but you are trying to use it in flow-control statements, i.e. `if`, `elif` and `else`.
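A minimal sketch of how the original program could be wrapped in a loop so that `break` becomes legal (the ask-again-until-valid behavior is an assumption):
```
while True:
    year = int(input("Year Of Birth: "), 10)
    if year < 2016:
        print("You passed! You may continue!")
        break  # legal here: we are inside the while loop
    # otherwise the loop repeats and asks again
```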
|
68,940,884
|
I have written code in JavaScript that I don't know how to write in Python. Its use is rather simple. There are 2 values: Buy and Sell. E.g., given Buy, Buy, Sell, Buy, Sell, Sell,
the JavaScript gives me a new list with Buy, Sell, Buy, Sell.
```
const fs = require('fs');

class Trade {
    price;
    side;
}

const array =

console.log('AllTrades:');
console.log(array);

let trades = [];
for(let i = array.length - 1; i >= 0; i--) {
    let trade = array[i];
    let nextTrade = array[i - 1];
    if(nextTrade) {
        if(trade[0] == "BUY" && nextTrade[0] == "SELL") {
            trades[i] = trade;
        }
        if(trade[0] == "SELL" && nextTrade[0] == "BUY") {
            trades[i] = trade;
        }
    } else if(trade[0] == "BUY") {
        trades[i] = trade;
    }
}

trades = trades.filter(function (i) { return i != undefined; });

console.log('D Trades:');
for(let i = 0; i < trades.length; i++) {
    console.log(trades[i]);
}

fs.writeFile('./data/trades.json', JSON.stringify(trades), (err, data) => {
    if (err) return console.log(err);
    console.log('Trades > trades.txt');
});
```
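Since the question asks for a Python version, here is a minimal sketch of the same filtering logic (the sample data and output filename are hypothetical; the original `array` value is elided in the JavaScript):
```
import json

# hypothetical sample data in the same [side, ...] shape as the JS array
array = [["BUY", 1.0], ["BUY", 1.1], ["SELL", 1.2],
         ["BUY", 1.0], ["SELL", 1.3], ["SELL", 1.1]]

trades = []
for i, trade in enumerate(array):
    prev = array[i - 1] if i > 0 else None
    if prev is None:
        if trade[0] == "BUY":  # the first trade is kept only if it is a BUY
            trades.append(trade)
    elif trade[0] != prev[0]:  # keep a trade whenever the side flips
        trades.append(trade)

print('D Trades:')
for t in trades:
    print(t)

with open('trades.json', 'w') as f:
    json.dump(trades, f)
```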
|
2021/08/26
|
[
"https://Stackoverflow.com/questions/68940884",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16480780/"
] |
You can try:
1. `stack` all the dates into one column
2. `drop_duplicates`: effectively getting rid of equal end and start dates
3. Keep every second row and restructure the result
```
df = df.sort_values(["category", "start", "end"])
stacked = df.set_index("category").stack().droplevel(-1).rename("start")
output = stacked.reset_index().drop_duplicates(keep=False).set_index("category")
output["end"] = output["start"].shift(-1)
output = output.iloc[range(0, len(output.index), 2)]
>>> output
              start         end
category
A        2015-01-01  2016-12-01
B        2016-01-01  2016-07-01
B        2018-01-01  2018-08-01
```
|
You could do it like this:
**Sample data:**
```
import pandas as pd
d = {'category': {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B'}, 'start': {0: '2015-01-01', 1: '2016-01-01', 2: '2016-06-01', 3: '2016-01-01', 4: '2018-01-01'}, 'end': {0: '2016-01-01', 1: '2016-06-01', 2: '2016-12-01', 3: '2016-07-01', 4: '2018-08-01'}}
df = pd.DataFrame(d)
```
**Code:**
```
# Create a series of booleans, essentially marking the start of a new continuous group
df = df.sort_values(['category', 'start', 'end'])
mask_continue = ~df['start'].eq(df['end'].shift())
mask_group = ~df['category'].eq(df['category'].shift())
df['group'] = (mask_continue | mask_group).cumsum()
# group by the group generated above and get the first and last value of each
g = df.groupby('group')
g_start = g.head(1).set_index('group')[['category', 'start']]
g_end = g.tail(1).set_index('group')['end']
# combine both
g_start['end'] = g_end
```
**Output:**
```
print(g_start)
      category       start         end
group
1            A  2015-01-01  2016-12-01
2            B  2016-01-01  2016-07-01
3            B  2018-01-01  2018-08-01
```
|
57,665,576
|
I'm new to Nix and would like to be able to install openconnect with it.
Darwin seems to be unsupported at this time via Nix, though openconnect can be installed with either brew or MacPorts.
I've tried both a basic install and allowing unsupported packages. I didn't see anything on Google or Stack Overflow yet that helped.
Basic Install results
---------------------
```sh
$ nix-env -i openconnect
warning: there are multiple derivations named 'openconnect-8.03'; using the first one
installing 'openconnect-8.03'
error: Package ‘«name-missing»’ in /nix/store/bsxpbjipz85ws3wbznhpgc38779pzak5-nixpkgs-19.09pre186574.88d9f776091/nixpkgs/pkgs/tools/networking/openconnect/default.nix:28 is not supported on ‘x86_64-darwin’, refusing to evaluate.
a) For `nixos-rebuild` you can set
{ nixpkgs.config.allowUnsupportedSystem = true; }
in configuration.nix to override this.
b) For `nix-env`, `nix-build`, `nix-shell` or any other Nix command you can add
{ allowUnsupportedSystem = true; }
to ~/.config/nixpkgs/config.nix.
(use '--show-trace' to show detailed location information)
```
Allow Unsupported Results
-------------------------
```sh
$ NIXPKGS_ALLOW_UNSUPPORTED_SYSTEM=1 nix-env -i openconnect
warning: there are multiple derivations named 'openconnect-8.03'; using the first one
installing 'openconnect-8.03'
these derivations will be built:
/nix/store/rf9afn87lch36bj7q61ks6rr3igsbv83-builder.pl.drv
/nix/store/25pr0lcpbgdx78vpv2jb99sngzwq6b44-nettools-1003.1-2008.drv
/nix/store/v9i3z3bh3jjzs80wj4jbb5vn37mwflah-openresolv-3.9.0.drv
/nix/store/1i8i9qww079gm1l82bnggkpxphg27v9w-vpnc-0.5.3-post-r550.drv
/nix/store/1fzxm6cdd5d2mgizkvqqx0zi0vg9ilpk-openconnect-8.03.drv
these paths will be fetched (157.54 MiB download, 811.71 MiB unpacked):
/nix/store/02cjbymr9q0abw1vdy9i4c67dklgq4iq-autogen-5.18.12-lib
/nix/store/09cdrnckixb7i5fsv2fc33rd7p5zx0n8-libjpeg-turbo-2.0.2
/nix/store/0cifmxw0r6lijng796a3z3nwq67ma5b3-llvm-7.1.0-lib
/nix/store/0x4jdm1dhb593bpgazqclg7np5mb9yp2-vpnc-r550
/nix/store/1d1xq0r2zg4r1mk8f1ydfygw19i84lpq-fontconfig-2.12.6
/nix/store/1ki1m09g8wnd5vzadcxh0pjwfz27zk8z-expand-response-params
/nix/store/1qkqjb02khxr13q8hhwicznz5zsvjvzv-gnused-4.7
/nix/store/28ciaclbgwii11yna529xim4w1cnc7bn-expat-2.2.7
/nix/store/29alwmfhca3y5gj0fki5raahi2s6330n-nettle-3.4.1-dev
/nix/store/2ia88csiygmwq9bprdlwdpj9ajnjw1wd-gtk+3-3.24.8
/nix/store/2pj2hdfy8jqy41hbh5h6z7sqhmcpi7xy-cctools-binutils-darwin
/nix/store/2x82drnhq64vpw0pj4gkgid0qhm7f20q-gnutls-3.6.8-dev
/nix/store/33whmhh30l2c1cx65pakygv520fjsi96-jasper-2.0.16
/nix/store/4366mxm4p1032s9s8l6pyzwz56xr9inb-libxml2-2.9.9-py
/nix/store/49nmvn7iaxlahhc66w8b4y9j66n8fjlb-autogen-5.18.12
/nix/store/4mjyhc65kdri28g59gpnjvimjkb44vpy-gawk-4.2.1
/nix/store/4zn384psyqihmm6fnmavxw5w3509qgnw-Security-osx-10.9.5
...
configuring
building
build flags: SHELL=/nix/store/d9wxsyxsj3ssjcbywwm35254d14yv87a-bash-4.4-p23/bin/bash
sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \
-e 's:@VARDIR@:/run/resolvconf:g' \
-e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \
resolvconf.in > resolvconf
sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \
-e 's:@VARDIR@:/run/resolvconf:g' \
-e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \
resolvconf.8.in > resolvconf.8
sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \
-e 's:@VARDIR@:/run/resolvconf:g' \
-e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \
resolvconf.conf.5.in > resolvconf.conf.5
sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \
-e 's:@VARDIR@:/run/resolvconf:g' \
-e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \
libc.in > libc
sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \
-e 's:@VARDIR@:/run/resolvconf:g' \
-e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \
dnsmasq.in > dnsmasq
sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \
-e 's:@VARDIR@:/run/resolvconf:g' \
-e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \
named.in > named
sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \
-e 's:@VARDIR@:/run/resolvconf:g' \
-e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \
pdnsd.in > pdnsd
sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \
-e 's:@VARDIR@:/run/resolvconf:g' \
-e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \
unbound.in > unbound
installing
install flags: SHELL=/nix/store/d9wxsyxsj3ssjcbywwm35254d14yv87a-bash-4.4-p23/bin/bash SYSCONFDIR=\$\(out\)/etc install
install -d /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin
install -m 0755 resolvconf /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin
install -d /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/etc
test -e /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/etc/resolvconf.conf || \
install -m 0644 resolvconf.conf /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/etc
install -d /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf
install -m 0644 libc dnsmasq named pdnsd unbound /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf
install -d /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/share/man/man8
install -m 0444 resolvconf.8 /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/share/man/man8
install -d /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/share/man/man5
install -m 0444 resolvconf.conf.5 /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/share/man/man5
post-installation fixup
gzipping man pages under /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/share/man/
strip is /nix/store/2pj2hdfy8jqy41hbh5h6z7sqhmcpi7xy-cctools-binutils-darwin/bin/strip
stripping (with command strip and flags -S) in /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin
patching script interpreter paths in /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0
/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin/.resolvconf-wrapped: interpreter directive changed from "/bin/sh" to "/nix/store/d9wxsyxsj3ssjcbywwm35254d14yv87a-bash-4.4-p23/bin/sh"
moving /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin/* to /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/bin
building '/nix/store/25pr0lcpbgdx78vpv2jb99sngzwq6b44-nettools-1003.1-2008.drv'...
created 6 symlinks in user environment
building '/nix/store/1i8i9qww079gm1l82bnggkpxphg27v9w-vpnc-0.5.3-post-r550.drv'...
unpacking sources
unpacking source archive /nix/store/0x4jdm1dhb593bpgazqclg7np5mb9yp2-vpnc-r550
source root is vpnc-r550
patching sources
applying patch /nix/store/cjmd5815vwl3kfhm8f5yzrnd0zcms7ki-makefile.patch
patching file Makefile
Hunk #1 succeeded at 20 with fuzz 1.
Hunk #2 succeeded at 82 with fuzz 2 (offset 11 lines).
applying patch /nix/store/9lq5151d73q7m5d291qayz73828mky1f-no_default_route_when_netmask.patch
patching file vpnc-script
configuring
substituteStream(): WARNING: pattern '/usr/bin/perl' doesn't match anything in file 'pcf2vpnc'
no configure script, doing nothing
...
clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o config.o config.c
config.c:212:6: warning: variable 'pass' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
if (WEXITSTATUS(status) != 0) {
^~~~~~~~~~~~~~~~~~~~~~~~
/nix/store/bhywddq18j79xr274n45byvqjb8fs52j-Libsystem-osx-10.12.6/include/sys/wait.h:144:24: note: expanded from macro 'WEXITSTATUS'
#define WEXITSTATUS(x) ((_W_INT(x) >> 8) & 0x000000ff)
^
config.c:242:9: note: uninitialized use occurs here
return pass;
^~~~
config.c:212:2: note: remove the 'if' if its condition is always false
if (WEXITSTATUS(status) != 0) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
config.c:207:6: warning: variable 'pass' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
if (!WIFEXITED(status)) {
^~~~~~~~~~~~~~~~~~
config.c:242:9: note: uninitialized use occurs here
return pass;
^~~~
config.c:207:2: note: remove the 'if' if its condition is always false
if (!WIFEXITED(status)) {
^~~~~~~~~~~~~~~~~~~~~~~~~
config.c:204:6: warning: variable 'pass' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
if (r == -1)
^~~~~~~
config.c:242:9: note: uninitialized use occurs here
return pass;
^~~~
config.c:204:2: note: remove the 'if' if its condition is always false
if (r == -1)
^~~~~~~~~~~~
config.c:177:6: warning: variable 'pass' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
if (pid == -1)
^~~~~~~~~
config.c:242:9: note: uninitialized use occurs here
return pass;
^~~~
config.c:177:2: note: remove the 'if' if its condition is always false
if (pid == -1)
^~~~~~~~~~~~~~
config.c:173:6: warning: variable 'pass' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
if (pipe(fds) == -1)
^~~~~~~~~~~~~~~
config.c:242:9: note: uninitialized use occurs here
return pass;
^~~~
config.c:173:2: note: remove the 'if' if its condition is always false
if (pipe(fds) == -1)
^~~~~~~~~~~~~~~~~~~~
config.c:170:12: note: initialize the variable 'pass' to silence this warning
char *pass;
^
= NULL
5 warnings generated.
clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o dh.o dh.c
clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o math_group.o math_group.c
clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o supp.o supp.c
clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o decrypt-utils.o decrypt-utils.c
clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o crypto.o crypto.c
clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o crypto-openssl.o crypto-openssl.c
crypto-openssl.c:66:30: warning: unused parameter 'buf' [-Wunused-parameter]
static int password_cb(char *buf, int size, int rwflag, void *userdata)
^
crypto-openssl.c:66:39: warning: unused parameter 'size' [-Wunused-parameter]
static int password_cb(char *buf, int size, int rwflag, void *userdata)
^
crypto-openssl.c:66:49: warning: unused parameter 'rwflag' [-Wunused-parameter]
static int password_cb(char *buf, int size, int rwflag, void *userdata)
^
crypto-openssl.c:66:63: warning: unused parameter 'userdata' [-Wunused-parameter]
static int password_cb(char *buf, int size, int rwflag, void *userdata)
^
4 warnings generated.
clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o vpnc.o vpnc.c
vpnc.c:1901:39: warning: 'memset' call operates on objects of type 'unsigned char' while the size is based on a different type 'unsigned char *' [-Wsizeof-pointer-memaccess]
memset(dh_shared_secret, 0, sizeof(dh_shared_secret));
~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~
vpnc.c:1901:39: note: did you mean to provide an explicit length?
memset(dh_shared_secret, 0, sizeof(dh_shared_secret));
^~~~~~~~~~~~~~~~
1 warning generated.
...
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether UID '501' is supported by ustar format... yes
checking whether GID '20' is supported by ustar format... yes
checking how to create a ustar tar archive... gnutar
checking whether make supports nested variables... (cached) yes
checking for style of include used by make... GNU
checking for gcc... clang
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether clang accepts -g... yes
checking for clang option to accept ISO C89... none needed
checking whether clang understands -c and -o together... yes
checking dependency style of clang... none
checking for fdevname_r... no
checking for statfs... yes
checking for getline... yes
checking for strcasestr... yes
checking for strndup... yes
checking for asprintf... yes
checking for vasprintf... yes
checking for supported compiler flags... -Wall -Wextra -Wno-missing-field-initializers -Wno-sign-compare -Wno-unused-parameter -Werror=pointer-to-int-cast -Wdeclaration-after-statement -Werror-implicit-function-declaration -Wformat-nonliteral -Wformat-security -Winit-self -Wmissing-declarations -Wmissing-include-dirs -Wnested-externs -Wpointer-arith -Wwrite-strings
checking For memset_s... yes
checking for socket... yes
checking for inet_aton... yes
checking for IPV6_PATHMTU socket option... no
checking for __android_log_vprint... no
checking for __android_log_vprint in -llog... no
checking for nl_langinfo... yes
checking for ld used by clang... ld
checking if the linker (ld) is GNU ld... no
checking for shared library run path origin... done
checking how to run the C preprocessor... clang -E
checking for grep that handles long lines and -e... /nix/store/bzr287y1sy2qsrmywwkgxlkblz7vx61w-gnugrep-3.3/bin/grep
checking for egrep... /nix/store/bzr287y1sy2qsrmywwkgxlkblz7vx61w-gnugrep-3.3/bin/grep -E
checking for iconv... no, consider installing GNU libiconv
checking for GNUTLS... yes
checking for gnutls_system_key_add_x509... yes
checking for gnutls_pkcs11_add_provider... yes
checking for P11KIT... no
checking for tss library... no
checking for TASN1... no
checking for LIBLZ4... no
configure: WARNING:
***
*** lz4 not found.
***
checking for egrep... (cached) /nix/store/bzr287y1sy2qsrmywwkgxlkblz7vx61w-gnugrep-3.3/bin/grep -E
checking how to print strings... printf
checking for a sed that does not truncate output... /nix/store/1qkqjb02khxr13q8hhwicznz5zsvjvzv-gnused-4.7/bin/sed
checking for fgrep... /nix/store/bzr287y1sy2qsrmywwkgxlkblz7vx61w-gnugrep-3.3/bin/grep -F
checking for ld used by clang... ld
checking if the linker (ld) is GNU ld... no
checking for BSD- or MS-compatible name lister (nm)... nm
checking the name lister (nm) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 196608
checking how to convert x86_64-apple-darwin18.7.0 file names to x86_64-apple-darwin18.7.0 format... func_convert_file_noop
checking how to convert x86_64-apple-darwin18.7.0 file names to toolchain format... func_convert_file_noop
checking for ld option to reload object files... -r
checking for objdump... no
checking how to recognize dependent libraries... pass_all
checking for dlltool... no
checking how to associate runtime and link libraries... printf %s\n
checking for archiver @FILE support... no
checking for strip... strip
checking for ranlib... ranlib
checking command to parse nm output from clang object... ok
checking for sysroot... no
checking for a working dd... /nix/store/6964byz5cbs03s8zckqn72i6zq12ipqv-coreutils-8.31/bin/dd
checking how to truncate binary pipes... /nix/store/6964byz5cbs03s8zckqn72i6zq12ipqv-coreutils-8.31/bin/dd bs=4096 count=1
checking for mt... no
checking if : is a manifest tool... no
checking for dsymutil... dsymutil
checking for nmedit... no
checking for lipo... lipo
checking for otool... otool
checking for otool64... no
checking for -single_module linker flag... yes
checking for -exported_symbols_list linker flag... yes
checking for -force_load linker flag... yes
checking for dlfcn.h... yes
checking for objdir... .libs
checking if clang supports -fno-rtti -fno-exceptions... yes
checking for clang option to produce PIC... -fno-common -DPIC
checking if clang PIC flag -fno-common -DPIC works... yes
checking if clang static flag -static works... no
checking if clang supports -c -o file.o... yes
checking if clang supports -c -o file.o... (cached) yes
checking whether the clang linker (ld) supports shared libraries... yes
checking dynamic linker characteristics... darwin18.7.0 dyld
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... no
checking linker version script flag... unsupported
checking for LIBXML2... yes
checking for ZLIB... yes
checking for LIBPROXY... no
checking for libproxy... no
checking for LIBSTOKEN... yes
checking for LIBPSKC... no
checking for krb5-config... no
checking gssapi/gssapi.h usability... no
checking gssapi/gssapi.h presence... no
checking for gssapi/gssapi.h... no
checking gssapi.h usability... no
checking gssapi.h presence... no
checking for gssapi.h... no
configure: WARNING: Cannot find <gssapi/gssapi.h> or <gssapi.h>
configure: WARNING: Building without GSSAPI support
checking if_tun.h usability... no
checking if_tun.h presence... no
checking for if_tun.h... no
checking linux/if_tun.h usability... no
checking linux/if_tun.h presence... no
checking for linux/if_tun.h... no
checking net/if_tun.h usability... no
checking net/if_tun.h presence... no
checking for net/if_tun.h... no
checking net/tun/if_tun.h usability... no
checking net/tun/if_tun.h presence... no
checking for net/tun/if_tun.h... no
checking for net/if_utun.h... yes
checking alloca.h usability... yes
checking alloca.h presence... yes
checking for alloca.h... yes
checking endian.h usability... no
checking endian.h presence... no
checking for endian.h... no
checking sys/endian.h usability... no
checking sys/endian.h presence... no
checking for sys/endian.h... no
checking sys/isa_defs.h usability... no
checking sys/isa_defs.h presence... no
checking for sys/isa_defs.h... no
checking for python3... no
checking for python2... no
checking for python... /usr/bin/python
checking if groff can create UTF-8 XHTML... no. Not building HTML pages
checking for CWRAP... no
checking for nuttcp... no
checking that generated files are newer than configure... done
configure: creating ./config.status
config.status: creating Makefile
config.status: creating openconnect.pc
config.status: creating po/Makefile
config.status: creating www/Makefile
config.status: creating libopenconnect.map
config.status: creating openconnect.8
config.status: creating www/styles/Makefile
config.status: creating www/inc/Makefile
config.status: creating www/images/Makefile
config.status: creating tests/Makefile
config.status: creating tests/softhsm2.conf
config.status: creating tests/configs/test-user-cert.config
config.status: creating tests/configs/test-user-pass.config
config.status: creating config.h
config.status: executing depfiles commands
config.status: executing libtool commands
BUILD OPTIONS:
SSL library: GnuTLS
PKCS#11 support: no
DTLS support: yes
ESP support: yes
libproxy support: no
RSA SecurID support: yes
PSKC OATH file support: no
GSSAPI support: no
Yubikey support: yes
LZ4 compression: no
Java bindings: no
Build docs: no
Unit tests: no
Net namespace tests: no
building
build flags: SHELL=/nix/store/d9wxsyxsj3ssjcbywwm35254d14yv87a-bash-4.4-p23/bin/bash
make all-recursive
make[1]: Entering directory '/private/var/folders/6j/j96x43893xd3vst_44_w7pqm0000gn/T/nix-build-openconnect-8.03.drv-0/openconnect-8.03'
CC libopenconnect_la-version.lo
CC libopenconnect_la-ssl.lo
CC libopenconnect_la-http.lo
CC libopenconnect_la-http-auth.lo
CC libopenconnect_la-auth-common.lo
CC libopenconnect_la-library.lo
CC libopenconnect_la-compat.lo
CC libopenconnect_la-lzs.lo
CC libopenconnect_la-mainloop.lo
CC libopenconnect_la-script.lo
CC libopenconnect_la-ntlm.lo
CC libopenconnect_la-digest.lo
CC libopenconnect_la-oncp.lo
CC libopenconnect_la-lzo.lo
CC libopenconnect_la-auth-juniper.lo
CC libopenconnect_la-esp.lo
CC libopenconnect_la-esp-seqno.lo
CC libopenconnect_la-gnutls-esp.lo
CC libopenconnect_la-auth.lo
CC libopenconnect_la-cstp.lo
CC libopenconnect_la-dtls.lo
CC libopenconnect_la-gnutls-dtls.lo
CC libopenconnect_la-oath.lo
CC libopenconnect_la-gpst.lo
CC libopenconnect_la-auth-globalprotect.lo
CC libopenconnect_la-yubikey.lo
yubikey.c:63:10: fatal error: 'PCSC/wintypes.h' file not found
#include <PCSC/wintypes.h>
^~~~~~~~~~~~~~~~~
1 error generated.
make[1]: *** [Makefile:1163: libopenconnect_la-yubikey.lo] Error 1
make[1]: Leaving directory '/private/var/folders/6j/j96x43893xd3vst_44_w7pqm0000gn/T/nix-build-openconnect-8.03.drv-0/openconnect-8.03'
make: *** [Makefile:697: all] Error 2
builder for '/nix/store/1fzxm6cdd5d2mgizkvqqx0zi0vg9ilpk-openconnect-8.03.drv' failed with exit code 2
error: build of '/nix/store/1fzxm6cdd5d2mgizkvqqx0zi0vg9ilpk-openconnect-8.03.drv' failed
```
Naturally I’d expect the install to succeed.
Does anyone know how to make this kind of install work on macOS and how to configure it via nix?
|
2019/08/26
|
[
"https://Stackoverflow.com/questions/57665576",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1241878/"
] |
Please check this
```
let dataJobs = await page.evaluate(()=>{
var a = document.getElementById("task-listing-datatable").getAttribute("data-tasks"); //Search job list
var ar = eval(a);//Convert string to arr
var keyword = ['Image Annotation', 'What Is The Best Dialogue Category About Phones', 'Label Roads( With Sidewalks) In Images']; //This is the list where I'll find match
  let ret = [];
  ar.forEach(val => { if (keyword.includes(val[1])) { ret.push(`${val[0]} - ${val[1]} - Paga: ${val[3]} - Numero de Tasks: ${val[5]} @everyone`) } })
  return ret;
});
```
**PS: Instead of using `eval`, please use `JSON.parse(a)` to prevent JavaScript code injection.**
|
Why not collect each match into an array as you loop, and return that array at the end? Note that the callback passed to `page.evaluate` runs in the browser context, so a variable declared in Node would not be visible inside it; declare the array inside and return it:
```
let dataJobs = await page.evaluate(() => {
    var a = document.getElementById("task-listing-datatable").getAttribute("data-tasks"); //Search job list
    var ar = eval(a); //Convert string to arr
    var keyword = ['Image Annotation', 'What Is The Best Dialogue Category About Phones', 'Label Roads( With Sidewalks) In Images']; //This is the list where I'll find a match
    let array = [];
    for (let i = 0; i < ar.length; i++) { // search for matches
        for (let j = 0; j < keyword.length; j++) {
            if (ar[i][1] === keyword[j]) {
                let jobMatch = `${ar[i][0]} - ${ar[i][1]} - Paga: ${ar[i][3]} - Numero de Tasks: ${ar[i][5]} @everyone`; // build the match string
                array.push(jobMatch);
            }
        }
    }
    return array; // becomes dataJobs in Node
});
```
|
60,920,262
|
First, I created an environment in Anaconda Navigator in which I installed packages like Keras, TensorFlow and Theano.
Then I activated the environment through Anaconda Navigator and launched the Spyder application.
Now when I try to import Keras, it shows an error stating:
```
Traceback (most recent call last):
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
```
Failed to load the native TensorFlow runtime.
See <https://www.tensorflow.org/install/errors>
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
And it keeps showing this error over and over, as if in an infinite loop.
Where did I go wrong? Can anyone help me with this? [enter image description here](https://i.stack.imgur.com/XCybv.png)
When I run `import theano` it works, but it doesn't in the case of TensorFlow and Keras.
|
2020/03/29
|
[
"https://Stackoverflow.com/questions/60920262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13148984/"
] |
You could use the aggregation below for unique active users, both day-wise and month-wise. I've assumed `ct` is the timestamp field.
```
db.collection.aggregate(
[
{"$match":{"ct":{"$gte":dateFrom,"$lt":dateTo}}},
{"$facet":{
"dau":[
{"$group":{
"_id":{
"user":"$user",
"ymd":{"$dateToString":{"format":"%Y-%m-%d","date":"$ct"}}
}
}},
{"$group":{"_id":"$_id.ymd","dau":{"$sum":1}}}
],
"mau":[
{"$group":{
"_id":{
"user":"$user",
"ym":{"$dateToString":{"format":"%Y-%m","date":"$ct"}}
}
}},
{"$group":{"_id":"$_id.ym","mau":{"$sum":1}}}
]
}}
])
```
DAU
```
db.collection.aggregate(
[
{"$match":{"ct":{"$gte":dateFrom,"$lt":dateTo}}},
{"$group":{
"_id":{
"user":"$user",
"ymd":{"$dateToString":{"format":"%Y-%m-%d","date":"$ct"}}
}
}},
{"$group":{"_id":"$_id.ymd","dau":{"$sum":1}}}
])
```
MAU
```
db.collection.aggregate(
[
{"$match":{"ct":{"$gte":dateFrom,"$lt":dateTo}}},
{"$group":{
"_id":{
"user":"$user",
"ym":{"$dateToString":{"format":"%Y-%m","date":"$ct"}}
}
}},
{"$group":{"_id":"$_id.ym","mau":{"$sum":1}}}
])
```
|
I made a quick test on a large database of events, and counting with a distinct is much faster than an aggregate if you have the right indexes:
```
collection.distinct('user', { ct: { $gte: dateFrom, $lt: dateTo } }).length
```
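For reference, here is a minimal pymongo sketch of the same idea (hedged: the database/collection names and the date range are hypothetical; the compound index is what makes the distinct fast):
```
from datetime import datetime
from pymongo import MongoClient, ASCENDING

collection = MongoClient()["mydb"]["events"]  # hypothetical database/collection names
collection.create_index([("ct", ASCENDING), ("user", ASCENDING)])  # covering index

dateFrom, dateTo = datetime(2020, 3, 1), datetime(2020, 4, 1)
dau = len(collection.distinct("user", {"ct": {"$gte": dateFrom, "$lt": dateTo}}))
print(dau)
```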
|
60,920,262
|
First, I created an environment in Anaconda Navigator in which I installed packages like Keras, TensorFlow and Theano.
Then I activated the environment through Anaconda Navigator and launched the Spyder application.
Now when I try to import Keras, it shows an error stating:
```
Traceback (most recent call last):
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
```
Failed to load the native TensorFlow runtime.
See <https://www.tensorflow.org/install/errors>
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
And it keeps showing this error over and over, as if in an infinite loop.
Where did I go wrong? Can anyone help me with this? [enter image description here](https://i.stack.imgur.com/XCybv.png)
When I run `import theano` it works, but it doesn't in the case of TensorFlow and Keras.
|
2020/03/29
|
[
"https://Stackoverflow.com/questions/60920262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13148984/"
] |
I made a quick test on a large database of events, and counting with a distinct is much faster than an aggregate if you have the right indexes:
```
collection.distinct('user', { ct: { $gte: dateFrom, $lt: dateTo } }).length
```
|
You can use `$sum` at the time of grouping.
```
collection.aggregate([
{ $match: {'date': {$gte: dateFrom, $lt: dateTo }}}, // fetch all requests from/to
{ $group: { _id: '$user', total: { $sum: 1 }}}, // group all requests by user and sum the count of collection for a group
{ $sort: { total: -1 }}
], function (err, result) {
if (err) cb(err, null);
cb(null, result);
});
```
|
60,920,262
|
First, I created an environment in Anaconda Navigator in which I installed packages like Keras, TensorFlow and Theano.
Then I activated the environment through Anaconda Navigator and launched the Spyder application.
Now when I try to import Keras, it shows an error stating:
```
Traceback (most recent call last):
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
```
Failed to load the native TensorFlow runtime.
See <https://www.tensorflow.org/install/errors>
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
And it keeps showing this error over and over, as if in an infinite loop.
Where did I go wrong? Can anyone help me with this? [enter image description here](https://i.stack.imgur.com/XCybv.png)
When I run `import theano` it works, but it doesn't in the case of TensorFlow and Keras.
|
2020/03/29
|
[
"https://Stackoverflow.com/questions/60920262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13148984/"
] |
You could use the aggregation below for unique active users, both day-wise and month-wise. I've assumed `ct` is the timestamp field.
```
db.collection.aggregate(
[
{"$match":{"ct":{"$gte":dateFrom,"$lt":dateTo}}},
{"$facet":{
"dau":[
{"$group":{
"_id":{
"user":"$user",
"ymd":{"$dateToString":{"format":"%Y-%m-%d","date":"$ct"}}
}
}},
{"$group":{"_id":"$_id.ymd","dau":{"$sum":1}}}
],
"mau":[
{"$group":{
"_id":{
"user":"$user",
"ym":{"$dateToString":{"format":"%Y-%m","date":"$ct"}}
}
}},
{"$group":{"_id":"$_id.ym","mau":{"$sum":1}}}
]
}}
])
```
DAU
```
db.collection.aggregate(
[
{"$match":{"ct":{"$gte":dateFrom,"$lt":dateTo}}},
{"$group":{
"_id":{
"user":"$user",
"ymd":{"$dateToString":{"format":"%Y-%m-%d","date":"$ct"}}
}
}},
{"$group":{"_id":"$_id.ymd","dau":{"$sum":1}}}
])
```
MAU
```
db.collection.aggregate(
[
{"$match":{"ct":{"$gte":dateFrom,"$lt":dateTo}}},
{"$group":{
"_id":{
"user":"$user",
"ym":{"$dateToString":{"format":"%Y-%m","date":"$ct"}}
}
}},
{"$group":{"_id":"$_id.ym","mau":{"$sum":1}}}
])
```
|
You can use `$sum` at the time of grouping.
```
collection.aggregate([
{ $match: {'date': {$gte: dateFrom, $lt: dateTo }}}, // fetch all requests from/to
{ $group: { _id: '$user', total: { $sum: 1 }}}, // group all requests by user and sum the count of collection for a group
{ $sort: { total: -1 }}
], function (err, result) {
if (err) cb(err, null);
cb(null, result);
});
```
|
51,549,234
|
I'm trying to spin up an EMR cluster with a Spark step using a Lambda function.
Here is my lambda function (python 2.7):
```
import boto3
def lambda_handler(event, context):
conn = boto3.client("emr")
cluster_id = conn.run_job_flow(
Name='LSR Batch Testrun',
ServiceRole='EMR_DefaultRole',
JobFlowRole='EMR_EC2_DefaultRole',
VisibleToAllUsers=True,
LogUri='s3n://aws-logs-171256445476-ap-southeast-2/elasticmapreduce/',
ReleaseLabel='emr-5.16.0',
Instances={
"Ec2SubnetId": "<my-subnet>",
'InstanceGroups': [
{
'Name': 'Master nodes',
'Market': 'ON_DEMAND',
'InstanceRole': 'MASTER',
'InstanceType': 'm3.xlarge',
'InstanceCount': 1,
},
{
'Name': 'Slave nodes',
'Market': 'ON_DEMAND',
'InstanceRole': 'CORE',
'InstanceType': 'm3.xlarge',
'InstanceCount': 2,
}
],
'KeepJobFlowAliveWhenNoSteps': False,
'TerminationProtected': False
},
Applications=[{
'Name': 'Spark',
'Name': 'Hive'
}],
Configurations=[
{
"Classification": "hive-site",
"Properties": {
"hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
}
},
{
"Classification": "spark-hive-site",
"Properties": {
"hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
}
}
],
Steps=[{
'Name': 'mystep',
'ActionOnFailure': 'TERMINATE_CLUSTER',
'HadoopJarStep': {
'Jar': 's3://elasticmapreduce/libs/script-runner/script-runner.jar',
'Args': [
"/home/hadoop/spark/bin/spark-submit", "--deploy-mode", "cluster",
"--master", "yarn-cluster", "--class", "org.apache.spark.examples.SparkPi",
"s3://support.elasticmapreduce/spark/1.2.0/spark-examples-1.2.0-hadoop2.4.0.jar", "10"
]
}
}],
)
return "Started cluster {}".format(cluster_id)
```
The cluster is starting up, but it fails when trying to execute the step. The error log contains the following exception:
```
Exception in thread "main" java.lang.RuntimeException: Local file does not exist.
at com.amazon.elasticmapreduce.scriptrunner.ScriptRunner.fetchFile(ScriptRunner.java:30)
at com.amazon.elasticmapreduce.scriptrunner.ScriptRunner.main(ScriptRunner.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
```
So it seems the script-runner does not understand that it should pick up the .jar file from S3?
Any help appreciated...
|
2018/07/27
|
[
"https://Stackoverflow.com/questions/51549234",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5175894/"
] |
I was eventually able to solve the problem. The main issue was the broken "Applications" configuration, which has to look like the following instead:
```
Applications=[{
'Name': 'Spark'
},
{
'Name': 'Hive'
}],
```
The final Steps element:
```
Steps=[{
'Name': 'lsr-step1',
'ActionOnFailure': 'TERMINATE_CLUSTER',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
"spark-submit", "--class", "org.apache.spark.examples.SparkPi",
"s3://support.elasticmapreduce/spark/1.2.0/spark-examples-1.2.0-hadoop2.4.0.jar", "10"
]
}
}]
```
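If it helps while iterating, here is a hedged sketch for polling the step status after launch (it reuses the `conn` boto3 EMR client from the question; `run_job_flow` returns the cluster id under `'JobFlowId'`):
```
import time

cluster = cluster_id['JobFlowId']  # cluster_id is the run_job_flow() response dict
while True:
    steps = conn.list_steps(ClusterId=cluster)['Steps']
    states = [s['Status']['State'] for s in steps]
    print(states)
    if all(state in ('COMPLETED', 'FAILED', 'CANCELLED') for state in states):
        break
    time.sleep(30)
```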
|
Not all EMR images come pre-built with the ability to copy your jar or script from S3, so you may need to do that in a bootstrap step:
```
BootstrapActions=[
{
'Name': 'Install additional components',
'ScriptBootstrapAction': {
'Path': code_dir + '/scripts' + '/emr_bootstrap.sh'
}
}
],
```
And here is what my bootstrap does
```
#!/bin/bash
HADOOP="/home/hadoop"
BUCKET="s3://<yourbucket>/<path>"
# Sync jars libraries
aws s3 sync ${BUCKET}/jars/ ${HADOOP}/
aws s3 sync ${BUCKET}/scripts/ ${HADOOP}/
# Install python packages
sudo pip install --upgrade pip
sudo ln -s /usr/local/bin/pip /usr/bin/pip
sudo pip install psycopg2 numpy boto3 pythonds
```
Then you can call your script and jar like this
```
{
'Name': 'START YOUR STEP',
'ActionOnFailure': 'TERMINATE_CLUSTER',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
"spark-submit", "--jars", ADDITIONAL_JARS,
"--py-files", "/home/hadoop/modules.zip",
"/home/hadoop/<your code>.py"
]
}
},
```
|
53,503,196
|
I want to separate the exact words of a text file (*text.txt*) ending in a certain string by using 'endswith'. The fact is that my variable
```
h=[w for w in 'text.txt' if w.endswith('os')]
```
does not give what I want when I call it. On the other hand, if I try naming the open file
```
f=open('text.txt')
h=[w for w in f if w.endswith('os')]
```
does not work either. Should I convert the text into a list first?
Comment: I hope this is not duplicate. It is not the same as this [former question](https://stackoverflow.com/questions/14676265/how-to-read-a-text-file-into-a-list-or-an-array-with-python) although close to it.
|
2018/11/27
|
[
"https://Stackoverflow.com/questions/53503196",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10502088/"
] |
Open the file first and then process it like this:
```
with open('text.txt', 'r') as file:
content = file.read()
h=[w for w in content.split() if w.endswith('os')]
```
|
```
with open('text.txt') as f:
words = [word for line in f for word in line.split() if word.endswith('os')]
```
Your first attempt does not read the file, or even open it. Instead, it loops over the characters of the string `'text.txt'` and checks each of them if it ends with `'os'`.
Your second attempt iterates over lines of the file, not words -- that's how a `for` loop works with a file handle.
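To make the first point concrete, iterating over a string yields its individual characters:
```
>>> [c for c in 'text.txt']
['t', 'e', 'x', 't', '.', 't', 'x', 't']
```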
|
53,503,196
|
I want to separate the exact words of a text file (*text.txt*) ending in a certain string by using 'endswith'. The fact is that my variable
```
h=[w for w in 'text.txt' if w.endswith('os')]
```
does not give what I want when I call it. On the other hand, if I try naming the open file
```
f=open('text.txt')
h=[w for w in f if w.endswith('os')]
```
does not work either. Should I convert the text into a list first?
Comment: I hope this is not duplicate. It is not the same as this [former question](https://stackoverflow.com/questions/14676265/how-to-read-a-text-file-into-a-list-or-an-array-with-python) although close to it.
|
2018/11/27
|
[
"https://Stackoverflow.com/questions/53503196",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10502088/"
] |
```
with open('text.txt') as f:
words = [word for line in f for word in line.split() if word.endswith('os')]
```
Your first attempt does not read the file, or even open it. Instead, it loops over the characters of the string `'text.txt'` and checks each of them if it ends with `'os'`.
Your second attempt iterates over lines of the file, not words -- that's how a `for` loop works with a file handle.
|
```
f=open('text.txt')
h=[w for w in f if w.endswith('os')]
```
This should work properly. Reasons it may not be working for you:
1. You should `strip` each line first. There may be hidden characters, like `"\n"`. You can use the `rstrip()` method for that. Something like this:
`h=[w.rstrip() for w in f if w.rstrip().endswith('os')]`
2. After reading the file once, the file pointer reaches the end of the file (EOF), so any further read operations will return nothing. To move the pointer back to the start of the file, either use the `seek` method or re-open the file.
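A small sketch of point 2, rewinding with `seek` so the file can be scanned a second time:
```
f = open('text.txt')
h = [w.rstrip() for w in f if w.rstrip().endswith('os')]
f.seek(0)  # rewind to the start; without this, a second scan would yield nothing
g = [w.rstrip() for w in f if w.rstrip().endswith('es')]
f.close()
```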
|
53,503,196
|
I want to separate the exact words of a text file (*text.txt*) ending in a certain string by using 'endswith'. The fact is that my variable
```
h=[w for w in 'text.txt' if w.endswith('os')]
```
does not give what I want when I call it. On the other hand, if I try naming the open file
```
f=open('text.txt')
h=[w for w in f if w.endswith('os')]
```
does not work either. Should I convert the text into a list first?
Comment: I hope this is not duplicate. It is not the same as this [former question](https://stackoverflow.com/questions/14676265/how-to-read-a-text-file-into-a-list-or-an-array-with-python) although close to it.
|
2018/11/27
|
[
"https://Stackoverflow.com/questions/53503196",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10502088/"
] |
Open the file first and then process it like this:
```
with open('text.txt', 'r') as file:
content = file.read()
h=[w for w in content.split() if w.endswith('os')]
```
|
Splitting the separate words into a list (assuming they are separated by spaces):
```
f = open('text.txt').read().split(' ')
```
Then to get a list of the words ending in "os", like you had:
```
h=[w for w in f if w.endswith('os')]
```
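Note that `split()` with no argument is often safer here, since it also splits on newlines and runs of whitespace:
```
f = open('text.txt').read().split()
h = [w for w in f if w.endswith('os')]
```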
|
53,503,196
|
I want to separate the exact words of a text file (*text.txt*) ending in a certain string by using 'endswith'. The fact is that my variable
```
h=[w for w in 'text.txt' if w.endswith('os')]
```
does not give what I want when I call it. On the other hand, if I try naming the open file
```
f=open('text.txt')
h=[w for w in f if w.endswith('os')]
```
does not work either. Should I convert the text into a list first?
Comment: I hope this is not duplicate. It is not the same as this [former question](https://stackoverflow.com/questions/14676265/how-to-read-a-text-file-into-a-list-or-an-array-with-python) although close to it.
|
2018/11/27
|
[
"https://Stackoverflow.com/questions/53503196",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10502088/"
] |
Open the file first and then process it like this:
```
with open('text.txt', 'r') as file:
content = file.read()
h=[w for w in content.split() if w.endswith('os')]
```
|
```
f=open('text.txt')
h=[w for w in f if w.endswith('os')]
```
This should work properly. Reasons it may not be working for you:
1. You should `strip` each line first. There may be hidden characters, like `"\n"`. You can use the `rstrip()` method for that. Something like this:
`h=[w.rstrip() for w in f if w.rstrip().endswith('os')]`
2. After reading the file once, the file pointer reaches the end of the file (EOF), so any further read operations will return nothing. To move the pointer back to the start of the file, either use the `seek` method or re-open the file.
|
53,503,196
|
I want to separate the exact words of a text file (*text.txt*) ending in a certain string by using 'endswith'. The fact is that my variable
```
h=[w for w in 'text.txt' if w.endswith('os')]
```
does not give what I want when I call it. On the other hand, if I try naming the open file
```
f=open('text.txt')
h=[w for w in f if w.endswith('os')]
```
does not work either. Should I convert the text into a list first?
Comment: I hope this is not duplicate. It is not the same as this [former question](https://stackoverflow.com/questions/14676265/how-to-read-a-text-file-into-a-list-or-an-array-with-python) although close to it.
|
2018/11/27
|
[
"https://Stackoverflow.com/questions/53503196",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10502088/"
] |
Splitting the separate words into a list (assuming they are separated by spaces):
```
f = open('text.txt').read().split(' ')
```
Then to get a list of the words ending in "os", like you had:
```
h=[w for w in f if w.endswith('os')]
```
|
```
f=open('text.txt')
h=[w for w in f if w.endswith('os')]
```
This should work properly. Reasons it may not be working for you:
1. You should `strip` each line first. There may be hidden characters, like `"\n"`. You can use the `rstrip()` method for that. Something like this:
`h=[w.rstrip() for w in f if w.rstrip().endswith('os')]`
2. After reading the file once, the file pointer reaches the end of the file (EOF), so any further read operations will return nothing. To move the pointer back to the start of the file, either use the `seek` method or re-open the file.
|
55,318,862
|
I want to try to make a little webserver where you can put location data into a database using a grid system in JavaScript. I've found a setup that I really liked and made an example in CodePen: <https://codepen.io/anon/pen/XGxEaj>. A user can click in a grid to point out the location. In the console log I see the data that is generated as output. For every cell in the grid it has a dictionary that looks like this:
```js
click: 4
height: 50
width: 50
x: 1
xnum: 1
y: 1
ynum: 1
```
A complete overview of the output data looks like this, where there are 10 arrays for every row, 10 arrays for every column of that row, and then a dictionary with the values:
[](https://i.stack.imgur.com/IkYP3.png)
Now I want to get this information back to Python as a dictionary as well, so I can store particular information in a database. But I can't figure out the best way to do that. Hopefully you could provide a little push in the right direction, links to useful pages or code snippets to guide me.
Thanks for any help in advance!
|
2019/03/23
|
[
"https://Stackoverflow.com/questions/55318862",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9606978/"
] |
This can be achieved by converting your JS object into JSON:
```
var jsonString = JSON.stringify(myJsObj);
```
And then parsing that JSON string into a Python object:
```
import json
myPythonObj = json.loads(json_string)  # json_string is the JSON text received from the browser
```
More information about the JSON package for Python can be found [here](https://www.w3schools.com/python/python_json.asp) and more information about converting JS objects to and from JSON can be found [here](https://www.w3schools.com/js/js_json_intro.asp)
|
You should use the Python [json](https://docs.python.org/3/library/json.html) module.
Here is how you can perform such an operation using the module.
```
import json
json_data = '{"a": 1, "b": 2, "c": 3, "d": 4, "e": 5}'
loaded_json = json.loads(json_data)
```
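As a hedged sketch of applying this to the grid data from the question (assuming each row is a list of cell dicts like `{"click": 4, "xnum": 1, "ynum": 1, ...}`, a nonzero `click` marks a selected cell, and `grid_json` is the JSON string posted from the browser):
```
import json

grid = json.loads(grid_json)  # list of rows, each a list of cell dicts
clicked = [cell for row in grid for cell in row if cell["click"] > 0]
for cell in clicked:
    print(cell["xnum"], cell["ynum"], cell["click"])
```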
|
66,602,656
|
I am following the instructions (<https://github.com/huggingface/transfer-learning-conv-ai>) to install conv-ai from Hugging Face, but I got stuck on the Docker build step: `docker build -t convai .`
I am using macOS 10.15 and Python 3.8, and have increased Docker memory to 4 GB.
I have tried the following ways to solve the issue:
1. add `numpy` in `requirements.txt`
2. add `RUN pip3 install --upgrade setuptools` in Dockerfile
3. add `--upgrade` to `RUN pip3 install -r /tmp/requirements.txt` in Dockerfile
4. add `RUN pip3 install numpy` before `RUN pip3 install -r /tmp/requirements.txt` in Dockerfile
5. add `RUN apt-get install python3-numpy` before `RUN pip3 install -r /tmp/requirements.txt` in Dockerfile
6. using python 3.6.13 because of this [post](https://github.com/huggingface/transfer-learning-conv-ai/issues/97), but it has exact same error.
7. I am currently debugging inside the container by entering it right before the `RUN pip3 install -r /tmp/requirements.txt` step.
Can anyone help me on this? Thank you!!
The error:
```
=> [6/9] COPY . ./ 0.0s
=> [7/9] COPY requirements.txt /tmp/requirements.txt 0.0s
=> ERROR [8/9] RUN pip3 install -r /tmp/requirements.txt 98.2s
------
> [8/9] RUN pip3 install -r /tmp/requirements.txt:
#12 1.111 Collecting torch (from -r /tmp/requirements.txt (line 1))
#12 1.754 Downloading https://files.pythonhosted.org/packages/46/99/8b658e5095b9fb02e38ccb7ecc931eb1a03b5160d77148aecf68f8a7eeda/torch-1.8.0-cp36-cp36m-manylinux1_x86_64.whl (735.5MB)
#12 81.11 Collecting pytorch-ignite (from -r /tmp/requirements.txt (line 2))
#12 81.76 Downloading https://files.pythonhosted.org/packages/f8/d3/640f70d69393b415e6a29b27c735047ad86267921ad62682d1d756556d48/pytorch_ignite-0.4.4-py3-none-any.whl (200kB)
#12 81.82 Collecting transformers==2.5.1 (from -r /tmp/requirements.txt (line 3))
#12 82.17 Downloading https://files.pythonhosted.org/packages/13/33/ffb67897a6985a7b7d8e5e7878c3628678f553634bd3836404fef06ef19b/transformers-2.5.1-py3-none-any.whl (499kB)
#12 82.29 Collecting tensorboardX==1.8 (from -r /tmp/requirements.txt (line 4))
#12 82.50 Downloading https://files.pythonhosted.org/packages/c3/12/dcaf67e1312475b26db9e45e7bb6f32b540671a9ee120b3a72d9e09bc517/tensorboardX-1.8-py2.py3-none-any.whl (216kB)
#12 82.57 Collecting tensorflow (from -r /tmp/requirements.txt (line 5))
#12 83.12 Downloading https://files.pythonhosted.org/packages/de/f0/96fb2e0412ae9692dbf400e5b04432885f677ad6241c088ccc5fe7724d69/tensorflow-1.14.0-cp36-cp36m-manylinux1_x86_64.whl (109.2MB)
#12 95.24 Collecting spacy (from -r /tmp/requirements.txt (line 6))
#12 95.81 Downloading https://files.pythonhosted.org/packages/65/01/fd65769520d4b146d92920170fd00e01e826cda39a366bde82a87ca249db/spacy-3.0.5.tar.gz (7.0MB)
#12 97.41 Complete output from command python setup.py egg_info:
#12 97.41 Traceback (most recent call last):
#12 97.41 File "<string>", line 1, in <module>
#12 97.41 File "/tmp/pip-build-cc3a804w/spacy/setup.py", line 5, in <module>
#12 97.41 import numpy
#12 97.41 ModuleNotFoundError: No module named 'numpy'
#12 97.41
#12 97.41 ----------------------------------------
#12 98.11 Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-cc3a804w/spacy/
```
@Håken Lid The error I got if I `RUN pip3 install numpy` right before `RUN pip3 install -r tmp/requirements`:
```
=> [ 8/10] RUN pip3 install numpy 10.1s
=> ERROR [ 9/10] RUN pip3 install -r /tmp/requirements.txt 112.4s
------
> [ 9/10] RUN pip3 install -r /tmp/requirements.txt:
#13 1.067 Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from -r /tmp/requirements.txt (line 1))
#13 1.074 Collecting torch (from -r /tmp/requirements.txt (line 2))
#13 1.656 Downloading https://files.pythonhosted.org/packages/46/99/8b658e5095b9fb02e38ccb7ecc931eb1a03b5160d77148aecf68f8a7eeda/torch-1.8.0-cp36-cp36m-manylinux1_x86_64.whl (735.5MB)
#13 96.46 Collecting pytorch-ignite (from -r /tmp/requirements.txt (line 3))
#13 97.02 Downloading https://files.pythonhosted.org/packages/f8/d3/640f70d69393b415e6a29b27c735047ad86267921ad62682d1d756556d48/pytorch_ignite-0.4.4-py3-none-any.whl (200kB)
#13 97.07 Collecting transformers==2.5.1 (from -r /tmp/requirements.txt (line 4))
#13 97.32 Downloading https://files.pythonhosted.org/packages/13/33/ffb67897a6985a7b7d8e5e7878c3628678f553634bd3836404fef06ef19b/transformers-2.5.1-py3-none-any.whl (499kB)
#13 97.43 Collecting tensorboardX==1.8 (from -r /tmp/requirements.txt (line 5))
#13 97.70 Downloading https://files.pythonhosted.org/packages/c3/12/dcaf67e1312475b26db9e45e7bb6f32b540671a9ee120b3a72d9e09bc517/tensorboardX-1.8-py2.py3-none-any.whl (216kB)
#13 97.76 Collecting tensorflow (from -r /tmp/requirements.txt (line 6))
#13 98.27 Downloading https://files.pythonhosted.org/packages/de/f0/96fb2e0412ae9692dbf400e5b04432885f677ad6241c088ccc5fe7724d69/tensorflow-1.14.0-cp36-cp36m-manylinux1_x86_64.whl (109.2MB)
#13 109.6 Collecting spacy (from -r /tmp/requirements.txt (line 7))
#13 110.0 Downloading https://files.pythonhosted.org/packages/65/01/fd65769520d4b146d92920170fd00e01e826cda39a366bde82a87ca249db/spacy-3.0.5.tar.gz (7.0MB)
#13 111.6 Complete output from command python setup.py egg_info:
#13 111.6 Traceback (most recent call last):
#13 111.6 File "<string>", line 1, in <module>
#13 111.6 File "/tmp/pip-build-t6n57csv/spacy/setup.py", line 10, in <module>
#13 111.6 from Cython.Build import cythonize
#13 111.6 ModuleNotFoundError: No module named 'Cython'
#13 111.6
#13 111.6 ----------------------------------------
#13 112.3 Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-t6n57csv/spacy/
------
executor failed running [/bin/sh -c pip3 install -r /tmp/requirements.txt]: exit code: 1
```
requirements.txt:
```
torch
pytorch-ignite
transformers==2.5.1
tensorboardX==1.8
tensorflow # for tensorboardX
spacy
```
Dockerfile:
```
FROM ubuntu:18.04
MAINTAINER Loreto Parisi loretoparisi@gmail.com
######################################## BASE SYSTEM
# set noninteractive installation
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y apt-utils
RUN apt-get install -y --no-install-recommends \
build-essential \
pkg-config \
tzdata \
curl
######################################## PYTHON3
RUN apt-get install -y \
python3 \
python3-pip
# set local timezone
RUN ln -fs /usr/share/zoneinfo/America/New_York /etc/localtime && \
dpkg-reconfigure --frontend noninteractive tzdata
# transfer-learning-conv-ai
ENV PYTHONPATH /usr/local/lib/python3.6
COPY . ./
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt
# model zoo
RUN mkdir models && \
curl https://s3.amazonaws.com/models.huggingface.co/transfer-learning-chatbot/finetuned_chatbot_gpt.tar.gz > models/finetuned_chatbot_gpt.tar.gz && \
cd models/ && \
tar -xvzf finetuned_chatbot_gpt.tar.gz && \
rm finetuned_chatbot_gpt.tar.gz
CMD ["bash"]
```
Steps I ran so far:
```
git clone https://github.com/huggingface/transfer-learning-conv-ai
cd transfer-learning-conv-ai
pip install -r requirements.txt
python -m spacy download en
docker build -t convai .
```
|
2021/03/12
|
[
"https://Stackoverflow.com/questions/66602656",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11210924/"
] |
It seems that pip does not install the pre-built wheel, but instead tries to build *spacy* from source. This is a fragile process and requires extra dependencies.
To avoid this, you should ensure that the Python packages *pip*, *wheel* and *setuptools* are up to date before proceeding with the installation.
```
# replace RUN pip3 install -r /tmp/requirements.txt
RUN python3 -m pip install --upgrade pip setuptools wheel
RUN python3 -m pip install -r /tmp/requirements.txt
```
|
Did you try adding numpy to the requirements.txt? It looks to me like it is missing.
|
47,186,849
|
Given an iterable consisting of a finite set of elements:
>
> (a, b, c, d)
>
>
>
as an example
What would be a Pythonic way to generate the following (pseudo) ordered pair from the above iterable:
```
ab
ac
ad
bc
bd
cd
```
A trivial way would be to use for loops, but I'm wondering if there is a Pythonic way of generating this list from the iterable above?
|
2017/11/08
|
[
"https://Stackoverflow.com/questions/47186849",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/962891/"
] |
Try using [combinations](https://docs.python.org/2/library/itertools.html#itertools.combinations).
```
import itertools
combinations = itertools.combinations('abcd', n)  # n = size of each combination
```
will give you an iterator; you can loop over it or convert it into a list with `list(combinations)`.
In order to only include pairs as in your example, you can pass 2 as the argument:
```
combinations = itertools.combinations('abcd', 2)
>>> print list(combinations)
[('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```
|
You can accomplish this with a list comprehension.
```
data = [1, 2, 3, 4]
# pair each element with every element that comes after it (no self-pairs)
output = [(data[n], m) for n in range(len(data)) for m in data[n + 1:]]
print(repr(output))  # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```
Pulling the comprehension apart, it makes a tuple of the nth member of the iterable and each member of a list composed of all later members. `data[n + 1:]` tells Python to take all members after the nth, which also avoids pairing an element with itself (and using `m` avoids shadowing the built-in `next`).
[Here it is in action.](https://repl.it/NyYe)
|
47,186,849
|
Given an iterable consisting of a finite set of elements:
>
> (a, b, c, d)
>
>
>
as an example
What would be a Pythonic way to generate the following (pseudo) ordered pair from the above iterable:
```
ab
ac
ad
bc
bd
cd
```
A trivial way would be to use for loops, but I'm wondering if there is a Pythonic way of generating this list from the iterable above?
|
2017/11/08
|
[
"https://Stackoverflow.com/questions/47186849",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/962891/"
] |
Try using [combinations](https://docs.python.org/2/library/itertools.html#itertools.combinations).
```
import itertools
combinations = itertools.combinations('abcd', n)  # n = size of each combination
```
will give you an iterator; you can loop over it or convert it into a list with `list(combinations)`.
In order to only include pairs as in your example, you can pass 2 as the argument:
```
combinations = itertools.combinations('abcd', 2)
>>> print list(combinations)
[('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```
|
Use list comprehensions - they seem to be considered pythonic.
```
stuff = ('a','b','c','d')
```
Obtain the n-digit binary numbers, in string format, where only two of the digits are *one* and n is the length of the items.
```
n = len(stuff)
s = '{{:0{}b}}'.format(n)
z = [s.format(x) for x in range(2**n) if s.format(x).count('1') == 2]
```
Find the indices of the *ones* for each combination.
```
y = [[i for i, c in enumerate(combo) if c == '1'] for combo in z]
```
Use the indices to select the items, then sort.
```
import operator

x = [''.join(operator.itemgetter(*indices)(stuff)) for indices in y]
x.sort()
```
|
54,551,016
|
I am trying to use cloud SQL / mysql instance for my APPEngine account. The app is an python django-2.1.5 appengine app. I have created an instance of MYSQL in the google cloud.
I have added the following in my app.yaml file copied from the SQL instance details:
```
beta_settings:
cloud_sql_instances: <INSTANCE_CONNECTION_NAME>=tcp:<TCP_PORT>
```
* I have granted my App Engine project xxx-app's owner xxx-`app@appspot.gserviceaccount.com` the `Cloud SQL Client` role. I have created a DB user account specific to the app `XYZ` which can connect from all hosts (the \* option)
* My connection details in settings.py are the following:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'my-db',
'USER': 'appengine',
'PASSWORD': 'xxx',
'HOST': '111.111.11.11', # used actual ip
'PORT': '3306'
}
}
```
* I have also tried as per <https://github.com/GoogleCloudPlatform/appengine-django-skeleton/blob/master/mysite/settings.py>:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'HOST': '/cloudsql/<your-project-id>:<your-cloud-sql-instance>',
'NAME': '<your-database-name>',
'USER': 'root',
}
}
```
* I cannot connect from my local machine either. However, if I do an `Add Network` with my local IP and then try to connect, the local connection works. The app runs fine locally after adding my local IP address as a network using CIDR notation.
My problems:
* I am unable to connect to the Cloud sql without adding the AppEngine assigned IP Address. It gives me an error :
```
OperationalError: (2003, "Can't connect to MySQL server on '0.0.0.0' ([Errno 111] Connection refused)")
```
* Where can I find App Engine's assigned IP address? I don't mind even if it is temporary. I understand that if I need a static IP address I will have to create a Compute Engine VM instance.
|
2019/02/06
|
[
"https://Stackoverflow.com/questions/54551016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3204942/"
] |
App Engine doesn't have any guarantees regarding IP address of a particular instance, and may change at any time. Since it is a Serverless platform, it abstracts away the infrastructure to allow you to focus on your app.
There are two options when using App Engine Flex: Unix domain socket and TCP port. Which one App Engine provides for you depends on how you specify it in your app.yaml:
* `cloud_sql_instances: <INSTANCE_CONNECTION_NAME>` provides a Unix socket at
`/cloudsql/<INSTANCE_CONNECTION_NAME>`
* `cloud_sql_instances: <INSTANCE_CONNECTION_NAME>=tcp:<TCP_PORT>` provides a local tcp port (`127.0.0.1:<TCP_PORT>`).
You can find more information about this on the [Connecting from App Engine](https://cloud-dot-devsite.googleplex.com/sql/docs/postgres/connect-app-engine#connecting_the_flexible_environment_to) page.
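For the Django case in the question, here is a hedged sketch of `settings.py` using the Unix-socket variant (`<PROJECT>:<REGION>:<INSTANCE>` is a placeholder for your instance connection name; the database name and credentials are the ones from the question):
```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        # App Engine exposes the socket at /cloudsql/<INSTANCE_CONNECTION_NAME>
        'HOST': '/cloudsql/<PROJECT>:<REGION>:<INSTANCE>',
        'NAME': 'my-db',
        'USER': 'appengine',
        'PASSWORD': 'xxx',
    }
}
```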
|
I've encountered this issue before and after hours of scratching my head, all I needed to do was enable "Cloud SQL Admin API" and the deployed site connected to the database. This also sets permissions on your GAE service account for cloud proxy to connect to your GAE service.
|
60,765,225
|
right now i use
```
foo()
import pdb; pdb.set_trace()
bar()
```
for debugging purposes. It's a tad cumbersome; is there a better way to implement a breakpoint in Python?
Edit: I understand there are IDEs that are useful, but I want to do this programmatically. Furthermore, all the documentation I can find is for `pdb.set_trace()` instead of `breakpoint()`; I would appreciate an explanation of how to use it.
|
2020/03/19
|
[
"https://Stackoverflow.com/questions/60765225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11898750/"
] |
Personally I prefer to use my editor for debugging. I can visually set break points and use keyboard shortcuts to step through my code and evaluate variables and expressions. Visual Studio Code, Eclipse and PyCharm are all popular choices.
If you insist on using the command line, you can use the interactive debugger. See the [pdb documentation](https://docs.python.org/3/library/pdb.html) for details.
|
Python 3.7 added a [breakpoint() function](https://docs.python.org/3/library/functions.html#breakpoint) that, by default, performs the same job without the explicit import (to keep it lower overhead). It's [quite configurable](https://docs.python.org/3/whatsnew/3.7.html#whatsnew37-pep553) though, so you can have it do other things.
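A minimal sketch of how it is used (assumes CPython 3.7+):
```
def bar():
    total = 0
    for i in range(3):
        breakpoint()  # drops into pdb here by default, no import needed
        total += i
    return total

# it is steered by the PYTHONBREAKPOINT environment variable, e.g.:
#   PYTHONBREAKPOINT=0 python app.py               -> every breakpoint() becomes a no-op
#   PYTHONBREAKPOINT=ipdb.set_trace python app.py  -> use ipdb instead of pdb
```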
|
50,885,291
|
Trying to upgrade pip in my terminal. Here's my code:
```
(env) macbook-pro83:zappatest zorgan$ pip install --upgrade pip
Could not fetch URL https://pypi.python.org/simple/pip/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:645) - skipping
Requirement already up-to-date: pip in ./env/lib/python3.5/site-packages
You are using pip version 8.1.2, however version 10.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
```
Any idea what the problem is? I also get the same error `There was a problem confirming the ssl certificate` when I perform `pip install django`.
**Edit** `pip install --upgrade pip -vvv returns`:
```
1 location(s) to search for versions of pip:
* https://pypi.python.org/simple/pip/
Getting page https://pypi.python.org/simple/pip/
Looking up "https://pypi.python.org/simple/pip/" in the cache
Returning cached "301 Moved Permanently" response (ignoring date and etag information)
Looking up "https://pypi.org/simple/pip/" in the cache
Current age based on date: 23811
Freshness lifetime from max-age: 600
Freshness lifetime from request max-age: 600
Starting new HTTPS connection (1): pypi.org
Could not fetch URL https://pypi.python.org/simple/pip/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:645) - skipping
Installed version (8.1.2) is most up-to-date (past versions: none)
Requirement already up-to-date: pip in ./env/lib/python3.5/site-packages
Cleaning up...
You are using pip version 8.1.2, however version 10.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
```
|
2018/06/16
|
[
"https://Stackoverflow.com/questions/50885291",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6733153/"
] |
a. Check your system date and time and see whether it is correct.
b. If the first solution doesn't fix it, try using a lower-security method:
```
pip install --index-url=http://pypi.python.org/simple/ linkchecker
```
This bypasses HTTPS and uses HTTP instead; try this workaround only if you are in a hurry.
c. Try downgrading pip to a version that does not use SSL verification by:
```
pip install pip==1.2.1
```
and then upgrade pip back again to the newer version by:
```
pip install --upgrade pip
```
d. If nothing of the above works, uninstall pip and reinstall it.
|
You can get more details about why the upgrade failed by using
`$ pip install --upgrade pip -vvv`
There will be more details to help you debug it.
---
Try
`pip --trusted-host pypi.python.org install --upgrade pip`
Maybe useful.
---
Another solution:
`$ pip install certifi`
then run install.
|
50,885,291
|
Trying to upgrade pip in my terminal. Here's my code:
```
(env) macbook-pro83:zappatest zorgan$ pip install --upgrade pip
Could not fetch URL https://pypi.python.org/simple/pip/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:645) - skipping
Requirement already up-to-date: pip in ./env/lib/python3.5/site-packages
You are using pip version 8.1.2, however version 10.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
```
Any idea what the problem is? I also get the same error `There was a problem confirming the ssl certificate` when I perform `pip install django`.
**Edit** `pip install --upgrade pip -vvv returns`:
```
1 location(s) to search for versions of pip:
* https://pypi.python.org/simple/pip/
Getting page https://pypi.python.org/simple/pip/
Looking up "https://pypi.python.org/simple/pip/" in the cache
Returning cached "301 Moved Permanently" response (ignoring date and etag information)
Looking up "https://pypi.org/simple/pip/" in the cache
Current age based on date: 23811
Freshness lifetime from max-age: 600
Freshness lifetime from request max-age: 600
Starting new HTTPS connection (1): pypi.org
Could not fetch URL https://pypi.python.org/simple/pip/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:645) - skipping
Installed version (8.1.2) is most up-to-date (past versions: none)
Requirement already up-to-date: pip in ./env/lib/python3.5/site-packages
Cleaning up...
You are using pip version 8.1.2, however version 10.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
```
|
2018/06/16
|
[
"https://Stackoverflow.com/questions/50885291",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6733153/"
] |
Managed to fix it (uninstall and reinstall pip) via the following command:
`curl https://bootstrap.pypa.io/get-pip.py | python`
|
You can get more details about why the upgrade failed by using
`$ pip install --upgrade pip -vvv`
There will be more details to help you debug it.
---
Try
`pip --trusted-host pypi.python.org install --upgrade pip`
Maybe useful.
---
Another solution:
`$ pip install certifi`
then run install.
|
50,885,291
|
Trying to upgrade pip in my terminal. Here's my code:
```
(env) macbook-pro83:zappatest zorgan$ pip install --upgrade pip
Could not fetch URL https://pypi.python.org/simple/pip/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:645) - skipping
Requirement already up-to-date: pip in ./env/lib/python3.5/site-packages
You are using pip version 8.1.2, however version 10.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
```
Any idea what the problem is? I also get the same error `There was a problem confirming the ssl certificate` when I perform `pip install django`.
**Edit** `pip install --upgrade pip -vvv returns`:
```
1 location(s) to search for versions of pip:
* https://pypi.python.org/simple/pip/
Getting page https://pypi.python.org/simple/pip/
Looking up "https://pypi.python.org/simple/pip/" in the cache
Returning cached "301 Moved Permanently" response (ignoring date and etag information)
Looking up "https://pypi.org/simple/pip/" in the cache
Current age based on date: 23811
Freshness lifetime from max-age: 600
Freshness lifetime from request max-age: 600
Starting new HTTPS connection (1): pypi.org
Could not fetch URL https://pypi.python.org/simple/pip/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:645) - skipping
Installed version (8.1.2) is most up-to-date (past versions: none)
Requirement already up-to-date: pip in ./env/lib/python3.5/site-packages
Cleaning up...
You are using pip version 8.1.2, however version 10.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
```
|
2018/06/16
|
[
"https://Stackoverflow.com/questions/50885291",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6733153/"
] |
Managed to fix it (uninstall and reinstall pip) via the following command:
`curl https://bootstrap.pypa.io/get-pip.py | python`
|
a. Check your system date and time and see whether it is correct.
b. If the first solution doesn't fix it, try using a lower-security method:
```
pip install --index-url=http://pypi.python.org/simple/ linkchecker
```
This bypasses HTTPS and uses HTTP instead; try this workaround only if you are in a hurry.
c. Try downgrading pip to a version that does not use SSL verification by:
```
pip install pip==1.2.1
```
and then upgrade pip back again to the newer version by:
```
pip install --upgrade pip
```
d. If nothing of the above works, uninstall pip and reinstall it.
|
1,195,494
|
Was wondering how you'd do the following in Windows:
From a c shell script (extension csh), I'm running a Python script within an 'eval' method so that the output from the script affects the shell environment. Looks like this:
```
eval `python -c "import sys; run_my_code_here(); "`
```
Was wondering how I would do something like the eval statement in Windows using Windows' built-in CMD shell. I want to run a Python script within a Windows script and have the script run what the Python script prints out.
\*\* update: specified interest in running from CMD shell.
|
2009/07/28
|
[
"https://Stackoverflow.com/questions/1195494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/101126/"
] |
If it's in `cmd.exe`, using a temporary file is the only option [that I know of]:
```
python -c "print(\"Hi\")" > temp.cmd
call temp.cmd
del temp.cmd
```
|
(Making some guesses where details are missing from your question)
In CMD, when a batch script modifies the environment, the default behavior is that it modifies the environment of the CMD process that is executing it.
Now, if you have a batch script that calls another batch script, there are 3 ways to do it.
1. execute the batch file directly:
```
REM call q.bat
q.bat
REM this line never runs
```
Usually you don't want this, because it won't return to the calling batch script. This is more like `goto` than `gosub`. The CMD process just switches from one script to another.
2. execute with `call`:
```
REM call q.bat
CALL q.bat
REM changes that q.bat affects will appear here.
```
This is the most common way for one batch file to call another. When `q.bat` exits, control will return to the caller. Since this is the same CMD process, changes to the environment will still be there.
Note: If `q.bat` uses the `EXIT` statement, it can cause the CMD process to terminate, without returning control to the calling script.
Note 2: If `q.bat` uses `EXIT /B`, then the CMD process will not exit. This is useful for setting `ERRORLEVEL`.
3. Execute in a new CMD process:
```
REM call q.bat
CMD /C q.bat
REM environment changes in q.bat don't affect me
```
Since q.bat runs in a new CMD process, it affects the environment of that process, and not the CMD process that the caller is running in.
Note: If `q.bat` uses `EXIT`, it won't terminate the process of the caller.
---
The `SETLOCAL` CMD command will create a new environment for the current script. Changes in that environment won't affect the caller. In general, `SETLOCAL` is a good practice, to avoid leaking environment changes by accident.
To use `SETLOCAL` and still push environment changes to the calling script, end the script with:
```
ENDLOCAL && SET X=%X% && SET Y=%Y%
```
This will push the values of X and Y to the parent environment.
---
If on the other hand you want to run another process (not a CMD script) and have it affect the current script's environment, than have the tool generate a batch file that makes the changes you want, then execute that batch file.
```
REM q.exe will write %TEMP%\runme.cmd, which looks like:
REM set X=Y
q.exe
call "%TEMP%\runme.cmd"
```
|
36,510,859
|
I'm trying to use the CVXOPT qp solver to compute the Lagrange Multipliers for a Support Vector Machine
```
def svm(X, Y, c):
m = len(X)
P = matrix(np.dot(Y, Y.T) * np.dot(X, X.T))
q = matrix(np.ones(m) * -1)
g1 = np.asarray(np.diag(np.ones(m) * -1))
g2 = np.asarray(np.diag(np.ones(m)))
G = matrix(np.append(g1, g2, axis=0))
h = matrix(np.append(np.zeros(m), (np.ones(m) * c), axis =0))
A = np.reshape((Y.T), (1,m))
b = matrix([0])
print (A).shape
A = matrix(A)
sol = solvers.qp(P, q, G, h, A, b)
print sol
```
Here `X` is a `1000 X 2` matrix and `Y` has the same number of labels. The solver throws the following error:
`$ python svm.py
(1, 1000)
Traceback (most recent call last):
File "svm.py", line 35, in <module>
svm(X, Y, 50)
File "svm.py", line 29, in svm
sol = solvers.qp(P, q, G, h, A, b)
File "/usr/local/lib/python2.7/site-packages/cvxopt/coneprog.py", line 4468, in qp
return coneqp(P, q, G, h, None, A, b, initvals, options = options)
File "/usr/local/lib/python2.7/site-packages/cvxopt/coneprog.py", line 1914, in coneqp
%q.size[0])
TypeError: 'A' must be a 'd' matrix with 1000 columns`
I printed the shape of A and it's a `(1,1000)` matrix after reshaping from a vector. What exactly is causing this error?
|
2016/04/08
|
[
"https://Stackoverflow.com/questions/36510859",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/681410/"
] |
Your matrix elements have to be of the floating-point type as well. So the error is removed by using `A = A.astype('float')` to cast it.
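Applied to the code in the question, a hedged sketch (note that `b = matrix([0])` builds an integer matrix too, so it is cast here as well):
```
A = np.reshape(Y.T, (1, m)).astype('float')  # cast to float64 before wrapping
A = matrix(A)                                # now a 'd' (double) cvxopt matrix
b = matrix([0.0])                            # a plain 0 would make b an 'i' matrix
```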
|
The error `"TypeError: 'A' must be a 'd' matrix with 1000 columns"` is raised under two conditions, namely:
1. if the type code is not equal to `'d'`
2. if `A.size[1] != q.size[0]`.
Check for both conditions.
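A quick way to check both conditions on your own matrices (cvxopt matrices expose `typecode` and `size`):
```
print(A.typecode)      # must be 'd' (doubles), not 'i'
print(A.size, q.size)  # A.size[1] must equal q.size[0] (here: 1000)
```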
|
36,510,859
|
I'm trying to use the CVXOPT qp solver to compute the Lagrange Multipliers for a Support Vector Machine
```
def svm(X, Y, c):
m = len(X)
P = matrix(np.dot(Y, Y.T) * np.dot(X, X.T))
q = matrix(np.ones(m) * -1)
g1 = np.asarray(np.diag(np.ones(m) * -1))
g2 = np.asarray(np.diag(np.ones(m)))
G = matrix(np.append(g1, g2, axis=0))
h = matrix(np.append(np.zeros(m), (np.ones(m) * c), axis =0))
A = np.reshape((Y.T), (1,m))
b = matrix([0])
print (A).shape
A = matrix(A)
sol = solvers.qp(P, q, G, h, A, b)
print sol
```
Here `X` is a `1000 X 2` matrix and `Y` has the same number of labels. The solver throws the following error:
`$ python svm.py
(1, 1000)
Traceback (most recent call last):
File "svm.py", line 35, in <module>
svm(X, Y, 50)
File "svm.py", line 29, in svm
sol = solvers.qp(P, q, G, h, A, b)
File "/usr/local/lib/python2.7/site-packages/cvxopt/coneprog.py", line 4468, in qp
return coneqp(P, q, G, h, None, A, b, initvals, options = options)
File "/usr/local/lib/python2.7/site-packages/cvxopt/coneprog.py", line 1914, in coneqp
%q.size[0])
TypeError: 'A' must be a 'd' matrix with 1000 columns`
I printed the shape of A and it's a `(1,1000)` matrix after reshaping from a vector. What exactly is causing this error?
|
2016/04/08
|
[
"https://Stackoverflow.com/questions/36510859",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/681410/"
] |
Your matrix elements have to be of the floating-point type as well. So the error is removed by using `A = A.astype('float')` to cast it.
|
I tried `A = A.astype(double)` to solve it, but it is invalid, since Python doesn't know what `double` is, or `A` has no `astype` method.
Therefore, using
`A = matrix(A, (1, m), 'd')`
actually solves this problem!
|
36,510,859
|
I'm trying to use the CVXOPT qp solver to compute the Lagrange Multipliers for a Support Vector Machine
```
def svm(X, Y, c):
m = len(X)
P = matrix(np.dot(Y, Y.T) * np.dot(X, X.T))
q = matrix(np.ones(m) * -1)
g1 = np.asarray(np.diag(np.ones(m) * -1))
g2 = np.asarray(np.diag(np.ones(m)))
G = matrix(np.append(g1, g2, axis=0))
h = matrix(np.append(np.zeros(m), (np.ones(m) * c), axis =0))
A = np.reshape((Y.T), (1,m))
b = matrix([0])
print (A).shape
A = matrix(A)
sol = solvers.qp(P, q, G, h, A, b)
print sol
```
Here `X` is a `1000 X 2` matrix and `Y` has the same number of labels. The solver throws the following error:
`$ python svm.py
(1, 1000)
Traceback (most recent call last):
File "svm.py", line 35, in <module>
svm(X, Y, 50)
File "svm.py", line 29, in svm
sol = solvers.qp(P, q, G, h, A, b)
File "/usr/local/lib/python2.7/site-packages/cvxopt/coneprog.py", line 4468, in qp
return coneqp(P, q, G, h, None, A, b, initvals, options = options)
File "/usr/local/lib/python2.7/site-packages/cvxopt/coneprog.py", line 1914, in coneqp
%q.size[0])
TypeError: 'A' must be a 'd' matrix with 1000 columns`
I printed the shape of A and it's a `(1,1000)` matrix after reshaping from a vector. What exactly is causing this error?
|
2016/04/08
|
[
"https://Stackoverflow.com/questions/36510859",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/681410/"
] |
Your matrix elements have to be of the floating-point type as well. So the error is removed by using `A = A.astype('float')` to cast it.
|
To convert CVXOPT matrix items to floats:
```
A = A * 1.0
```
|
36,510,859
|
I'm trying to use the CVXOPT qp solver to compute the Lagrange Multipliers for a Support Vector Machine
```
def svm(X, Y, c):
m = len(X)
P = matrix(np.dot(Y, Y.T) * np.dot(X, X.T))
q = matrix(np.ones(m) * -1)
g1 = np.asarray(np.diag(np.ones(m) * -1))
g2 = np.asarray(np.diag(np.ones(m)))
G = matrix(np.append(g1, g2, axis=0))
h = matrix(np.append(np.zeros(m), (np.ones(m) * c), axis =0))
A = np.reshape((Y.T), (1,m))
b = matrix([0])
print (A).shape
A = matrix(A)
sol = solvers.qp(P, q, G, h, A, b)
print sol
```
Here `X` is a `1000 X 2` matrix and `Y` has the same number of labels. The solver throws the following error:
```
$ python svm.py
(1, 1000)
Traceback (most recent call last):
  File "svm.py", line 35, in <module>
    svm(X, Y, 50)
  File "svm.py", line 29, in svm
    sol = solvers.qp(P, q, G, h, A, b)
  File "/usr/local/lib/python2.7/site-packages/cvxopt/coneprog.py", line 4468, in qp
    return coneqp(P, q, G, h, None, A, b, initvals, options = options)
  File "/usr/local/lib/python2.7/site-packages/cvxopt/coneprog.py", line 1914, in coneqp
    %q.size[0])
TypeError: 'A' must be a 'd' matrix with 1000 columns
```
I printed the shape of A and it's a `(1,1000)` matrix after reshaping from a vector. What exactly is causing this error?
|
2016/04/08
|
[
"https://Stackoverflow.com/questions/36510859",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/681410/"
] |
The error - `"TypeError: 'A' must be a 'd' matrix with 1000 columns"` - is raised under two conditions, namely:
1. the type code of `A` is not equal to '`d`'
2. `A.size[1] != q.size[0]`, i.e. the number of columns of `A` does not match the length of `q`.
Check for both conditions.
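A minimal sketch of checking both conditions before calling the solver (names follow the question's code):
```
import numpy as np
from cvxopt import matrix

m = 1000
A = matrix(np.ones((1, m)))    # float input yields typecode 'd'
q = matrix(np.ones(m) * -1)

print(A.typecode)              # must be 'd'
print(A.size[1] == q.size[0])  # column count of A must match the length of q
```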
|
To convert CVXOPT matrix items to floats:
```
A = A * 1.0
```
|
36,510,859
|
I'm trying to use the CVXOPT qp solver to compute the Lagrange Multipliers for a Support Vector Machine
```
def svm(X, Y, c):
m = len(X)
P = matrix(np.dot(Y, Y.T) * np.dot(X, X.T))
q = matrix(np.ones(m) * -1)
g1 = np.asarray(np.diag(np.ones(m) * -1))
g2 = np.asarray(np.diag(np.ones(m)))
G = matrix(np.append(g1, g2, axis=0))
h = matrix(np.append(np.zeros(m), (np.ones(m) * c), axis =0))
A = np.reshape((Y.T), (1,m))
b = matrix([0])
print (A).shape
A = matrix(A)
sol = solvers.qp(P, q, G, h, A, b)
print sol
```
Here `X` is a `1000 X 2` matrix and `Y` has the same number of labels. The solver throws the following error:
```
$ python svm.py
(1, 1000)
Traceback (most recent call last):
  File "svm.py", line 35, in <module>
    svm(X, Y, 50)
  File "svm.py", line 29, in svm
    sol = solvers.qp(P, q, G, h, A, b)
  File "/usr/local/lib/python2.7/site-packages/cvxopt/coneprog.py", line 4468, in qp
    return coneqp(P, q, G, h, None, A, b, initvals, options = options)
  File "/usr/local/lib/python2.7/site-packages/cvxopt/coneprog.py", line 1914, in coneqp
    %q.size[0])
TypeError: 'A' must be a 'd' matrix with 1000 columns
```
I printed the shape of A and it's a `(1,1000)` matrix after reshaping from a vector. What exactly is causing this error?
|
2016/04/08
|
[
"https://Stackoverflow.com/questions/36510859",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/681410/"
] |
I tried `A = A.astype(double)` to solve it, but it is invalid: plain Python doesn't know what `double` is, and a CVXOPT matrix has no `astype` method.
Therefore
---------
using
`A = matrix(A, (1, m), 'd')`
actually solves this problem, since the `'d'` typecode forces a double-precision matrix!
|
To convert CVXOPT matrix items to floats:
```
A = A * 1.0
```
|
51,940,568
|
I'm trying to learn Python with [CS Dojo's tutorial](https://www.youtube.com/watch?v=NSbOtYzIQI0). I'm currently working on the task he gave us, which was building a "miles to kilometers converter". The task itself was very simple, but I want to build a program that does a little more than that. I want the output to be like
"The distance" + "name (ex. from NewYork to Washington)" + " is" + (Value) + " in kilometers."
but it is giving me the same error message over and over.
My code is
```
# miles variables
name1 = "from_NewYork_to_Washington"
miles1 = 20
name2 = "from_Boston_to_Dallas"
miles2 = 30
name3 = "from_Tokyo_to_Seoul"
miles3 = 40
#define function
def conver(name, distance):
d = distance * 1.6
print("result: ")
print(d)
return "The distance" + name + "is" + d + "in kilometers."
# result variables
result1 = conver(name1, miles1)
result2 = conver(name2, miles2)
result3 = conver(name3, miles3)
```
The error message is the following:
```
result:
32.0
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-135-4d54c2c21df2> in <module>()
17
18 # result variables
---> 19 result1 = conver(name1, miles1)
20 result2 = conver(name2, miles2)
21 result3 = conver(name3, miles3)
<ipython-input-135-4d54c2c21df2> in conver(name, distance)
14 print("result: ")
15 print(d)
---> 16 return "The distance" + name + "is" + d + "in kilometers."
17
18 # result variables
TypeError: must be str, not float
```
Thanks.
|
2018/08/21
|
[
"https://Stackoverflow.com/questions/51940568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10252435/"
] |
The value **d** needs to be converted into a string before it can be concatenated with the rest of the return statement:
```
d = str(distance * 1.6)
```
|
In Python 3.6+, an f-string saves you from typing ugly quotes and plus signs, and improves readability a lot.
```
return f"The distance {name} is {d} in kilometers."
```
You can also do formatting easily inside f-strings
```
return f"The distance {name.title()} is {d:.2f} in kilometers."
```
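Putting it together, a minimal sketch of the corrected function (Python 3.6+):
```
def conver(name, distance):
    d = distance * 1.6
    return f"The distance {name} is {d} in kilometers."

print(conver("from_NewYork_to_Washington", 20))
# The distance from_NewYork_to_Washington is 32.0 in kilometers.
```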
|
9,885,071
|
I am reading a text file which I guess is encoded in utf-8. Some lines can only be decoded as latin-1 though. I would say this is very bad practice, but nevertheless I have to cope with it.
I have the following questions:
First: how can I "guess" the encoding of a file? I have tried `enca`, but in my machine:
```
enca --list languages
belarussian: CP1251 IBM866 ISO-8859-5 KOI8-UNI maccyr IBM855 KOI8-U
bulgarian: CP1251 ISO-8859-5 IBM855 maccyr ECMA-113
czech: ISO-8859-2 CP1250 IBM852 KEYBCS2 macce KOI-8_CS_2 CORK
estonian: ISO-8859-4 CP1257 IBM775 ISO-8859-13 macce baltic
croatian: CP1250 ISO-8859-2 IBM852 macce CORK
hungarian: ISO-8859-2 CP1250 IBM852 macce CORK
lithuanian: CP1257 ISO-8859-4 IBM775 ISO-8859-13 macce baltic
latvian: CP1257 ISO-8859-4 IBM775 ISO-8859-13 macce baltic
polish: ISO-8859-2 CP1250 IBM852 macce ISO-8859-13 ISO-8859-16 baltic CORK
russian: KOI8-R CP1251 ISO-8859-5 IBM866 maccyr
slovak: CP1250 ISO-8859-2 IBM852 KEYBCS2 macce KOI-8_CS_2 CORK
slovene: ISO-8859-2 CP1250 IBM852 macce CORK
ukrainian: CP1251 IBM855 ISO-8859-5 CP1125 KOI8-U maccyr
chinese: GBK BIG5 HZ
none:
```
Which is not enough to detect latin-1. By the way: how can I increase the list of available languages for `enca`?
Second: is it possible to have a file partially encoded in different encodings? I think that, from a technical point of view, this is very much possible: just use different codecs for different parts of the file. Have you ever seen this? How would you handle reading such a file? Would you try to decode each line with a different encoding? How would you go about this? I am actually using python, and an example of handling multiple-encoding would be appreciated.
Third: how do you usually go about files for which you do not know the encoding that they are using? I encounter this problem often when downloading text files from different sources. They are usually CSV files exported from spreadsheets (I guess excel), but depending on the exporting parameters, they are using one encoding or another, without hints about which one has been chosen, so I am forced to guess.
|
2012/03/27
|
[
"https://Stackoverflow.com/questions/9885071",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/647991/"
] |
1. Guess the text encoding of a file - in what? Python, Perl, Java, C? Note that this sometimes produces unexpected results (e.g. [the "bush hid the facts" bug](http://en.wikipedia.org/wiki/Bush_hid_the_facts))
2. Sometimes a file will have text encoded in different encodings. Either the file format specifies different encodings for different parts of the file, or the format permits specifying an encoding switch. For example, in MIME the header keys are ASCII, the header values can have embedded sections encoded in other encodings, and the encoding of the content is declared in the Content-Type header.
3. If you don't know the encoding you just have to read bytes from the file (in a binary safe manner) to a buffer until you can determine the encoding.
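As one Python example of point 3, a minimal sketch using the third-party `chardet` package (the file name is hypothetical):
```
import chardet  # third-party: pip install chardet

with open('mystery.csv', 'rb') as f:  # binary-safe read
    raw = f.read()
guess = chardet.detect(raw)           # e.g. {'encoding': 'utf-8', 'confidence': 0.99}
text = raw.decode(guess['encoding'] or 'latin-1', errors='replace')
```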
|
When you say "partially encoded in different encodings", are you sure it's not just UTF-8? UTF-8 mixes single-byte, 2-byte and more-byte encodings, depending on the complexity of the character, so part of it looks like ASCII / latin-1 and part of it looks like Unicode.
<http://www.joelonsoftware.com/articles/Unicode.html>
EDIT: for guessing encodings of downloaded plain-text files, I usually open them in Chrome or Firefox. They support lots of encodings and are very good at choosing the right one. Then, it is possible to copy the content into a Unicode-encoded file from there.
|
70,463,263
|
I am not able to serialize a custom object in Python 3.
Below is the explanation and code
```
packages= []
data = {
"simplefield": "value1",
"complexfield": packages,
}
```
where `packages` is a list of custom `Library` objects.
The `Library` class is defined as below (I have also sub-classed `json.JSONEncoder`, but it is not helping):
```
class Library(json.JSONEncoder):
def __init__(self, name):
print("calling Library constructor ")
self.name = name
def default(self, object):
print("calling default method ")
if isinstance(object, Library):
return {
"name": object.name
}
else:
raise TypeError("Object is not an instance of Library")
```
Now I am calling `json.dumps(data)` but it is throwing below exception.
```
TypeError: Object of type `Library` is not JSON serializable
```
It seems `"calling default method"` is never printed, meaning the `Library.default` method is not called.
Can anybody please help here?
I have also referred to [Serializing class instance to JSON](https://stackoverflow.com/questions/10252010/serializing-class-instance-to-json) but it is not helping much
|
2021/12/23
|
[
"https://Stackoverflow.com/questions/70463263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1281182/"
] |
Inheriting `json.JSONEncoder` will not make a class serializable. You should define your encoder separately and then provide the encoder instance in the `json.dumps` call.
See an example approach below
```
import json
from typing import Any
class Library:
def __init__(self, name):
print("calling Library constructor ")
self.name = name
def __repr__(self):
# I have just created a serialized version of the object
return '{"name": "%s"}' % self.name
# Create your custom encoder
class CustomEncoder(json.JSONEncoder):
def default(self, o: Any) -> Any:
# Encode differently for different types
return str(o) if isinstance(o, Library) else super().default(o)
packages = [Library("package1"), Library("package2")]
data = {
"simplefield": "value1",
"complexfield": packages,
}
#Use your encoder while serializing
print(json.dumps(data, cls=CustomEncoder))
```
Output was like:
```
{"simplefield": "value1", "complexfield": ["{\"name\": \"package1\"}", "{\"name\": \"package2\"}"]}
```
|
You can use the `default` parameter of `json.dump`:
```
def default(obj):
if isinstance(obj, Library):
return {"name": obj.name}
raise TypeError(f"Can't serialize {obj!r}.")
json.dumps(data, default=default)
```
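For reference, with `packages = [Library("package1"), Library("package2")]` this should print something like:
```
{"simplefield": "value1", "complexfield": [{"name": "package1"}, {"name": "package2"}]}
```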
|
57,584,229
|
I tried the following python code to assign a new value to a list.
```
a,b,c=[],[],[]
def change():
return [1]
for i in [a,b,c]:
i=change()
print(a)
```
The output is `[]`, but what I need is `[1]`
|
2019/08/21
|
[
"https://Stackoverflow.com/questions/57584229",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4917205/"
] |
What you're doing here is you're re-assigning the variable `i` within the loop, but you're not actually changing `a`, `b`, or `c`.
As an example, what would you expect the following source code to output?
```py
a = []
i = a
i = [1]
print(a)
```
Here, you are reassigning `i` after you've assigned `a` to it. `a` itself is not changing when you perform the `i = [1]` operation, and thus `[]` is printed. This is the same problem as the one you're seeing in your loop.
You likely want something like this.
`a, b, c = (change() for i in (a, b, c))`
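A quick check of that one-liner:
```py
a, b, c = [], [], []
def change():
    return [1]

# unpack one fresh list per original name
a, b, c = (change() for i in (a, b, c))
print(a)  # [1]
```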
|
You created a variable called `i`; to it you assigned (in turn) `a`, then `[1]`, then `b`, then `[1]`, then `c`, then `[1]`. Then you printed `a`, unaltered. If you want to change `a`, you need to assign to `a`. That is:
```
a = [1]
```
|
57,584,229
|
I tried the following python code to assign a new value to a list.
```
a,b,c=[],[],[]
def change():
return [1]
for i in [a,b,c]:
i=change()
print(a)
```
The output is `[]`, but what I need is `[1]`
|
2019/08/21
|
[
"https://Stackoverflow.com/questions/57584229",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4917205/"
] |
What you're doing here is you're re-assigning the variable `i` within the loop, but you're not actually changing `a`, `b`, or `c`.
As an example, what would you expect the following source code to output?
```py
a = []
i = a
i = [1]
print(a)
```
Here, you are reassigning `i` after you've assigned `a` to it. `a` itself is not changing when you perform the `i = [1]` operation, and thus `[]` is printed. This is the same problem as the one you're seeing in your loop.
You likely want something like this.
`a, b, c = (change() for i in (a, b, c))`
|
Part of your problem is inside your for loop. You are rebinding `i` to a new list, but that new list is not connected to `a`, `b`, or `c`. Your for loop should look like this:
```
for i in [a, b, c]:
try:
i[0] = 1
except IndexError:
i.append(1)
```
|
57,584,229
|
I tried the following python code to assign a new value to a list.
```
a,b,c=[],[],[]
def change():
return [1]
for i in [a,b,c]:
i=change()
print(a)
```
The output is `[]`, but what I need is `[1]`
|
2019/08/21
|
[
"https://Stackoverflow.com/questions/57584229",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4917205/"
] |
What you're doing here is you're re-assigning the variable `i` within the loop, but you're not actually changing `a`, `b`, or `c`.
As an example, what would you expect the following source code to output?
```py
a = []
i = a
i = [1]
print(a)
```
Here, you are reassigning `i` after you've assigned `a` to it. `a` itself is not changing when you perform the `i = [1]` operation, and thus `[]` is printed. This is the same problem as the one you're seeing in your loop.
You likely want something like this.
`a, b, c = (change() for i in (a, b, c))`
|
Rather, you have to do it this way:
```
a,b,c=[],[],[]
def change(p):
p.append(1)
for i in [a,b,c]:
change(i)
print(a)
```
|
57,584,229
|
I tried the following python code to assign a new value to a list.
```
a,b,c=[],[],[]
def change():
return [1]
for i in [a,b,c]:
i=change()
print(a)
```
The output is `[]`, but what I need is `[1]`
|
2019/08/21
|
[
"https://Stackoverflow.com/questions/57584229",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4917205/"
] |
What you're doing here is you're re-assigning the variable `i` within the loop, but you're not actually changing `a`, `b`, or `c`.
As an example, what would you expect the following source code to output?
```py
a = []
i = a
i = [1]
print(a)
```
Here, you are reassigning `i` after you've assigned `a` to it. `a` itself is not changing when you perform the `i = [1]` operation, and thus `[]` is printed. This is the same problem as the one you're seeing in your loop.
You likely want something like this.
`a, b, c = (change() for i in (a, b, c))`
|
You are only changing the variable `i`. You are not changing the original list variables: `a, b, c`.
In the for loop, the variable `i` is bound, in turn, to a reference to `a`, `b`, `c`. It is a different variable altogether. Rebinding `i` does not change the value of `a, b, c`.
If you want to actually change the value of a, b, c, you have to mutate the lists in place, for example with the `extend` method.
```py
a,b,c=[],[],[]
def change():
    return [1]
for i in [a,b,c]:
    i.extend(change())  # extend keeps the output flat: [1] rather than [[1]]
print(a)
```
for more information, read this [stackoverflow answer on passing value by reference in python](https://stackoverflow.com/a/986145/634935)
|
57,584,229
|
I tried the following python code to assign a new value to a list.
```
a,b,c=[],[],[]
def change():
return [1]
for i in [a,b,c]:
i=change()
print(a)
```
The output is `[]`, but what I need is `[1]`
|
2019/08/21
|
[
"https://Stackoverflow.com/questions/57584229",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4917205/"
] |
What you're doing here is you're re-assigning the variable `i` within the loop, but you're not actually changing `a`, `b`, or `c`.
As an example, what would you expect the following source code to output?
```py
a = []
i = a
i = [1]
print(a)
```
Here, you are reassigning `i` after you've assigned `a` to it. `a` itself is not changing when you perform the `i = [1]` operation, and thus `[]` is printed. This is the same problem as the one you're seeing in your loop.
You likely want something like this.
`a, b, c = (change() for i in (a, b, c))`
|
Please refer to [this answer](https://stackoverflow.com/a/22055029/5974685) for changing external list inside function.
```
In [45]: a,b,c=[],[],[]
...: def change():
...: return [1]
...: for i in [a,b,c]:
...: i[:]=change()
...: print(a)
...: print(b)
[1]
[1]
```
|
57,584,229
|
I tried the following python code to assign a new value to a list.
```
a,b,c=[],[],[]
def change():
return [1]
for i in [a,b,c]:
i=change()
print(a)
```
The output is `[]`, but what I need is `[1]`
|
2019/08/21
|
[
"https://Stackoverflow.com/questions/57584229",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4917205/"
] |
Please refer to [this answer](https://stackoverflow.com/a/22055029/5974685) for changing external list inside function.
```
In [45]: a,b,c=[],[],[]
...: def change():
...: return [1]
...: for i in [a,b,c]:
...: i[:]=change()
...: print(a)
...: print(b)
[1]
[1]
```
|
You created a variable called `i`; to it you assigned (in turn) `a`, then `[1]`, then `b`, then `[1]`, then `c`, then `[1]`. Then you printed `a`, unaltered. If you want to change `a`, you need to assign to `a`. That is:
```
a = [1]
```
|
57,584,229
|
I tried the following python code to assign a new value to a list.
```
a,b,c=[],[],[]
def change():
return [1]
for i in [a,b,c]:
i=change()
print(a)
```
The output is `[]`, but what I need is `[1]`
|
2019/08/21
|
[
"https://Stackoverflow.com/questions/57584229",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4917205/"
] |
Please refer to [this answer](https://stackoverflow.com/a/22055029/5974685) for changing external list inside function.
```
In [45]: a,b,c=[],[],[]
...: def change():
...: return [1]
...: for i in [a,b,c]:
...: i[:]=change()
...: print(a)
...: print(b)
[1]
[1]
```
|
Part of your problem is inside your for loop. You are rebinding `i` to a new list, but that new list is not connected to `a`, `b`, or `c`. Your for loop should look like this:
```
for i in [a, b, c]:
try:
i[0] = 1
except IndexError:
i.append(1)
```
|
57,584,229
|
I tried the following python code to assign a new value to a list.
```
a,b,c=[],[],[]
def change():
return [1]
for i in [a,b,c]:
i=change()
print(a)
```
The output is `[]`, but what I need is `[1]`
|
2019/08/21
|
[
"https://Stackoverflow.com/questions/57584229",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4917205/"
] |
Please refer to [this answer](https://stackoverflow.com/a/22055029/5974685) for changing external list inside function.
```
In [45]: a,b,c=[],[],[]
...: def change():
...: return [1]
...: for i in [a,b,c]:
...: i[:]=change()
...: print(a)
...: print(b)
[1]
[1]
```
|
Rather, you have to do it this way:
```
a,b,c=[],[],[]
def change(p):
p.append(1)
for i in [a,b,c]:
change(i)
print(a)
```
|
57,584,229
|
I tried the following python code to assign a new value to a list.
```
a,b,c=[],[],[]
def change():
return [1]
for i in [a,b,c]:
i=change()
print(a)
```
The output is `[]`, but what I need is `[1]`
|
2019/08/21
|
[
"https://Stackoverflow.com/questions/57584229",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4917205/"
] |
Please refer to [this answer](https://stackoverflow.com/a/22055029/5974685) for changing external list inside function.
```
In [45]: a,b,c=[],[],[]
...: def change():
...: return [1]
...: for i in [a,b,c]:
...: i[:]=change()
...: print(a)
...: print(b)
[1]
[1]
```
|
You are only changing the variable `i`. You are not changing the original list variables: `a, b, c`.
In the for loop, the variable `i` is bound, in turn, to a reference to `a`, `b`, `c`. It is a different variable altogether. Rebinding `i` does not change the value of `a, b, c`.
If you want to actually change the value of a, b, c, you have to mutate the lists in place, for example with the `extend` method.
```py
a,b,c=[],[],[]
def change():
    return [1]
for i in [a,b,c]:
    i.extend(change())  # extend keeps the output flat: [1] rather than [[1]]
print(a)
```
for more information, read this [stackoverflow answer on passing value by reference in python](https://stackoverflow.com/a/986145/634935)
|
38,193,878
|
I am trying to automate my project creation process and would like, as part of it, to create a new jupyter notebook and populate it with some cells and content that I usually have in every notebook (e.g., imports, titles, etc.)
Is it possible to do this via python?
|
2016/07/05
|
[
"https://Stackoverflow.com/questions/38193878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5983171/"
] |
You can do it using **nbformat**. Below is an example taken from [Creating an IPython Notebook programatically](https://gist.github.com/fperez/9716279):
```py
import nbformat as nbf
nb = nbf.v4.new_notebook()
text = """\
# My first automatic Jupyter Notebook
This is an auto-generated notebook."""
code = """\
%pylab inline
hist(normal(size=2000), bins=50);"""
nb['cells'] = [nbf.v4.new_markdown_cell(text),
nbf.v4.new_code_cell(code)]
fname = 'test.ipynb'
with open(fname, 'w') as f:
nbf.write(nb, f)
```
|
This is absolutely possible. Notebooks are just json files. This
[notebook](https://gist.github.com/anonymous/1c79fa2d43f656b4463d7ce7e2d1a03b "notebook") for example is just:
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Header 1"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2016-09-16T16:28:53.333738",
"start_time": "2016-09-16T16:28:53.330843"
},
"collapsed": false
},
"outputs": [],
"source": [
"def foo(bar):\n",
" # Standard functions I want to define.\n",
" pass"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Header 2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.10"
},
"toc": {
"toc_cell": false,
"toc_number_sections": true,
"toc_threshold": 6,
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 0
}
```
While messy, it's just a list of cell objects. I would probably create my template in an actual notebook and save it, rather than trying to generate the initial template by hand. If you want to add titles or other variables programmatically, you could always copy the raw notebook text from the \*.ipynb file into a Python file and insert values using string formatting.
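Following that suggestion, a minimal sketch (the file names are hypothetical):
```
import json

# load a previously saved template notebook and customize its first (title) cell
with open('template.ipynb') as f:
    nb = json.load(f)

nb['cells'][0]['source'] = ['# Project: my-new-project']

with open('my-new-project.ipynb', 'w') as f:
    json.dump(nb, f, indent=1)
```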
|
55,411,549
|
**EDIT:** I am trying to manipulate JSON files in Python. In my data some polygons have multiple related information: coordinates (`LineString`) and **area percent** and **area** (`Text` and `Area` in `Point`), I want to combine them to a single JSON object. As an example, the data from files are as follows:
```
data = {
"type": "FeatureCollection",
"name": "entities",
"features": [{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F1"
},
"geometry": {
"type": "LineString",
"coordinates": [
[61.971069681118479, 36.504485105673659],
[46.471068755199667, 36.504485105673659],
[46.471068755199667, 35.954489281866685],
[44.371068755199758, 35.954489281866685],
[44.371068755199758, 36.10448936390457],
[43.371069617387093, 36.104489150107824],
[43.371069617387093, 23.904496401184584],
[48.172716774891342, 23.904496401184584],
[48.171892994728751, 17.404489374370311],
[61.17106949647404, 17.404489281863786],
[61.17106949647404, 19.404489281863786],
[61.971069689453991, 19.404489282256687],
[61.971069681118479, 36.504485105673659]
]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbMText",
"EntityHandle": "2F1",
"Text": "6%"
},
"geometry": {
"type": "Point",
"coordinates": [49.745686139884583, 28.11445704760262, 0.0]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbMText",
"EntityHandle": "2F1",
"Area": "100"
},
"geometry": {
"type": "Point",
"coordinates": [50.216857362443989, 63.981197759829229, 0.0]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F7"
},
"geometry": {
"type": "LineString",
"coordinates": [
[62.37106968111857, 36.504489398648715],
[62.371069689452725, 19.404489281863786],
[63.171069496474047, 19.404489281863786],
[63.171069496474047, 17.404489281863786],
[77.921070051947027, 17.404489281863786],
[77.921070051947027, 19.504489281855054],
[78.671070051947027, 19.504489281855054],
[78.671070051897914, 36.504485105717322],
[62.37106968111857, 36.504489398648715]
]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbMText",
"EntityHandle": "2F7",
"Text": "5.8%"
},
"geometry": {
"type": "Point",
"coordinates": [67.27548061311245, 28.11445704760262, 0.0]
}
}
]
}
```
I want to combine `Point`'s `Text` and `Area` key and values to `LineString` based on `EntityHandle`'s values, and also delete `Point` lines. The expected output is:
```
{
"type": "FeatureCollection",
"name": "entities",
"features": [{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F1",
"Text": "6%",
"Area": "100"
},
"geometry": {
"type": "LineString",
"coordinates": [
[61.971069681118479, 36.504485105673659],
[46.471068755199667, 36.504485105673659],
[46.471068755199667, 35.954489281866685],
[44.371068755199758, 35.954489281866685],
[44.371068755199758, 36.10448936390457],
[43.371069617387093, 36.104489150107824],
[43.371069617387093, 23.904496401184584],
[48.172716774891342, 23.904496401184584],
[48.171892994728751, 17.404489374370311],
[61.17106949647404, 17.404489281863786],
[61.17106949647404, 19.404489281863786],
[61.971069689453991, 19.404489282256687],
[61.971069681118479, 36.504485105673659]
]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F7",
"Text": "5.8%"
},
"geometry": {
"type": "LineString",
"coordinates": [
[62.37106968111857, 36.504489398648715],
[62.371069689452725, 19.404489281863786],
[63.171069496474047, 19.404489281863786],
[63.171069496474047, 17.404489281863786],
[77.921070051947027, 17.404489281863786],
[77.921070051947027, 19.504489281855054],
[78.671070051947027, 19.504489281855054],
[78.671070051897914, 36.504485105717322],
[62.37106968111857, 36.504489398648715]
]
}
}
]
}
```
Is it possible to get the result above in Python? Thanks.
**Updated solution**, thanks to @dodopy:
```
import json
features = data["features"]
point_handle_text = {
i["properties"]["EntityHandle"]: i["properties"]["Text"]
for i in features
if i["geometry"]["type"] == "Point"
}
point_handle_area = {
i["properties"]["EntityHandle"]: i["properties"]["Area"]
for i in features
if i["geometry"]["type"] == "Point"
}
combine_features = []
for i in features:
if i["geometry"]["type"] == "LineString":
i["properties"]["Text"] = point_handle_text.get(i["properties"]["EntityHandle"])
combine_features.append(i)
data["features"] = combine_features
combine_features = []
for i in features:
if i["geometry"]["type"] == "LineString":
i["properties"]["Area"] = point_handle_area.get(i["properties"]["EntityHandle"])
combine_features.append(i)
data["features"] = combine_features
with open('test.geojson', 'w+') as f:
json.dump(data, f, indent=2)
```
But I get an error:
```
Traceback (most recent call last):
File "<ipython-input-131-d132c8854a9c>", line 6, in <module>
for i in features
File "<ipython-input-131-d132c8854a9c>", line 7, in <dictcomp>
if i["geometry"]["type"] == "Point"
KeyError: 'Text'
```
|
2019/03/29
|
[
"https://Stackoverflow.com/questions/55411549",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8410477/"
] |
Can you try the following:
```
>>> df = pd.DataFrame([[1, 2, 4], [4, 5, 6], [7, '[8', 9]])
>>> df = df.astype('str')
>>> df
0 1 2
0 1 2 4
1 4 5 6
2 7 [8 9
>>> df[df.columns[[df[i].str.contains('[', regex=False).any() for i in df.columns]]]
1
0 2
1 5
2 [8
>>>
```
|
Use this:
```
s=df.applymap(lambda x: '[' in x).any()
print(df[s[s].index])
```
Output:
```
col2 col4
0 b [d
1 [f h
2 j l
3    n  [p
```
|
55,411,549
|
**EDIT:** I am trying to manipulate JSON files in Python. In my data some polygons have multiple related information: coordinates (`LineString`) and **area percent** and **area** (`Text` and `Area` in `Point`), I want to combine them to a single JSON object. As an example, the data from files are as follows:
```
data = {
"type": "FeatureCollection",
"name": "entities",
"features": [{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F1"
},
"geometry": {
"type": "LineString",
"coordinates": [
[61.971069681118479, 36.504485105673659],
[46.471068755199667, 36.504485105673659],
[46.471068755199667, 35.954489281866685],
[44.371068755199758, 35.954489281866685],
[44.371068755199758, 36.10448936390457],
[43.371069617387093, 36.104489150107824],
[43.371069617387093, 23.904496401184584],
[48.172716774891342, 23.904496401184584],
[48.171892994728751, 17.404489374370311],
[61.17106949647404, 17.404489281863786],
[61.17106949647404, 19.404489281863786],
[61.971069689453991, 19.404489282256687],
[61.971069681118479, 36.504485105673659]
]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbMText",
"EntityHandle": "2F1",
"Text": "6%"
},
"geometry": {
"type": "Point",
"coordinates": [49.745686139884583, 28.11445704760262, 0.0]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbMText",
"EntityHandle": "2F1",
"Area": "100"
},
"geometry": {
"type": "Point",
"coordinates": [50.216857362443989, 63.981197759829229, 0.0]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F7"
},
"geometry": {
"type": "LineString",
"coordinates": [
[62.37106968111857, 36.504489398648715],
[62.371069689452725, 19.404489281863786],
[63.171069496474047, 19.404489281863786],
[63.171069496474047, 17.404489281863786],
[77.921070051947027, 17.404489281863786],
[77.921070051947027, 19.504489281855054],
[78.671070051947027, 19.504489281855054],
[78.671070051897914, 36.504485105717322],
[62.37106968111857, 36.504489398648715]
]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbMText",
"EntityHandle": "2F7",
"Text": "5.8%"
},
"geometry": {
"type": "Point",
"coordinates": [67.27548061311245, 28.11445704760262, 0.0]
}
}
]
}
```
I want to combine `Point`'s `Text` and `Area` key and values to `LineString` based on `EntityHandle`'s values, and also delete `Point` lines. The expected output is:
```
{
"type": "FeatureCollection",
"name": "entities",
"features": [{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F1",
"Text": "6%",
"Area": "100"
},
"geometry": {
"type": "LineString",
"coordinates": [
[61.971069681118479, 36.504485105673659],
[46.471068755199667, 36.504485105673659],
[46.471068755199667, 35.954489281866685],
[44.371068755199758, 35.954489281866685],
[44.371068755199758, 36.10448936390457],
[43.371069617387093, 36.104489150107824],
[43.371069617387093, 23.904496401184584],
[48.172716774891342, 23.904496401184584],
[48.171892994728751, 17.404489374370311],
[61.17106949647404, 17.404489281863786],
[61.17106949647404, 19.404489281863786],
[61.971069689453991, 19.404489282256687],
[61.971069681118479, 36.504485105673659]
]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F7",
"Text": "5.8%"
},
"geometry": {
"type": "LineString",
"coordinates": [
[62.37106968111857, 36.504489398648715],
[62.371069689452725, 19.404489281863786],
[63.171069496474047, 19.404489281863786],
[63.171069496474047, 17.404489281863786],
[77.921070051947027, 17.404489281863786],
[77.921070051947027, 19.504489281855054],
[78.671070051947027, 19.504489281855054],
[78.671070051897914, 36.504485105717322],
[62.37106968111857, 36.504489398648715]
]
}
}
]
}
```
Is it possible to get the result above in Python? Thanks.
**Updated solution**, thanks to @dodopy:
```
import json
features = data["features"]
point_handle_text = {
i["properties"]["EntityHandle"]: i["properties"]["Text"]
for i in features
if i["geometry"]["type"] == "Point"
}
point_handle_area = {
i["properties"]["EntityHandle"]: i["properties"]["Area"]
for i in features
if i["geometry"]["type"] == "Point"
}
combine_features = []
for i in features:
if i["geometry"]["type"] == "LineString":
i["properties"]["Text"] = point_handle_text.get(i["properties"]["EntityHandle"])
combine_features.append(i)
data["features"] = combine_features
combine_features = []
for i in features:
if i["geometry"]["type"] == "LineString":
i["properties"]["Area"] = point_handle_area.get(i["properties"]["EntityHandle"])
combine_features.append(i)
data["features"] = combine_features
with open('test.geojson', 'w+') as f:
json.dump(data, f, indent=2)
```
But I get an error:
```
Traceback (most recent call last):
File "<ipython-input-131-d132c8854a9c>", line 6, in <module>
for i in features
File "<ipython-input-131-d132c8854a9c>", line 7, in <dictcomp>
if i["geometry"]["type"] == "Point"
KeyError: 'Text'
```
|
2019/03/29
|
[
"https://Stackoverflow.com/questions/55411549",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8410477/"
] |
Can you try the following:
```
>>> df = pd.DataFrame([[1, 2, 4], [4, 5, 6], [7, '[8', 9]])
>>> df = df.astype('str')
>>> df
0 1 2
0 1 2 4
1 4 5 6
2 7 [8 9
>>> df[df.columns[[df[i].str.contains('[', regex=False).any() for i in df.columns]]]
1
0 2
1 5
2 [8
>>>
```
|
**Please try this, hope this works for you**
```
df = pd.DataFrame([['a', 'b', 'c','[d'], ['e','[f','g','h'],['i','j','k','l'],['m','n','o','[p']],columns=['col1','col2','col3','col4'])
cols = []
for col in df.columns:
if df[col].str.contains('[', regex=False).any():
cols.append(col)
df[cols]
```
**Output**
```
col2 col4
0 b [d
1 [f h
2 j l
3 n [p
```
|
55,411,549
|
**EDIT:** I am trying to manipulate JSON files in Python. In my data some polygons have multiple related information: coordinates (`LineString`) and **area percent** and **area** (`Text` and `Area` in `Point`), I want to combine them to a single JSON object. As an example, the data from files are as follows:
```
data = {
"type": "FeatureCollection",
"name": "entities",
"features": [{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F1"
},
"geometry": {
"type": "LineString",
"coordinates": [
[61.971069681118479, 36.504485105673659],
[46.471068755199667, 36.504485105673659],
[46.471068755199667, 35.954489281866685],
[44.371068755199758, 35.954489281866685],
[44.371068755199758, 36.10448936390457],
[43.371069617387093, 36.104489150107824],
[43.371069617387093, 23.904496401184584],
[48.172716774891342, 23.904496401184584],
[48.171892994728751, 17.404489374370311],
[61.17106949647404, 17.404489281863786],
[61.17106949647404, 19.404489281863786],
[61.971069689453991, 19.404489282256687],
[61.971069681118479, 36.504485105673659]
]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbMText",
"EntityHandle": "2F1",
"Text": "6%"
},
"geometry": {
"type": "Point",
"coordinates": [49.745686139884583, 28.11445704760262, 0.0]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbMText",
"EntityHandle": "2F1",
"Area": "100"
},
"geometry": {
"type": "Point",
"coordinates": [50.216857362443989, 63.981197759829229, 0.0]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F7"
},
"geometry": {
"type": "LineString",
"coordinates": [
[62.37106968111857, 36.504489398648715],
[62.371069689452725, 19.404489281863786],
[63.171069496474047, 19.404489281863786],
[63.171069496474047, 17.404489281863786],
[77.921070051947027, 17.404489281863786],
[77.921070051947027, 19.504489281855054],
[78.671070051947027, 19.504489281855054],
[78.671070051897914, 36.504485105717322],
[62.37106968111857, 36.504489398648715]
]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbMText",
"EntityHandle": "2F7",
"Text": "5.8%"
},
"geometry": {
"type": "Point",
"coordinates": [67.27548061311245, 28.11445704760262, 0.0]
}
}
]
}
```
I want to combine `Point`'s `Text` and `Area` key and values to `LineString` based on `EntityHandle`'s values, and also delete `Point` lines. The expected output is:
```
{
"type": "FeatureCollection",
"name": "entities",
"features": [{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F1",
"Text": "6%",
"Area": "100"
},
"geometry": {
"type": "LineString",
"coordinates": [
[61.971069681118479, 36.504485105673659],
[46.471068755199667, 36.504485105673659],
[46.471068755199667, 35.954489281866685],
[44.371068755199758, 35.954489281866685],
[44.371068755199758, 36.10448936390457],
[43.371069617387093, 36.104489150107824],
[43.371069617387093, 23.904496401184584],
[48.172716774891342, 23.904496401184584],
[48.171892994728751, 17.404489374370311],
[61.17106949647404, 17.404489281863786],
[61.17106949647404, 19.404489281863786],
[61.971069689453991, 19.404489282256687],
[61.971069681118479, 36.504485105673659]
]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F7",
"Text": "5.8%"
},
"geometry": {
"type": "LineString",
"coordinates": [
[62.37106968111857, 36.504489398648715],
[62.371069689452725, 19.404489281863786],
[63.171069496474047, 19.404489281863786],
[63.171069496474047, 17.404489281863786],
[77.921070051947027, 17.404489281863786],
[77.921070051947027, 19.504489281855054],
[78.671070051947027, 19.504489281855054],
[78.671070051897914, 36.504485105717322],
[62.37106968111857, 36.504489398648715]
]
}
}
]
}
```
Is it possible to get the result above in Python? Thanks.
**Updated solution**, thanks to @dodopy:
```
import json
features = data["features"]
point_handle_text = {
i["properties"]["EntityHandle"]: i["properties"]["Text"]
for i in features
if i["geometry"]["type"] == "Point"
}
point_handle_area = {
i["properties"]["EntityHandle"]: i["properties"]["Area"]
for i in features
if i["geometry"]["type"] == "Point"
}
combine_features = []
for i in features:
if i["geometry"]["type"] == "LineString":
i["properties"]["Text"] = point_handle_text.get(i["properties"]["EntityHandle"])
combine_features.append(i)
data["features"] = combine_features
combine_features = []
for i in features:
if i["geometry"]["type"] == "LineString":
i["properties"]["Area"] = point_handle_area.get(i["properties"]["EntityHandle"])
combine_features.append(i)
data["features"] = combine_features
with open('test.geojson', 'w+') as f:
json.dump(data, f, indent=2)
```
But I get an error:
```
Traceback (most recent call last):
File "<ipython-input-131-d132c8854a9c>", line 6, in <module>
for i in features
File "<ipython-input-131-d132c8854a9c>", line 7, in <dictcomp>
if i["geometry"]["type"] == "Point"
KeyError: 'Text'
```
|
2019/03/29
|
[
"https://Stackoverflow.com/questions/55411549",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8410477/"
] |
>
> I want to load only the columns that contain a value that starts with right bracket [
>
>
>
For this you need
[`series.str.startswith()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.startswith.html):
```
df.loc[:,df.apply(lambda x: x.str.startswith('[')).any()]
```
---
```
col2 col4
0 b [d
1 [f h
2 j l
3 n [p
```
Note that there is a difference between startswith and contains. The docs are explanatory.
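A quick illustration of that difference:
```
>>> import pandas as pd
>>> s = pd.Series(['[d', 'a[d'])
>>> s.str.startswith('[')
0     True
1    False
dtype: bool
>>> s.str.contains('[', regex=False)
0    True
1    True
dtype: bool
```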
|
Use this:
```
s=df.applymap(lambda x: '[' in x).any()
print(df[s[s].index])
```
Output:
```
col2 col4
0 b [d
1 [f h
2 j l
3    n  [p
```
|
55,411,549
|
**EDIT:** I am trying to manipulate JSON files in Python. In my data some polygons have multiple related information: coordinates (`LineString`) and **area percent** and **area** (`Text` and `Area` in `Point`), I want to combine them to a single JSON object. As an example, the data from files are as follows:
```
data = {
"type": "FeatureCollection",
"name": "entities",
"features": [{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F1"
},
"geometry": {
"type": "LineString",
"coordinates": [
[61.971069681118479, 36.504485105673659],
[46.471068755199667, 36.504485105673659],
[46.471068755199667, 35.954489281866685],
[44.371068755199758, 35.954489281866685],
[44.371068755199758, 36.10448936390457],
[43.371069617387093, 36.104489150107824],
[43.371069617387093, 23.904496401184584],
[48.172716774891342, 23.904496401184584],
[48.171892994728751, 17.404489374370311],
[61.17106949647404, 17.404489281863786],
[61.17106949647404, 19.404489281863786],
[61.971069689453991, 19.404489282256687],
[61.971069681118479, 36.504485105673659]
]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbMText",
"EntityHandle": "2F1",
"Text": "6%"
},
"geometry": {
"type": "Point",
"coordinates": [49.745686139884583, 28.11445704760262, 0.0]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbMText",
"EntityHandle": "2F1",
"Area": "100"
},
"geometry": {
"type": "Point",
"coordinates": [50.216857362443989, 63.981197759829229, 0.0]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F7"
},
"geometry": {
"type": "LineString",
"coordinates": [
[62.37106968111857, 36.504489398648715],
[62.371069689452725, 19.404489281863786],
[63.171069496474047, 19.404489281863786],
[63.171069496474047, 17.404489281863786],
[77.921070051947027, 17.404489281863786],
[77.921070051947027, 19.504489281855054],
[78.671070051947027, 19.504489281855054],
[78.671070051897914, 36.504485105717322],
[62.37106968111857, 36.504489398648715]
]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbMText",
"EntityHandle": "2F7",
"Text": "5.8%"
},
"geometry": {
"type": "Point",
"coordinates": [67.27548061311245, 28.11445704760262, 0.0]
}
}
]
}
```
I want to combine `Point`'s `Text` and `Area` key and values to `LineString` based on `EntityHandle`'s values, and also delete `Point` lines. The expected output is:
```
{
"type": "FeatureCollection",
"name": "entities",
"features": [{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F1",
"Text": "6%",
"Area": "100"
},
"geometry": {
"type": "LineString",
"coordinates": [
[61.971069681118479, 36.504485105673659],
[46.471068755199667, 36.504485105673659],
[46.471068755199667, 35.954489281866685],
[44.371068755199758, 35.954489281866685],
[44.371068755199758, 36.10448936390457],
[43.371069617387093, 36.104489150107824],
[43.371069617387093, 23.904496401184584],
[48.172716774891342, 23.904496401184584],
[48.171892994728751, 17.404489374370311],
[61.17106949647404, 17.404489281863786],
[61.17106949647404, 19.404489281863786],
[61.971069689453991, 19.404489282256687],
[61.971069681118479, 36.504485105673659]
]
}
},
{
"type": "Feature",
"properties": {
"Layer": "0",
"SubClasses": "AcDbEntity:AcDbBlockReference",
"EntityHandle": "2F7",
"Text": "5.8%"
},
"geometry": {
"type": "LineString",
"coordinates": [
[62.37106968111857, 36.504489398648715],
[62.371069689452725, 19.404489281863786],
[63.171069496474047, 19.404489281863786],
[63.171069496474047, 17.404489281863786],
[77.921070051947027, 17.404489281863786],
[77.921070051947027, 19.504489281855054],
[78.671070051947027, 19.504489281855054],
[78.671070051897914, 36.504485105717322],
[62.37106968111857, 36.504489398648715]
]
}
}
]
}
```
Is it possible to get the result above in Python? Thanks.
**Updated solution**, thanks to @dodopy:
```
import json
features = data["features"]
point_handle_text = {
i["properties"]["EntityHandle"]: i["properties"]["Text"]
for i in features
if i["geometry"]["type"] == "Point"
}
point_handle_area = {
i["properties"]["EntityHandle"]: i["properties"]["Area"]
for i in features
if i["geometry"]["type"] == "Point"
}
combine_features = []
for i in features:
if i["geometry"]["type"] == "LineString":
i["properties"]["Text"] = point_handle_text.get(i["properties"]["EntityHandle"])
combine_features.append(i)
data["features"] = combine_features
combine_features = []
for i in features:
if i["geometry"]["type"] == "LineString":
i["properties"]["Area"] = point_handle_area.get(i["properties"]["EntityHandle"])
combine_features.append(i)
data["features"] = combine_features
with open('test.geojson', 'w+') as f:
json.dump(data, f, indent=2)
```
But I get an error:
```
Traceback (most recent call last):
File "<ipython-input-131-d132c8854a9c>", line 6, in <module>
for i in features
File "<ipython-input-131-d132c8854a9c>", line 7, in <dictcomp>
if i["geometry"]["type"] == "Point"
KeyError: 'Text'
```
|
2019/03/29
|
[
"https://Stackoverflow.com/questions/55411549",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8410477/"
] |
>
> I want to load only the columns that contain a value that starts with right bracket [
>
>
>
For this you need
[`series.str.startswith()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.startswith.html):
```
df.loc[:,df.apply(lambda x: x.str.startswith('[')).any()]
```
---
```
col2 col4
0 b [d
1 [f h
2 j l
3 n [p
```
Note that there is a difference between startswith and contains. The docs are explanatory.
|
**Please try this, hope this works for you**
```
df = pd.DataFrame([['a', 'b', 'c','[d'], ['e','[f','g','h'],['i','j','k','l'],['m','n','o','[p']],columns=['col1','col2','col3','col4'])
cols = []
for col in df.columns:
if df[col].str.contains('[', regex=False).any():
cols.append(col)
df[cols]
```
**Output**
```
col2 col4
0 b [d
1 [f h
2 j l
3 n [p
```
|
62,461,130
|
I am trying to update a Lex bot using the Python SDK `put_bot` function. I need to pass a checksum value to the function, as specified [here](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lex-models.html#LexModelBuildingService.Client.put_bot). How do I get this value?
So far I have tried the functions below and the checksum values returned from them:
1. The get\_bot with the prod alias
2. The get\_bot\_alias function
3. The get\_bot\_aliases function
Is the checksum value returned by any of these the one I need?
|
2020/06/18
|
[
"https://Stackoverflow.com/questions/62461130",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13765073/"
] |
The issue here seems to be that in the first example you got one `div` with multiple components in it:
```
<div id="controls">
<NavControl />
<NavControl />
<NavControl />
</div>
```
The problem is that in the second example you are creating multiple `div` elements with the same id and each of them have nested component inside of it.
Here, in a loop, you create multiple `div` elements with `id="controls"`
```
<div id="controls" v-bind:key="control.id" v-for="control in controls">
<NavControl v-bind:control="control" />
</div>
```
It ends up being something similar to this:
```
<div id="controls">
<NavControl />
</div>
<div id="controls">
<NavControl />
</div>
<div id="controls">
<NavControl />
</div>
```
You would probably see it better if you inspected your code in the browser tools.
*As a side note: please, keep in mind that if you for whatever reason wanted to do something like this then you would use `class` instead of `id`.*
**Solution:**
What you would want to do instead is to create multiple components inside your `<div id="controls"></div>`.
To do that you would place `v-for` inside the component and not the outer `div`.
```
<NavControl v-for="control in controls" v-bind:control="control" :key="control.id"/>
```
|
I am unsure of the actual desired scope, so I will go over both.
If we want multiple `controls` in the template, as laid out, switch `id="controls"` to `class="controls"`:
```
<template>
<div id="navControlPanel">
<div class="controls" v-bind:key="control.id" v-for="control in controls">
<NavControl v-bind:control="control" />
</div>
</div>
</template>
...
.controls {
/* my styles */
}
```
If the `v-for` is out of scope, and we only want this run on those `NavControl` items:
```
<template>
<div id="navControlPanel">
<div class="controls">
<NavControl
v-for="control in controls"
/>
</div>
</div>
</template>
```
|
24,245,324
|
Something about the `id` of objects of type `str` (in Python 2.7) puzzles me. The `str` type is immutable, so I would expect that once it is created, it will always have the same `id`. I'm not sure I can phrase this well, so instead I'll post an example input and output sequence.
```
>>> id('so')
140614155123888
>>> id('so')
140614155123848
>>> id('so')
140614155123808
```
So in the meantime, it changes all the time. However, after a variable points at that string, things change:
```
>>> so = 'so'
>>> id('so')
140614155123728
>>> so = 'so'
>>> id(so)
140614155123728
>>> not_so = 'so'
>>> id(not_so)
140614155123728
```
So it looks like it freezes the id once a variable holds that value. Indeed, after `del so` and `del not_so`, the output of `id('so')` starts changing again.
This is **not** the same behaviour as with (small) integers.
I know there is no real connection between immutability and having the same `id`; still, I am trying to figure out the source of this behaviour. I believe that someone who's familiar with Python's internals would be less surprised than me, so I am trying to reach the same point...
Update
======
Trying the same with a different string gave different results...
```
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
```
Now it **is** equal...
|
2014/06/16
|
[
"https://Stackoverflow.com/questions/24245324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1565828/"
] |
This behavior is specific to the Python interactive shell. If I put the following in a .py file:
```
print id('so')
print id('so')
print id('so')
```
and execute it, I receive the following output:
```
2888960
2888960
2888960
```
In CPython, a string literal is treated as a constant, which we can see in the bytecode of the snippet above:
```
2 0 LOAD_GLOBAL 0 (id)
3 LOAD_CONST 1 ('so')
6 CALL_FUNCTION 1
9 PRINT_ITEM
10 PRINT_NEWLINE
3 11 LOAD_GLOBAL 0 (id)
14 LOAD_CONST 1 ('so')
17 CALL_FUNCTION 1
20 PRINT_ITEM
21 PRINT_NEWLINE
4 22 LOAD_GLOBAL 0 (id)
25 LOAD_CONST 1 ('so')
28 CALL_FUNCTION 1
31 PRINT_ITEM
32 PRINT_NEWLINE
33 LOAD_CONST 0 (None)
36 RETURN_VALUE
```
The *same* constant (i.e. the same string object) is loaded 3 times, so the IDs are the same.
|
In your first example a new instance of the string `'so'` is created each time, hence a different id.
In the second example you are binding the string to a variable and Python can then maintain a shared copy of the string.
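For what it's worth, a quick sketch that makes the sharing explicit with interning (the builtin `intern()` in Python 2.7; `sys.intern()` in Python 3):
```
>>> a = ''.join(['s', 'o'])   # built at runtime, not interned
>>> b = 'so'
>>> a is b
False
>>> intern(a) is intern(b)    # Python 2.7; use sys.intern in Python 3
True
```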
|
24,245,324
|
Something about the `id` of objects of type `str` (in Python 2.7) puzzles me. The `str` type is immutable, so I would expect that once it is created, it will always have the same `id`. I'm not sure I can phrase this well, so instead I'll post an example input and output sequence.
```
>>> id('so')
140614155123888
>>> id('so')
140614155123848
>>> id('so')
140614155123808
```
So in the meantime, it changes all the time. However, after a variable points at that string, things change:
```
>>> so = 'so'
>>> id('so')
140614155123728
>>> so = 'so'
>>> id(so)
140614155123728
>>> not_so = 'so'
>>> id(not_so)
140614155123728
```
So it looks like it freezes the id once a variable holds that value. Indeed, after `del so` and `del not_so`, the output of `id('so')` starts changing again.
This is **not** the same behaviour as with (small) integers.
I know there is no real connection between immutability and having the same `id`; still, I am trying to figure out the source of this behaviour. I believe that someone who's familiar with Python's internals would be less surprised than me, so I am trying to reach the same point...
Update
======
Trying the same with a different string gave different results...
```
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
```
Now it **is** equal...
|
2014/06/16
|
[
"https://Stackoverflow.com/questions/24245324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1565828/"
] |
In your first example a new instance of the string `'so'` is created each time, hence a different id.
In the second example you are binding the string to a variable and Python can then maintain a shared copy of the string.
|
So while Python is not guaranteed to intern strings, it will frequently reuse the same string, and `is` may mislead. It's important to know that you shouldn't check `id` or `is` for equality of strings.
To demonstrate this, one way I've discovered to force a new string in Python 2.6 at least:
```
>>> so = 'so'
>>> new_so = '{0}'.format(so)
>>> so is new_so
False
```
and here's a bit more Python exploration:
```
>>> id(so)
102596064
>>> id(new_so)
259679968
>>> so == new_so
True
```
|
24,245,324
|
Something about the `id` of objects of type `str` (in Python 2.7) puzzles me. The `str` type is immutable, so I would expect that once it is created, it will always have the same `id`. I'm not sure I can phrase this well, so instead I'll post an example input and output sequence.
```
>>> id('so')
140614155123888
>>> id('so')
140614155123848
>>> id('so')
140614155123808
```
So in the meantime, it changes all the time. However, after a variable points at that string, things change:
```
>>> so = 'so'
>>> id('so')
140614155123728
>>> so = 'so'
>>> id(so)
140614155123728
>>> not_so = 'so'
>>> id(not_so)
140614155123728
```
So it looks like it freezes the id once a variable holds that value. Indeed, after `del so` and `del not_so`, the output of `id('so')` starts changing again.
This is **not** the same behaviour as with (small) integers.
I know there is no real connection between immutability and having the same `id`; still, I am trying to figure out the source of this behaviour. I believe that someone who's familiar with Python's internals would be less surprised than me, so I am trying to reach the same point...
Update
======
Trying the same with a different string gave different results...
```
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
```
Now it **is** equal...
|
2014/06/16
|
[
"https://Stackoverflow.com/questions/24245324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1565828/"
] |
CPython does not promise to intern *all* strings by default, but in practice, a lot of places in the Python codebase do reuse already-created string objects. A lot of Python internals use (the C-equivalent of) the [`sys.intern()` function call](https://docs.python.org/3/library/sys.html#sys.intern) to explicitly intern Python strings, but unless you hit one of those special cases, two identical Python string literals will produce different strings.
Python is also free to *reuse* memory locations, and Python will also optimize immutable *literals* by storing them once, at compile time, with the bytecode in code objects. The Python REPL (interactive interpreter) also stores the most recent expression result in the `_` name, which muddles up things some more.
As such, you *will* see the same id crop up from time to time.
Running just the line `id(<string literal>)` in the REPL goes through several steps:
1. The line is compiled, which includes creating a constant for the string object:
```
>>> compile("id('foo')", '<stdin>', 'single').co_consts
('foo', None)
```
This shows the stored constants with the compiled bytecode; in this case a string `'foo'` and the `None` singleton. Simple expressions that produce an immutable value may be optimised at this stage; see the note on optimizers, below.
2. On execution, the string is loaded from the code constants, and `id()` returns the memory location. The resulting `int` value is bound to `_`, as well as printed:
```
>>> import dis
>>> dis.dis(compile("id('foo')", '<stdin>', 'single'))
1 0 LOAD_NAME 0 (id)
3 LOAD_CONST 0 ('foo')
6 CALL_FUNCTION 1
9 PRINT_EXPR
10 LOAD_CONST 1 (None)
13 RETURN_VALUE
```
3. The code object is not referenced by anything, reference count drops to 0 and the code object is deleted. As a consequence, so is the string object.
Python can then *perhaps* reuse the same memory location for a new string object, if you re-run the same code. This usually leads to the same memory address being printed if you repeat this code. *This does depend on what else you do with your Python memory*.
ID reuse is *not* predictable; if in the meantime the garbage collector runs to clear circular references, other memory could be freed and you'll get new memory addresses.
Next, the Python compiler will also intern any Python string stored as a constant, provided it looks enough like a valid identifier. The Python [code object factory function PyCode\_New](https://github.com/python/cpython/blob/a6eac83c1804fd14ed076b1776ffeea8dcb9478a/Objects/codeobject.c#L96-L222) will intern any string object that contains only ASCII letters, digits or underscores, by calling [`intern_string_constants()`](https://github.com/python/cpython/blob/a6eac83c1804fd14ed076b1776ffeea8dcb9478a/Objects/codeobject.c#L45-L93). This function recurses through the constants structures and for any string object `v` found there executes:
```c
if (all_name_chars(v)) {
    PyObject *w = v;
    PyUnicode_InternInPlace(&v);
    if (w != v) {
        PyTuple_SET_ITEM(tuple, i, v);
        modified = 1;
    }
}
```
where [`all_name_chars()`](https://github.com/python/cpython/blob/a6eac83c1804fd14ed076b1776ffeea8dcb9478a/Objects/codeobject.c#L13-L29) is documented as
```c
/* all_name_chars(s): true iff s matches [a-zA-Z0-9_]* */
```
Since you created strings that fit that criterion, they are interned, which is why you see the same ID being used for the `'so'` string in your second test: as long as a reference to the interned version survives, interning will cause future `'so'` literals to reuse the interned string object, even in new code blocks and bound to different identifiers. In your first test, you don't save a reference to the string, so the interned strings are discarded before they can be reused.
Incidentally, your new name `so = 'so'` binds a string to a name that *contains the same characters*. In other words, you are creating a global whose name and value are equal. As Python interns both identifiers and qualifying constants, you end up using the same string object for both the identifier and its value:
```
>>> compile("so = 'so'", '<stdin>', 'single').co_names[0] is compile("so = 'so'", '<stdin>', 'single').co_consts[0]
True
```
If you create strings that are either not code object constants, or contain characters outside of the letters + numbers + underscore range, you'll see the `id()` value not being reused:
```
>>> some_var = 'Look ma, spaces and punctuation!'
>>> some_other_var = 'Look ma, spaces and punctuation!'
>>> id(some_var)
4493058384
>>> id(some_other_var)
4493058456
>>> foo = 'Concatenating_' + 'also_helps_if_long_enough'
>>> bar = 'Concatenating_' + 'also_helps_if_long_enough'
>>> foo is bar
False
>>> foo == bar
True
```
The Python compiler either uses the [peephole optimizer](https://github.com/python/cpython/blob/3.6/Python/peephole.c) (Python versions < 3.7) or the more capable [AST optimizer](https://github.com/python/cpython/blob/3.7/Python/ast_opt.c) (3.7 and newer) to pre-calculate (fold) the results of simple expressions involving constants. The peephole optimizer limits its output to a sequence of length 20 or less (to prevent bloating code objects and memory use), while the AST optimizer uses a separate limit for strings of 4096 characters. This means that concatenating shorter strings consisting only of name characters *can* still lead to interned strings if the resulting string fits within the optimizer limits of your current Python version.
E.g. on Python 3.7, `'foo' * 20` will result in a single interned string, because constant folding turns this into a single value, while on Python 3.6 or older only `'foo' * 6` would be folded:
```
>>> import dis, sys
>>> sys.version_info
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
>>> dis.dis("'foo' * 20")
1 0 LOAD_CONST 0 ('foofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoo')
2 RETURN_VALUE
```
and
```
>>> dis.dis("'foo' * 6")
1 0 LOAD_CONST 2 ('foofoofoofoofoofoo')
2 RETURN_VALUE
>>> dis.dis("'foo' * 7")
1 0 LOAD_CONST 0 ('foo')
2 LOAD_CONST 1 (7)
4 BINARY_MULTIPLY
6 RETURN_VALUE
```
|
In your first example, a new instance of the string `'so'` is created each time, hence the different ids.
In the second example you are binding the string to a variable and Python can then maintain a shared copy of the string.
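A minimal sketch of that "shared copy" behaviour (a CPython implementation detail; `is` results on strings are not guaranteed by the language):
```
>>> a = 'so'
>>> b = 'so'
>>> a is b   # typically True in CPython: the name-like literal is reused
True
>>> a == b   # the reliable way to compare string contents
True
```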
|
24,245,324
|
Something about the `id` of objects of type `str` (in Python 2.7) puzzles me. The `str` type is immutable, so I would expect that once it is created, it will always have the same `id`. I don't think I'm phrasing this well, so instead I'll post an example input and output sequence.
```
>>> id('so')
140614155123888
>>> id('so')
140614155123848
>>> id('so')
140614155123808
```
so in the meantime, it changes all the time. However, after having a variable pointing at that string, things change:
```
>>> so = 'so'
>>> id('so')
140614155123728
>>> so = 'so'
>>> id(so)
140614155123728
>>> not_so = 'so'
>>> id(not_so)
140614155123728
```
So it looks like it freezes the id once a variable holds that value. Indeed, after `del so` and `del not_so`, the output of `id('so')` starts changing again.
This is **not** the same behaviour as with (small) integers.
I know there is no real connection between immutability and having the same `id`; still, I am trying to figure out the source of this behaviour. I believe that someone who's familiar with Python's internals would be less surprised than me, so I am trying to reach the same point...
Update
======
Trying the same with a different string gave different results...
```
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
```
Now it **is** equal...
|
2014/06/16
|
[
"https://Stackoverflow.com/questions/24245324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1565828/"
] |
This behavior is specific to the Python interactive shell. If I put the following in a .py file:
```
print id('so')
print id('so')
print id('so')
```
and execute it, I receive the following output:
```
2888960
2888960
2888960
```
In CPython, a string literal is treated as a constant, which we can see in the bytecode of the snippet above:
```
2 0 LOAD_GLOBAL 0 (id)
3 LOAD_CONST 1 ('so')
6 CALL_FUNCTION 1
9 PRINT_ITEM
10 PRINT_NEWLINE
3 11 LOAD_GLOBAL 0 (id)
14 LOAD_CONST 1 ('so')
17 CALL_FUNCTION 1
20 PRINT_ITEM
21 PRINT_NEWLINE
4 22 LOAD_GLOBAL 0 (id)
25 LOAD_CONST 1 ('so')
28 CALL_FUNCTION 1
31 PRINT_ITEM
32 PRINT_NEWLINE
33 LOAD_CONST 0 (None)
36 RETURN_VALUE
```
The *same* constant (i.e. the same string object) is loaded 3 times, so the IDs are the same.
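If you want to reproduce such a listing yourself, one way (assuming the snippet is saved as a hypothetical `so_ids.py`) is:
```
import dis

# Compile the file's source to a code object and disassemble it.
with open('so_ids.py') as f:
    dis.dis(compile(f.read(), 'so_ids.py', 'exec'))
```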
|
So while Python is not guaranteed to intern strings, it will frequently reuse the same string, and `is` may mislead. It's important to know that you shouldn't use `id` or `is` to check strings for equality.
To demonstrate this, here is one way I've discovered to force a new string (in Python 2.6, at least):
```
>>> so = 'so'
>>> new_so = '{0}'.format(so)
>>> so is new_so
False
```
and here's a bit more Python exploration:
```
>>> id(so)
102596064
>>> id(new_so)
259679968
>>> so == new_so
True
```
|
24,245,324
|
Something about the `id` of objects of type `str` (in Python 2.7) puzzles me. The `str` type is immutable, so I would expect that once it is created, it will always have the same `id`. I don't think I'm phrasing this well, so instead I'll post an example input and output sequence.
```
>>> id('so')
140614155123888
>>> id('so')
140614155123848
>>> id('so')
140614155123808
```
so in the meantime, it changes all the time. However, after having a variable pointing at that string, things change:
```
>>> so = 'so'
>>> id('so')
140614155123728
>>> so = 'so'
>>> id(so)
140614155123728
>>> not_so = 'so'
>>> id(not_so)
140614155123728
```
So it looks like it freezes the id once a variable holds that value. Indeed, after `del so` and `del not_so`, the output of `id('so')` starts changing again.
This is **not** the same behaviour as with (small) integers.
I know there is no real connection between immutability and having the same `id`; still, I am trying to figure out the source of this behaviour. I believe that someone who's familiar with Python's internals would be less surprised than me, so I am trying to reach the same point...
Update
======
Trying the same with a different string gave different results...
```
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
```
Now it **is** equal...
|
2014/06/16
|
[
"https://Stackoverflow.com/questions/24245324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1565828/"
] |
CPython does not promise to intern *all* strings by default, but in practice, a lot of places in the Python codebase do reuse already-created string objects. A lot of Python internals use (the C-equivalent of) the [`sys.intern()` function call](https://docs.python.org/3/library/sys.html#sys.intern) to explicitly intern Python strings, but unless you hit one of those special cases, two identical Python string literals will produce different strings.
Python is also free to *reuse* memory locations, and Python will also optimize immutable *literals* by storing them once, at compile time, with the bytecode in code objects. The Python REPL (interactive interpreter) also stores the most recent expression result in the `_` name, which muddles up things some more.
As such, you *will* see the same id crop up from time to time.
Running just the line `id(<string literal>)` in the REPL goes through several steps:
1. The line is compiled, which includes creating a constant for the string object:
```
>>> compile("id('foo')", '<stdin>', 'single').co_consts
('foo', None)
```
This shows the stored constants with the compiled bytecode; in this case a string `'foo'` and the `None` singleton. Simple expressions consisting of constants that produce an immutable value may be optimised at this stage; see the note on optimizers, below.
2. On execution, the string is loaded from the code constants, and `id()` returns the memory location. The resulting `int` value is bound to `_`, as well as printed:
```
>>> import dis
>>> dis.dis(compile("id('foo')", '<stdin>', 'single'))
1 0 LOAD_NAME 0 (id)
3 LOAD_CONST 0 ('foo')
6 CALL_FUNCTION 1
9 PRINT_EXPR
10 LOAD_CONST 1 (None)
13 RETURN_VALUE
```
3. The code object is not referenced by anything, reference count drops to 0 and the code object is deleted. As a consequence, so is the string object.
Python can then *perhaps* reuse the same memory location for a new string object, if you re-run the same code. This usually leads to the same memory address being printed if you repeat this code. *This does depend on what else you do with your Python memory*.
ID reuse is *not* predictable; if in the meantime the garbage collector runs to clear circular references, other memory could be freed and you'll get new memory addresses.
Next, the Python compiler will also intern any Python string stored as a constant, provided it looks enough like a valid identifier. The Python [code object factory function PyCode\_New](https://github.com/python/cpython/blob/a6eac83c1804fd14ed076b1776ffeea8dcb9478a/Objects/codeobject.c#L96-L222) will intern any string object that contains only ASCII letters, digits or underscores, by calling [`intern_string_constants()`](https://github.com/python/cpython/blob/a6eac83c1804fd14ed076b1776ffeea8dcb9478a/Objects/codeobject.c#L45-L93). This function recurses through the constants structures and for any string object `v` found there executes:
```c
if (all_name_chars(v)) {
    PyObject *w = v;
    PyUnicode_InternInPlace(&v);
    if (w != v) {
        PyTuple_SET_ITEM(tuple, i, v);
        modified = 1;
    }
}
```
where [`all_name_chars()`](https://github.com/python/cpython/blob/a6eac83c1804fd14ed076b1776ffeea8dcb9478a/Objects/codeobject.c#L13-L29) is documented as
```c
/* all_name_chars(s): true iff s matches [a-zA-Z0-9_]* */
```
Since you created strings that fit that criterion, they are interned, which is why you see the same ID being used for the `'so'` string in your second test: as long as a reference to the interned version survives, interning will cause future `'so'` literals to reuse the interned string object, even in new code blocks and bound to different identifiers. In your first test, you don't save a reference to the string, so the interned strings are discarded before they can be reused.
Incidentally, your new name `so = 'so'` binds a string to a name that *contains the same characters*. In other words, you are creating a global whose name and value are equal. As Python interns both identifiers and qualifying constants, you end up using the same string object for both the identifier and its value:
```
>>> compile("so = 'so'", '<stdin>', 'single').co_names[0] is compile("so = 'so'", '<stdin>', 'single').co_consts[0]
True
```
If you create strings that are either not code object constants, or contain characters outside of the letters + numbers + underscore range, you'll see the `id()` value not being reused:
```
>>> some_var = 'Look ma, spaces and punctuation!'
>>> some_other_var = 'Look ma, spaces and punctuation!'
>>> id(some_var)
4493058384
>>> id(some_other_var)
4493058456
>>> foo = 'Concatenating_' + 'also_helps_if_long_enough'
>>> bar = 'Concatenating_' + 'also_helps_if_long_enough'
>>> foo is bar
False
>>> foo == bar
True
```
The Python compiler either uses the [peephole optimizer](https://github.com/python/cpython/blob/3.6/Python/peephole.c) (Python versions < 3.7) or the more capable [AST optimizer](https://github.com/python/cpython/blob/3.7/Python/ast_opt.c) (3.7 and newer) to pre-calculate (fold) the results of simple expressions involving constants. The peephole optimizer limits its output to a sequence of length 20 or less (to prevent bloating code objects and memory use), while the AST optimizer uses a separate limit for strings of 4096 characters. This means that concatenating shorter strings consisting only of name characters *can* still lead to interned strings if the resulting string fits within the optimizer limits of your current Python version.
E.g. on Python 3.7, `'foo' * 20` will result in a single interned string, because constant folding turns this into a single value, while on Python 3.6 or older only `'foo' * 6` would be folded:
```
>>> import dis, sys
>>> sys.version_info
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
>>> dis.dis("'foo' * 20")
1 0 LOAD_CONST 0 ('foofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoo')
2 RETURN_VALUE
```
and
```
>>> dis.dis("'foo' * 6")
1 0 LOAD_CONST 2 ('foofoofoofoofoofoo')
2 RETURN_VALUE
>>> dis.dis("'foo' * 7")
1 0 LOAD_CONST 0 ('foo')
2 LOAD_CONST 1 (7)
4 BINARY_MULTIPLY
6 RETURN_VALUE
```
|
This behavior is specific to the Python interactive shell. If I put the following in a .py file:
```
print id('so')
print id('so')
print id('so')
```
and execute it, I receive the following output:
```
2888960
2888960
2888960
```
In CPython, a string literal is treated as a constant, which we can see in the bytecode of the snippet above:
```
2 0 LOAD_GLOBAL 0 (id)
3 LOAD_CONST 1 ('so')
6 CALL_FUNCTION 1
9 PRINT_ITEM
10 PRINT_NEWLINE
3 11 LOAD_GLOBAL 0 (id)
14 LOAD_CONST 1 ('so')
17 CALL_FUNCTION 1
20 PRINT_ITEM
21 PRINT_NEWLINE
4 22 LOAD_GLOBAL 0 (id)
25 LOAD_CONST 1 ('so')
28 CALL_FUNCTION 1
31 PRINT_ITEM
32 PRINT_NEWLINE
33 LOAD_CONST 0 (None)
36 RETURN_VALUE
```
The *same* constant (i.e. the same string object) is loaded 3 times, so the IDs are the same.
|
24,245,324
|
Something about the `id` of objects of type `str` (in Python 2.7) puzzles me. The `str` type is immutable, so I would expect that once it is created, it will always have the same `id`. I don't think I'm phrasing this well, so instead I'll post an example input and output sequence.
```
>>> id('so')
140614155123888
>>> id('so')
140614155123848
>>> id('so')
140614155123808
```
so in the meantime, it changes all the time. However, after having a variable pointing at that string, things change:
```
>>> so = 'so'
>>> id('so')
140614155123728
>>> so = 'so'
>>> id(so)
140614155123728
>>> not_so = 'so'
>>> id(not_so)
140614155123728
```
So it looks like it freezes the id once a variable holds that value. Indeed, after `del so` and `del not_so`, the output of `id('so')` starts changing again.
This is **not** the same behaviour as with (small) integers.
I know there is no real connection between immutability and having the same `id`; still, I am trying to figure out the source of this behaviour. I believe that someone who's familiar with Python's internals would be less surprised than me, so I am trying to reach the same point...
Update
======
Trying the same with a different string gave different results...
```
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
```
Now it **is** equal...
|
2014/06/16
|
[
"https://Stackoverflow.com/questions/24245324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1565828/"
] |
This behavior is specific to the Python interactive shell. If I put the following in a .py file:
```
print id('so')
print id('so')
print id('so')
```
and execute it, I receive the following output:
```
2888960
2888960
2888960
```
In CPython, a string literal is treated as a constant, which we can see in the bytecode of the snippet above:
```
2 0 LOAD_GLOBAL 0 (id)
3 LOAD_CONST 1 ('so')
6 CALL_FUNCTION 1
9 PRINT_ITEM
10 PRINT_NEWLINE
3 11 LOAD_GLOBAL 0 (id)
14 LOAD_CONST 1 ('so')
17 CALL_FUNCTION 1
20 PRINT_ITEM
21 PRINT_NEWLINE
4 22 LOAD_GLOBAL 0 (id)
25 LOAD_CONST 1 ('so')
28 CALL_FUNCTION 1
31 PRINT_ITEM
32 PRINT_NEWLINE
33 LOAD_CONST 0 (None)
36 RETURN_VALUE
```
The *same* constant (i.e. the same string object) is loaded 3 times, so the IDs are the same.
|
A simpler way to understand the behaviour is to check the tutorial [Data Types and Variables](http://www.python-course.eu/variables.php).
Its section "A String Peculiarity" illustrates your question using special characters as an example.
|
24,245,324
|
Something about the `id` of objects of type `str` (in Python 2.7) puzzles me. The `str` type is immutable, so I would expect that once it is created, it will always have the same `id`. I don't think I'm phrasing this well, so instead I'll post an example input and output sequence.
```
>>> id('so')
140614155123888
>>> id('so')
140614155123848
>>> id('so')
140614155123808
```
so in the meantime, it changes all the time. However, after having a variable pointing at that string, things change:
```
>>> so = 'so'
>>> id('so')
140614155123728
>>> so = 'so'
>>> id(so)
140614155123728
>>> not_so = 'so'
>>> id(not_so)
140614155123728
```
So it looks like it freezes the id once a variable holds that value. Indeed, after `del so` and `del not_so`, the output of `id('so')` starts changing again.
This is **not** the same behaviour as with (small) integers.
I know there is no real connection between immutability and having the same `id`; still, I am trying to figure out the source of this behaviour. I believe that someone who's familiar with Python's internals would be less surprised than me, so I am trying to reach the same point...
Update
======
Trying the same with a different string gave different results...
```
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
```
Now it **is** equal...
|
2014/06/16
|
[
"https://Stackoverflow.com/questions/24245324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1565828/"
] |
CPython does not promise to intern *all* strings by default, but in practice, a lot of places in the Python codebase do reuse already-created string objects. A lot of Python internals use (the C-equivalent of) the [`sys.intern()` function call](https://docs.python.org/3/library/sys.html#sys.intern) to explicitly intern Python strings, but unless you hit one of those special cases, two identical Python string literals will produce different strings.
Python is also free to *reuse* memory locations, and Python will also optimize immutable *literals* by storing them once, at compile time, with the bytecode in code objects. The Python REPL (interactive interpreter) also stores the most recent expression result in the `_` name, which muddles up things some more.
As such, you *will* see the same id crop up from time to time.
Running just the line `id(<string literal>)` in the REPL goes through several steps:
1. The line is compiled, which includes creating a constant for the string object:
```
>>> compile("id('foo')", '<stdin>', 'single').co_consts
('foo', None)
```
This shows the stored constants with the compiled bytecode; in this case a string `'foo'` and the `None` singleton. Simple expressions consisting of constants that produce an immutable value may be optimised at this stage; see the note on optimizers, below.
2. On execution, the string is loaded from the code constants, and `id()` returns the memory location. The resulting `int` value is bound to `_`, as well as printed:
```
>>> import dis
>>> dis.dis(compile("id('foo')", '<stdin>', 'single'))
1 0 LOAD_NAME 0 (id)
3 LOAD_CONST 0 ('foo')
6 CALL_FUNCTION 1
9 PRINT_EXPR
10 LOAD_CONST 1 (None)
13 RETURN_VALUE
```
3. The code object is not referenced by anything, reference count drops to 0 and the code object is deleted. As a consequence, so is the string object.
Python can then *perhaps* reuse the same memory location for a new string object, if you re-run the same code. This usually leads to the same memory address being printed if you repeat this code. *This does depend on what else you do with your Python memory*.
ID reuse is *not* predictable; if in the meantime the garbage collector runs to clear circular references, other memory could be freed and you'll get new memory addresses.
Next, the Python compiler will also intern any Python string stored as a constant, provided it looks enough like a valid identifier. The Python [code object factory function PyCode\_New](https://github.com/python/cpython/blob/a6eac83c1804fd14ed076b1776ffeea8dcb9478a/Objects/codeobject.c#L96-L222) will intern any string object that contains only ASCII letters, digits or underscores, by calling [`intern_string_constants()`](https://github.com/python/cpython/blob/a6eac83c1804fd14ed076b1776ffeea8dcb9478a/Objects/codeobject.c#L45-L93). This function recurses through the constants structures and for any string object `v` found there executes:
```c
if (all_name_chars(v)) {
    PyObject *w = v;
    PyUnicode_InternInPlace(&v);
    if (w != v) {
        PyTuple_SET_ITEM(tuple, i, v);
        modified = 1;
    }
}
```
where [`all_name_chars()`](https://github.com/python/cpython/blob/a6eac83c1804fd14ed076b1776ffeea8dcb9478a/Objects/codeobject.c#L13-L29) is documented as
```c
/* all_name_chars(s): true iff s matches [a-zA-Z0-9_]* */
```
Since you created strings that fit that criterion, they are interned, which is why you see the same ID being used for the `'so'` string in your second test: as long as a reference to the interned version survives, interning will cause future `'so'` literals to reuse the interned string object, even in new code blocks and bound to different identifiers. In your first test, you don't save a reference to the string, so the interned strings are discarded before they can be reused.
Incidentally, your new name `so = 'so'` binds a string to a name that *contains the same characters*. In other words, you are creating a global whose name and value are equal. As Python interns both identifiers and qualifying constants, you end up using the same string object for both the identifier and its value:
```
>>> compile("so = 'so'", '<stdin>', 'single').co_names[0] is compile("so = 'so'", '<stdin>', 'single').co_consts[0]
True
```
If you create strings that are either not code object constants, or contain characters outside of the letters + numbers + underscore range, you'll see the `id()` value not being reused:
```
>>> some_var = 'Look ma, spaces and punctuation!'
>>> some_other_var = 'Look ma, spaces and punctuation!'
>>> id(some_var)
4493058384
>>> id(some_other_var)
4493058456
>>> foo = 'Concatenating_' + 'also_helps_if_long_enough'
>>> bar = 'Concatenating_' + 'also_helps_if_long_enough'
>>> foo is bar
False
>>> foo == bar
True
```
The Python compiler either uses the [peephole optimizer](https://github.com/python/cpython/blob/3.6/Python/peephole.c) (Python versions < 3.7) or the more capable [AST optimizer](https://github.com/python/cpython/blob/3.7/Python/ast_opt.c) (3.7 and newer) to pre-calculate (fold) the results of simple expressions involving constants. The peephole optimizer limits its output to a sequence of length 20 or less (to prevent bloating code objects and memory use), while the AST optimizer uses a separate limit for strings of 4096 characters. This means that concatenating shorter strings consisting only of name characters *can* still lead to interned strings if the resulting string fits within the optimizer limits of your current Python version.
E.g. on Python 3.7, `'foo' * 20` will result in a single interned string, because constant folding turns this into a single value, while on Python 3.6 or older only `'foo' * 6` would be folded:
```
>>> import dis, sys
>>> sys.version_info
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
>>> dis.dis("'foo' * 20")
1 0 LOAD_CONST 0 ('foofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoo')
2 RETURN_VALUE
```
and
```
>>> dis.dis("'foo' * 6")
1 0 LOAD_CONST 2 ('foofoofoofoofoofoo')
2 RETURN_VALUE
>>> dis.dis("'foo' * 7")
1 0 LOAD_CONST 0 ('foo')
2 LOAD_CONST 1 (7)
4 BINARY_MULTIPLY
6 RETURN_VALUE
```
|
So while Python is not guaranteed to intern strings, it will frequently reuse the same string, and `is` may mislead. It's important to know that you shouldn't use `id` or `is` to check strings for equality.
To demonstrate this, here is one way I've discovered to force a new string (in Python 2.6, at least):
```
>>> so = 'so'
>>> new_so = '{0}'.format(so)
>>> so is new_so
False
```
and here's a bit more Python exploration:
```
>>> id(so)
102596064
>>> id(new_so)
259679968
>>> so == new_so
True
```
|
24,245,324
|
Something about the `id` of objects of type `str` (in Python 2.7) puzzles me. The `str` type is immutable, so I would expect that once it is created, it will always have the same `id`. I don't think I'm phrasing this well, so instead I'll post an example input and output sequence.
```
>>> id('so')
140614155123888
>>> id('so')
140614155123848
>>> id('so')
140614155123808
```
so in the meantime, it changes all the time. However, after having a variable pointing at that string, things change:
```
>>> so = 'so'
>>> id('so')
140614155123728
>>> so = 'so'
>>> id(so)
140614155123728
>>> not_so = 'so'
>>> id(not_so)
140614155123728
```
So it looks like it freezes the id once a variable holds that value. Indeed, after `del so` and `del not_so`, the output of `id('so')` starts changing again.
This is **not** the same behaviour as with (small) integers.
I know there is no real connection between immutability and having the same `id`; still, I am trying to figure out the source of this behaviour. I believe that someone who's familiar with Python's internals would be less surprised than me, so I am trying to reach the same point...
Update
======
Trying the same with a different string gave different results...
```
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
```
Now it **is** equal...
|
2014/06/16
|
[
"https://Stackoverflow.com/questions/24245324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1565828/"
] |
A simpler way to understand the behaviour is to check the tutorial [Data Types and Variables](http://www.python-course.eu/variables.php).
Its section "A String Peculiarity" illustrates your question using special characters as an example.
|
So while Python is not guaranteed to intern strings, it will frequently reuse the same string, and `is` may mislead. It's important to know that you shouldn't use `id` or `is` to check strings for equality.
To demonstrate this, here is one way I've discovered to force a new string (in Python 2.6, at least):
```
>>> so = 'so'
>>> new_so = '{0}'.format(so)
>>> so is new_so
False
```
and here's a bit more Python exploration:
```
>>> id(so)
102596064
>>> id(new_so)
259679968
>>> so == new_so
True
```
|
24,245,324
|
Something about the `id` of objects of type `str` (in Python 2.7) puzzles me. The `str` type is immutable, so I would expect that once it is created, it will always have the same `id`. I don't think I'm phrasing this well, so instead I'll post an example input and output sequence.
```
>>> id('so')
140614155123888
>>> id('so')
140614155123848
>>> id('so')
140614155123808
```
so in the meantime, it changes all the time. However, after having a variable pointing at that string, things change:
```
>>> so = 'so'
>>> id('so')
140614155123728
>>> so = 'so'
>>> id(so)
140614155123728
>>> not_so = 'so'
>>> id(not_so)
140614155123728
```
So it looks like it freezes the id once a variable holds that value. Indeed, after `del so` and `del not_so`, the output of `id('so')` starts changing again.
This is **not** the same behaviour as with (small) integers.
I know there is no real connection between immutability and having the same `id`; still, I am trying to figure out the source of this behaviour. I believe that someone who's familiar with Python's internals would be less surprised than me, so I am trying to reach the same point...
Update
======
Trying the same with a different string gave different results...
```
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
>>> id('hello')
139978087896384
```
Now it **is** equal...
|
2014/06/16
|
[
"https://Stackoverflow.com/questions/24245324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1565828/"
] |
CPython does not promise to intern *all* strings by default, but in practice, a lot of places in the Python codebase do reuse already-created string objects. A lot of Python internals use (the C-equivalent of) the [`sys.intern()` function call](https://docs.python.org/3/library/sys.html#sys.intern) to explicitly intern Python strings, but unless you hit one of those special cases, two identical Python string literals will produce different strings.
Python is also free to *reuse* memory locations, and Python will also optimize immutable *literals* by storing them once, at compile time, with the bytecode in code objects. The Python REPL (interactive interpreter) also stores the most recent expression result in the `_` name, which muddles up things some more.
As such, you *will* see the same id crop up from time to time.
Running just the line `id(<string literal>)` in the REPL goes through several steps:
1. The line is compiled, which includes creating a constant for the string object:
```
>>> compile("id('foo')", '<stdin>', 'single').co_consts
('foo', None)
```
This shows the stored constants with the compiled bytecode; in this case a string `'foo'` and the `None` singleton. Simple expressions consisting of constants that produce an immutable value may be optimised at this stage; see the note on optimizers, below.
2. On execution, the string is loaded from the code constants, and `id()` returns the memory location. The resulting `int` value is bound to `_`, as well as printed:
```
>>> import dis
>>> dis.dis(compile("id('foo')", '<stdin>', 'single'))
1 0 LOAD_NAME 0 (id)
3 LOAD_CONST 0 ('foo')
6 CALL_FUNCTION 1
9 PRINT_EXPR
10 LOAD_CONST 1 (None)
13 RETURN_VALUE
```
3. The code object is not referenced by anything, reference count drops to 0 and the code object is deleted. As a consequence, so is the string object.
Python can then *perhaps* reuse the same memory location for a new string object, if you re-run the same code. This usually leads to the same memory address being printed if you repeat this code. *This does depend on what else you do with your Python memory*.
ID reuse is *not* predictable; if in the meantime the garbage collector runs to clear circular references, other memory could be freed and you'll get new memory addresses.
Next, the Python compiler will also intern any Python string stored as a constant, provided it looks enough like a valid identifier. The Python [code object factory function PyCode\_New](https://github.com/python/cpython/blob/a6eac83c1804fd14ed076b1776ffeea8dcb9478a/Objects/codeobject.c#L96-L222) will intern any string object that contains only ASCII letters, digits or underscores, by calling [`intern_string_constants()`](https://github.com/python/cpython/blob/a6eac83c1804fd14ed076b1776ffeea8dcb9478a/Objects/codeobject.c#L45-L93). This function recurses through the constants structures and for any string object `v` found there executes:
```c
if (all_name_chars(v)) {
    PyObject *w = v;
    PyUnicode_InternInPlace(&v);
    if (w != v) {
        PyTuple_SET_ITEM(tuple, i, v);
        modified = 1;
    }
}
```
where [`all_name_chars()`](https://github.com/python/cpython/blob/a6eac83c1804fd14ed076b1776ffeea8dcb9478a/Objects/codeobject.c#L13-L29) is documented as
```c
/* all_name_chars(s): true iff s matches [a-zA-Z0-9_]* */
```
Since you created strings that fit that criterion, they are interned, which is why you see the same ID being used for the `'so'` string in your second test: as long as a reference to the interned version survives, interning will cause future `'so'` literals to reuse the interned string object, even in new code blocks and bound to different identifiers. In your first test, you don't save a reference to the string, so the interned strings are discarded before they can be reused.
Incidentally, your new name `so = 'so'` binds a string to a name that *contains the same characters*. In other words, you are creating a global whose name and value are equal. As Python interns both identifiers and qualifying constants, you end up using the same string object for both the identifier and its value:
```
>>> compile("so = 'so'", '<stdin>', 'single').co_names[0] is compile("so = 'so'", '<stdin>', 'single').co_consts[0]
True
```
If you create strings that are either not code object constants, or contain characters outside of the letters + numbers + underscore range, you'll see the `id()` value not being reused:
```
>>> some_var = 'Look ma, spaces and punctuation!'
>>> some_other_var = 'Look ma, spaces and punctuation!'
>>> id(some_var)
4493058384
>>> id(some_other_var)
4493058456
>>> foo = 'Concatenating_' + 'also_helps_if_long_enough'
>>> bar = 'Concatenating_' + 'also_helps_if_long_enough'
>>> foo is bar
False
>>> foo == bar
True
```
The Python compiler either uses the [peephole optimizer](https://github.com/python/cpython/blob/3.6/Python/peephole.c) (Python versions < 3.7) or the more capable [AST optimizer](https://github.com/python/cpython/blob/3.7/Python/ast_opt.c) (3.7 and newer) to pre-calculate (fold) the results of simple expressions involving constants. The peephole optimizer limits its output to a sequence of length 20 or less (to prevent bloating code objects and memory use), while the AST optimizer uses a separate limit for strings of 4096 characters. This means that concatenating shorter strings consisting only of name characters *can* still lead to interned strings if the resulting string fits within the optimizer limits of your current Python version.
E.g. on Python 3.7, `'foo' * 20` will result in a single interned string, because constant folding turns this into a single value, while on Python 3.6 or older only `'foo' * 6` would be folded:
```
>>> import dis, sys
>>> sys.version_info
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
>>> dis.dis("'foo' * 20")
1 0 LOAD_CONST 0 ('foofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoo')
2 RETURN_VALUE
```
and
```
>>> dis.dis("'foo' * 6")
1 0 LOAD_CONST 2 ('foofoofoofoofoofoo')
2 RETURN_VALUE
>>> dis.dis("'foo' * 7")
1 0 LOAD_CONST 0 ('foo')
2 LOAD_CONST 1 (7)
4 BINARY_MULTIPLY
6 RETURN_VALUE
```
|
A simpler way to understand the behaviour is to check the tutorial [Data Types and Variables](http://www.python-course.eu/variables.php).
Its section "A String Peculiarity" illustrates your question using special characters as an example.
|
7,927,904
|
How do I close a file in python after opening it this way:
`line = open("file.txt", "r").readlines()[7]`
|
2011/10/28
|
[
"https://Stackoverflow.com/questions/7927904",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/442852/"
] |
Best to use a context manager. This will close the file automatically at the end of the block and doesn't rely on implementation details of garbage collection:
```
with open("file.txt", "r") as f:
    line = f.readlines()[7]
```
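If the file is large, a variation worth knowing (my addition, not part of the original answer) streams lines instead of materializing them all with `readlines()`:
```
from itertools import islice

with open("file.txt", "r") as f:
    # islice skips the first 7 lines and yields the 8th without
    # keeping the earlier ones in memory.
    line = next(islice(f, 7, 8))
```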
|
There are several ways; e.g. you could just use `f = open(...)` and later `del f`.
If your Python version supports it you can also use the `with` statement:
```
with open("file.txt", "r") as f:
    line = f.readlines()[7]
```
This automatically closes your file.
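For reference, a sketch of the explicit equivalent of the `with` block (this is essentially what the context manager does for you):
```
f = open("file.txt", "r")
try:
    line = f.readlines()[7]
finally:
    f.close()  # runs even if readlines() or the indexing raises
```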
**EDIT:** I don't get why I'm downvoted for explaining a correct way to deal with files. Maybe *"without assigning a variable"* is just not preferable.
|
37,582,199
|
How can I pass data from my object-oriented code to MySQL in Python? Do I need to make a connection in every class?
Update:
This is my object-oriented code:
```
class AttentionDataPoint(DataPoint):
    def __init__(self, _dataValueBytes):
        DataPoint.__init__(self, _dataValueBytes)
        self.attentionValue = self._dataValueBytes[0]

    def __str__(self):
        if(self.attentionValue):
            return "Attention Level: " + str(self.attentionValue)

class MeditationDataPoint(DataPoint):
    def __init__(self, _dataValueBytes):
        DataPoint.__init__(self, _dataValueBytes)
        self.meditationValue = self._dataValueBytes[0]

    def __str__(self):
        if(self.meditationValue):
            return "Meditation Level: " + str(self.meditationValue)
```
And I try to get the data into MySQL using this code:
```
import time
import smtplib
import datetime
import MySQLdb

db = MySQLdb.connect("192.168.0.101", "fyp", "123456", "system")
cur = db.cursor()

while True:
    Meditation_Level = meditationValue()
    Attention_Level = attentionValue()
    current_time = datetime.datetime.now()
    sql = "INSERT INTO table (id, Meditation_Level, Attention_Level, current_time) VALUES ('test1', %s, %s, %s)"
    data = (Meditation_Level, Attention_Level, current_time)
    cur.execute(sql, data)
    db.commit()
db.close()
```
Update:
DataPoint
```
class DataPoint:
    def __init__(self, dataValueBytes):
        self._dataValueBytes = dataValueBytes
```
|
2016/06/02
|
[
"https://Stackoverflow.com/questions/37582199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6341585/"
] |
You asked a [similar question](https://stackoverflow.com/questions/37575008/adding-sheet2-to-existing-excelfile-from-data-of-sheet1-with-pandas-python) on adding data to a second excel sheet. Perhaps you can address there any issues around the `to_excel()` part.
On the category count, you can do:
```
df.Manufacturer.value_counts().to_frame()
```
to get a `pd.Series` with the counts. You need to convert the result with `.to_frame()` because only `DataFrame` has a `to_excel()` method.
So altogether, using my linked answer:
```
import pandas as pd
from openpyxl import load_workbook
book = load_workbook('Abc.xlsx')
writer = pd.ExcelWriter('Abc.xlsx', engine='openpyxl')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
df.Manufacturer.value_counts().to_frame().to_excel(writer, sheet_name='Categories')
writer.save()
```
|
As answered by [Stefan](https://stackoverflow.com/users/1494637/stefan), using `value_counts()` over the designated column would do.
Since you are saving multiple DataFrames to a single workbook, I'd use `pandas.ExcelWriter`:
```
import pandas as pd
writer = pd.ExcelWriter('file_name.xlsx')
df.to_excel(writer) # this one writes to 'Sheet1' by default
pd.Series.to_frame(df.Manufacturer.value_counts()).to_excel(writer, 'Sheet2')
writer.save()
```
This works without necessarily using `openpyxl`. As noted in the [`to_excel()`](http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.to_excel.html) documentation,
>
> If passing an existing ExcelWriter object, then the sheet will be added to the existing workbook. This can be used to save different DataFrames to one workbook
>
>
>
Note that in order to use `to_excel()`, the `Series` (returned from `value_counts()`) must be cast to a `DataFrame`. This can be done as above (with `to_frame()`) or explicitly by using:
```
pd.DataFrame(df.Manufacturer.value_counts()).to_excel(writer, 'Sheet2')
```
While the first is usually a bit faster, the second may be considered more readable.
|
4,870,393
|
We have a gazillion spatial coordinates (x, y and z) representing atoms in 3d space, and I'm constructing a function that will translate these points to a new coordinate system. Shifting the coordinates to an arbitrary origin is simple, but I can't wrap my head around the next step: 3d point rotation calculations. In other words, I'm trying to translate the points from (x, y, z) to (x', y', z'), where x', y' and z' are in terms of i', j' and k', the new axis vectors I'm making with the help of the [euclid python module](http://pypi.python.org/pypi/euclid/0.01).
I *think* all I need is a euclid quaternion to do this, i.e.
```
>>> q * Vector3(x, y, z)
Vector3(x', y', z')
```
but to make THAT I believe I need a rotation axis vector and an angle of rotation. But I have no idea how to calculate these from i', j' and k'. This seems like a simple procedure to code from scratch, but I suspect something like this requires linear algebra to figure out on my own. Many thanks for a nudge in the right direction.
|
2011/02/02
|
[
"https://Stackoverflow.com/questions/4870393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572043/"
] |
Using quaternions to represent rotation isn't difficult from an algebraic point of view. Personally, I find it hard to reason *visually* about quaternions, but the formulas involved in using them for rotations are not overly complicated. I'll provide a basic set of reference functions here.1 (See also this lovely answer by [hosolmaz](https://stackoverflow.com/a/42180896/577088), in which he packages these together to create a handy Quaternion class.)
You can think of a quaternion (for our purposes) as a scalar plus a 3-d vector -- abstractly, `w + xi + yj + zk`, here represented by an ordinary tuple `(w, x, y, z)`. The space of 3-d rotations is represented in full by a sub-space of the quaternions, the space of *unit* quaternions, so you want to make sure that your quaternions are normalized. You can do so in just the way you would normalize any 4-vector (i.e. magnitude should be close to 1; if it isn't, scale down the values by the magnitude):
```
def normalize(v, tolerance=0.00001):
    mag2 = sum(n * n for n in v)
    if abs(mag2 - 1.0) > tolerance:
        mag = sqrt(mag2)
        v = tuple(n / mag for n in v)
    return v
```
Please note that for simplicity, the following functions assume that quaternion values are *already normalized*. In practice, you'll need to renormalize them from time to time, but the best way to deal with that will depend on the problem domain. These functions provide just the very basics, for reference purposes only.
Every rotation is represented by a unit quaternion, and concatenations of rotations correspond to *multiplications* of unit quaternions. The formula2 for this is as follows:
```
def q_mult(q1, q2):
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    w = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2
    x = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2
    y = w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2
    z = w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2
    return w, x, y, z
```
To rotate a *vector* by a quaternion, you need the quaternion's conjugate too:
```
def q_conjugate(q):
    w, x, y, z = q
    return (w, -x, -y, -z)
```
Quaternion-vector multiplication is then a matter of converting your vector into a quaternion (by setting `w = 0` and leaving `x`, `y`, and `z` the same) and then multiplying `q * v * q_conjugate(q)`:
```
def qv_mult(q1, v1):
    q2 = (0.0,) + v1
    return q_mult(q_mult(q1, q2), q_conjugate(q1))[1:]
```
Finally, you need to know how to convert from axis-angle rotations to quaternions and back. This is also surprisingly straightforward. It makes sense to "sanitize" input and output here by calling `normalize`.
```
def axisangle_to_q(v, theta):
    v = normalize(v)
    x, y, z = v
    theta /= 2
    w = cos(theta)
    x = x * sin(theta)
    y = y * sin(theta)
    z = z * sin(theta)
    return w, x, y, z
```
And back:
```
def q_to_axisangle(q):
    w, v = q[0], q[1:]
    theta = acos(w) * 2.0
    return normalize(v), theta
```
Here's a quick usage example. A sequence of 90-degree rotations about the x, y, and z axes will return a vector on the y axis to its original position. This code performs those rotations:
```
x_axis_unit = (1, 0, 0)
y_axis_unit = (0, 1, 0)
z_axis_unit = (0, 0, 1)
r1 = axisangle_to_q(x_axis_unit, numpy.pi / 2)
r2 = axisangle_to_q(y_axis_unit, numpy.pi / 2)
r3 = axisangle_to_q(z_axis_unit, numpy.pi / 2)
v = qv_mult(r1, y_axis_unit)
v = qv_mult(r2, v)
v = qv_mult(r3, v)
print v
# output: (0.0, 1.0, 2.220446049250313e-16)
```
Keep in mind that this sequence of rotations won't return *all* vectors to the same position; for example, for a vector on the x axis, it will correspond to a 90 degree rotation about the y axis. (Think of the [right-hand-rule](https://en.wikipedia.org/wiki/Right-hand_rule) here; a positive rotation about the y axis pushes a vector on the x axis into the *negative* z region.)
```
v = qv_mult(r1, x_axis_unit)
v = qv_mult(r2, v)
v = qv_mult(r3, v)
print v
# output: (4.930380657631324e-32, 2.220446049250313e-16, -1.0)
```
As always, please let me know if you find any problems here.
---
1. These are adapted from an OpenGL tutorial [archived here](https://web.archive.org/web/20150515143824/http://content.gpwiki.org/index.php/OpenGL:Tutorials:Using_Quaternions_to_represent_rotation).
2. The quaternion multiplication formula looks like a horrible rat's nest at first, but the derivation is easy, albeit tedious. Working with pencil and paper, you can represent the two quaternions like so: `w + xi + yj + zk`. Then note that `ii = jj = kk = -1`; that `ij = k`, `jk = i`, `ki = j`; and that `ji = -k`, `kj = -i`, `ik = -j`. Finally, multiply the two quaternions, distributing out the terms and rearranging them based on the results of each of the 16 multiplications. This helps to illustrate *why* you can use quaternions to represent rotation; the last six identities follow the right-hand rule, creating bijections between rotations *from* `i` to `j` and rotations *around* `k`, and so on.
If you do this, you'll see that the identity `ii = jj = kk = -1` explains the last three terms in the `w` formula; that `jk = i` explains the third term in the `x` formula; and that `kj = -i` explains the fourth term in the `x` formula. The `y` and `z` formulas work the same way. The remaining terms are all examples of ordinary scalar multiplication.
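As a quick numerical sanity check of that multiplication formula (my addition; standalone copies of the functions above, so the snippet runs on its own): composing two 90-degree rotations about the z axis should equal one 180-degree rotation.
```
from math import sin, cos, pi

def q_mult(q1, q2):
    # Same formula as above, returned as a tuple.
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 + y1*w2 + z1*x2 - x1*z2,
            w1*z2 + z1*w2 + x1*y2 - y1*x2)

def axisangle_to_q(v, theta):
    # Assumes v is already a unit vector.
    x, y, z = v
    return (cos(theta/2), x*sin(theta/2), y*sin(theta/2), z*sin(theta/2))

q90 = axisangle_to_q((0, 0, 1), pi / 2)
q180 = axisangle_to_q((0, 0, 1), pi)
composed = q_mult(q90, q90)
print(all(abs(a - b) < 1e-12 for a, b in zip(composed, q180)))  # True
```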
|
Note that matrix inversion is not trivial at all! Firstly, all n points (where n is the dimension of your space) must be in general position, i.e. no individual point can be expressed as a linear combination of the rest of the points (caveat: this may seem a simple requirement, but in the realm of numerical linear algebra it is nontrivial; the final decision on whether such a configuration really exists will ultimately rest on domain-specific knowledge).
Also, the 'correspondence' between the new and old points may not be exact, in which case you should use the best possible approximation of the 'true correspondence'. The pseudoinverse (instead of the plain inverse) is recommended whenever your library provides it.
The pseudoinverse has the advantage that you'll be able to use more points for your transformation, increasing the probability that at least n points will be in general position.
Here is an example: rotation of the unit square 90 degrees counterclockwise in 2D (but this determination obviously works in any dimension), with `numpy`:
```
In []: from numpy import matrix
In []: from numpy.linalg import pinv
In []: from numpy.random import rand

In []: P= matrix([[0, 0, 1, 1],
                  [0, 1, 1, 0]])
In []: Pn= matrix([[0, -1, -1, 0],
                   [0, 0, 1, 1]])
In []: T= Pn* pinv(P)
In []: (T* P).round()
Out[]:
matrix([[ 0., -1., -1.,  0.],
        [ 0.,  0.,  1.,  1.]])
```
P.S. `numpy` is also fast. Transformation of 1 million points in my modest computer:
```
In []: P= matrix(rand(2, 1e6))
In []: %timeit T* P
10 loops, best of 3: 37.7 ms per loop
```
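A hypothetical 3-D version of the same determination (the names and sample points are mine), closer to the original atom-cloud setting:
```
import numpy as np

# Old basis points and their images under a 90 deg ccw rotation about z.
P  = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float).T
Pn = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1], [-1, 1, 1]], dtype=float).T

T = Pn @ np.linalg.pinv(P)       # least-squares estimate of the linear map

atoms = np.random.rand(3, 1000)  # a cloud of 3-D coordinates, one per column
rotated = T @ atoms              # transform the whole cloud at once
print(np.round(T, 6))            # recovers the rotation matrix about z
```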
|
4,870,393
|
We have a gazillion spatial coordinates (x, y and z) representing atoms in 3d space, and I'm constructing a function that will translate these points to a new coordinate system. Shifting the coordinates to an arbitrary origin is simple, but I can't wrap my head around the next step: 3d point rotation calculations. In other words, I'm trying to translate the points from (x, y, z) to (x', y', z'), where x', y' and z' are in terms of i', j' and k', the new axis vectors I'm making with the help of the [euclid python module](http://pypi.python.org/pypi/euclid/0.01).
I *think* all I need is a euclid quaternion to do this, i.e.
```
>>> q * Vector3(x, y, z)
Vector3(x', y', z')
```
but to make THAT I believe I need a rotation axis vector and an angle of rotation. But I have no idea how to calculate these from i', j' and k'. This seems like a simple procedure to code from scratch, but I suspect something like this requires linear algebra to figure out on my own. Many thanks for a nudge in the right direction.
|
2011/02/02
|
[
"https://Stackoverflow.com/questions/4870393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572043/"
] |
Using quaternions to represent rotation isn't difficult from an algebraic point of view. Personally, I find it hard to reason *visually* about quaternions, but the formulas involved in using them for rotations are not overly complicated. I'll provide a basic set of reference functions here.1 (See also this lovely answer by [hosolmaz](https://stackoverflow.com/a/42180896/577088), in which he packages these together to create a handy Quaternion class.)
You can think of a quaternion (for our purposes) as a scalar plus a 3-d vector -- abstractly, `w + xi + yj + zk`, here represented by an ordinary tuple `(w, x, y, z)`. The space of 3-d rotations is represented in full by a sub-space of the quaternions, the space of *unit* quaternions, so you want to make sure that your quaternions are normalized. You can do so in just the way you would normalize any 4-vector (i.e. magnitude should be close to 1; if it isn't, scale down the values by the magnitude):
```
def normalize(v, tolerance=0.00001):
    mag2 = sum(n * n for n in v)
    if abs(mag2 - 1.0) > tolerance:
        mag = sqrt(mag2)
        v = tuple(n / mag for n in v)
    return v
```
Please note that for simplicity, the following functions assume that quaternion values are *already normalized*. In practice, you'll need to renormalize them from time to time, but the best way to deal with that will depend on the problem domain. These functions provide just the very basics, for reference purposes only.
Every rotation is represented by a unit quaternion, and concatenations of rotations correspond to *multiplications* of unit quaternions. The formula2 for this is as follows:
```
def q_mult(q1, q2):
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    w = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2
    x = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2
    y = w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2
    z = w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2
    return w, x, y, z
```
To rotate a *vector* by a quaternion, you need the quaternion's conjugate too:
```
def q_conjugate(q):
    w, x, y, z = q
    return (w, -x, -y, -z)
```
Quaternion-vector multiplication is then a matter of converting your vector into a quaternion (by setting `w = 0` and leaving `x`, `y`, and `z` the same) and then multiplying `q * v * q_conjugate(q)`:
```
def qv_mult(q1, v1):
    q2 = (0.0,) + v1
    return q_mult(q_mult(q1, q2), q_conjugate(q1))[1:]
```
Finally, you need to know how to convert from axis-angle rotations to quaternions and back. This is also surprisingly straightforward. It makes sense to "sanitize" input and output here by calling `normalize`.
```
def axisangle_to_q(v, theta):
    v = normalize(v)
    x, y, z = v
    theta /= 2
    w = cos(theta)
    x = x * sin(theta)
    y = y * sin(theta)
    z = z * sin(theta)
    return w, x, y, z
```
And back:
```
def q_to_axisangle(q):
    w, v = q[0], q[1:]
    theta = acos(w) * 2.0
    return normalize(v), theta
```
Here's a quick usage example. A sequence of 90-degree rotations about the x, y, and z axes will return a vector on the y axis to its original position. This code performs those rotations:
```
x_axis_unit = (1, 0, 0)
y_axis_unit = (0, 1, 0)
z_axis_unit = (0, 0, 1)
r1 = axisangle_to_q(x_axis_unit, numpy.pi / 2)
r2 = axisangle_to_q(y_axis_unit, numpy.pi / 2)
r3 = axisangle_to_q(z_axis_unit, numpy.pi / 2)
v = qv_mult(r1, y_axis_unit)
v = qv_mult(r2, v)
v = qv_mult(r3, v)
print v
# output: (0.0, 1.0, 2.220446049250313e-16)
```
Keep in mind that this sequence of rotations won't return *all* vectors to the same position; for example, for a vector on the x axis, it will correspond to a 90 degree rotation about the y axis. (Think of the [right-hand-rule](https://en.wikipedia.org/wiki/Right-hand_rule) here; a positive rotation about the y axis pushes a vector on the x axis into the *negative* z region.)
```
v = qv_mult(r1, x_axis_unit)
v = qv_mult(r2, v)
v = qv_mult(r3, v)
print v
# output: (4.930380657631324e-32, 2.220446049250313e-16, -1.0)
```
As always, please let me know if you find any problems here.
---
1. These are adapted from an OpenGL tutorial [archived here](https://web.archive.org/web/20150515143824/http://content.gpwiki.org/index.php/OpenGL:Tutorials:Using_Quaternions_to_represent_rotation).
2. The quaternion multiplication formula looks like a horrible rat's nest at first, but the derivation is easy, albeit tedious. Working with pencil and paper, you can represent the two quaternions like so: `w + xi + yj + zk`. Then note that `ii = jj = kk = -1`; that `ij = k`, `jk = i`, `ki = j`; and that `ji = -k`, `kj = -i`, `ik = -j`. Finally, multiply the two quaternions, distributing out the terms and rearranging them based on the results of each of the 16 multiplications. This helps to illustrate *why* you can use quaternions to represent rotation; the last six identities follow the right-hand rule, creating bijections between rotations *from* `i` to `j` and rotations *around* `k`, and so on.
If you do this, you'll see that the identity `ii = jj = kk = -1` explains the last three terms in the `w` formula; that `jk = i` explains the third term in the `x` formula; and that `kj = -i` explains the fourth term in the `x` formula. The `y` and `z` formulas work the same way. The remaining terms are all examples of ordinary scalar multiplication.
|
This question and the answer given by @senderle really helped me with one of my projects. The answer is minimal and covers the core of most quaternion computations that one might need to perform.
For my own project, I found it tedious to have separate functions for all the operations and import them one by one every time I need one, so I implemented an object oriented version.
quaternion.py:
```
import numpy as np
from math import sin, cos, acos, sqrt
def normalize(v, tolerance=0.00001):
mag2 = sum(n * n for n in v)
if abs(mag2 - 1.0) > tolerance:
mag = sqrt(mag2)
v = tuple(n / mag for n in v)
return np.array(v)
class Quaternion:
def from_axisangle(theta, v):
theta = theta
v = normalize(v)
new_quaternion = Quaternion()
new_quaternion._axisangle_to_q(theta, v)
return new_quaternion
def from_value(value):
new_quaternion = Quaternion()
new_quaternion._val = value
return new_quaternion
def _axisangle_to_q(self, theta, v):
x = v[0]
y = v[1]
z = v[2]
w = cos(theta/2.)
x = x * sin(theta/2.)
y = y * sin(theta/2.)
z = z * sin(theta/2.)
self._val = np.array([w, x, y, z])
def __mul__(self, b):
if isinstance(b, Quaternion):
return self._multiply_with_quaternion(b)
elif isinstance(b, (list, tuple, np.ndarray)):
if len(b) != 3:
raise Exception(f"Input vector has invalid length {len(b)}")
return self._multiply_with_vector(b)
else:
raise Exception(f"Multiplication with unknown type {type(b)}")
def _multiply_with_quaternion(self, q2):
w1, x1, y1, z1 = self._val
w2, x2, y2, z2 = q2._val
w = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2
x = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2
y = w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2
z = w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2
result = Quaternion.from_value(np.array((w, x, y, z)))
return result
def _multiply_with_vector(self, v):
q2 = Quaternion.from_value(np.append((0.0), v))
return (self * q2 * self.get_conjugate())._val[1:]
def get_conjugate(self):
w, x, y, z = self._val
result = Quaternion.from_value(np.array((w, -x, -y, -z)))
return result
def __repr__(self):
theta, v = self.get_axisangle()
return f"((%.6f; %.6f, %.6f, %.6f))"%(theta, v[0], v[1], v[2])
def get_axisangle(self):
w, v = self._val[0], self._val[1:]
theta = acos(w) * 2.0
return theta, normalize(v)
def tolist(self):
return self._val.tolist()
def vector_norm(self):
w, v = self.get_axisangle()
return np.linalg.norm(v)
```
In this version, one can just use the overloaded operators for quaternion-quaternion and quaternion-vector multiplication:
```
from quaternion import Quaternion
import numpy as np
x_axis_unit = (1, 0, 0)
y_axis_unit = (0, 1, 0)
z_axis_unit = (0, 0, 1)
r1 = Quaternion.from_axisangle(np.pi / 2, x_axis_unit)
r2 = Quaternion.from_axisangle(np.pi / 2, y_axis_unit)
r3 = Quaternion.from_axisangle(np.pi / 2, z_axis_unit)
# Quaternion - vector multiplication
v = r1 * y_axis_unit
v = r2 * v
v = r3 * v
print(v)
# Quaternion - quaternion multiplication
r_total = r3 * r2 * r1
v = r_total * y_axis_unit
print(v)
```
I did not intend to implement a full-fledged quaternion module, so this is again for instructional purposes, as in @senderle's great answer. I hope this helps those who want to understand and try out new things with quaternions.
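As one possible extension (my own addition, not part of the class above): converting a quaternion to a 3x3 rotation matrix, using the standard formula and the `(w, x, y, z)` storage order of `_val`:
```
import numpy as np

def to_rotation_matrix(q):
    # Standard unit-quaternion -> rotation-matrix formula; assumes q is normalized.
    w, x, y, z = q._val
    return np.array([
        [1 - 2*(y*y + z*z),     2*(x*y - z*w),     2*(x*z + y*w)],
        [    2*(x*y + z*w), 1 - 2*(x*x + z*z),     2*(y*z - x*w)],
        [    2*(x*z - y*w),     2*(y*z + x*w), 1 - 2*(x*x + y*y)],
    ])
```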
|
4,870,393
|
We have a gazillion spatial coordinates (x, y and z) representing atoms in 3d space, and I'm constructing a function that will translate these points to a new coordinate system. Shifting the coordinates to an arbitrary origin is simple, but I can't wrap my head around the next step: 3d point rotation calculations. In other words, I'm trying to translate the points from (x, y, z) to (x', y', z'), where x', y' and z' are in terms of i', j' and k', the new axis vectors I'm making with the help of the [euclid python module](http://pypi.python.org/pypi/euclid/0.01).
I *think* all I need is a euclid quaternion to do this, i.e.
```
>>> q * Vector3(x, y, z)
Vector3(x', y', z')
```
but to make THAT i believe I need a rotation axis vector and an angle of rotation. But I have no idea how to calculate these from i', j' and k'. This seems like a simple procedure to code from scratch, but I suspect something like this requires linear algebra to figure out on my own. Many thanks for a nudge in the right direction.
|
2011/02/02
|
[
"https://Stackoverflow.com/questions/4870393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572043/"
] |
This question and the answer given by @senderle really helped me with one of my projects. The answer is minimal and covers the core of most quaternion computations that one might need to perform.
For my own project, I found it tedious to have separate functions for all the operations and import them one by one every time I need one, so I implemented an object oriented version.
quaternion.py:
```
import numpy as np
from math import sin, cos, acos, sqrt
def normalize(v, tolerance=0.00001):
mag2 = sum(n * n for n in v)
if abs(mag2 - 1.0) > tolerance:
mag = sqrt(mag2)
v = tuple(n / mag for n in v)
return np.array(v)
class Quaternion:
    @staticmethod
    def from_axisangle(theta, v):
        v = normalize(v)
        new_quaternion = Quaternion()
        new_quaternion._axisangle_to_q(theta, v)
        return new_quaternion
    @staticmethod
    def from_value(value):
        new_quaternion = Quaternion()
        new_quaternion._val = value
        return new_quaternion
def _axisangle_to_q(self, theta, v):
x = v[0]
y = v[1]
z = v[2]
w = cos(theta/2.)
x = x * sin(theta/2.)
y = y * sin(theta/2.)
z = z * sin(theta/2.)
self._val = np.array([w, x, y, z])
def __mul__(self, b):
if isinstance(b, Quaternion):
return self._multiply_with_quaternion(b)
elif isinstance(b, (list, tuple, np.ndarray)):
if len(b) != 3:
raise Exception(f"Input vector has invalid length {len(b)}")
return self._multiply_with_vector(b)
else:
raise Exception(f"Multiplication with unknown type {type(b)}")
def _multiply_with_quaternion(self, q2):
w1, x1, y1, z1 = self._val
w2, x2, y2, z2 = q2._val
w = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2
x = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2
y = w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2
z = w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2
result = Quaternion.from_value(np.array((w, x, y, z)))
return result
def _multiply_with_vector(self, v):
q2 = Quaternion.from_value(np.append((0.0), v))
return (self * q2 * self.get_conjugate())._val[1:]
def get_conjugate(self):
w, x, y, z = self._val
result = Quaternion.from_value(np.array((w, -x, -y, -z)))
return result
def __repr__(self):
theta, v = self.get_axisangle()
return f"((%.6f; %.6f, %.6f, %.6f))"%(theta, v[0], v[1], v[2])
def get_axisangle(self):
w, v = self._val[0], self._val[1:]
theta = acos(w) * 2.0
return theta, normalize(v)
def tolist(self):
return self._val.tolist()
def vector_norm(self):
w, v = self.get_axisangle()
return np.linalg.norm(v)
```
In this version, one can just use the overloaded operators for quaternion-quaternion and quaternion-vector multiplication:
```
from quaternion import Quaternion
import numpy as np
x_axis_unit = (1, 0, 0)
y_axis_unit = (0, 1, 0)
z_axis_unit = (0, 0, 1)
r1 = Quaternion.from_axisangle(np.pi / 2, x_axis_unit)
r2 = Quaternion.from_axisangle(np.pi / 2, y_axis_unit)
r3 = Quaternion.from_axisangle(np.pi / 2, z_axis_unit)
# Quaternion - vector multiplication
v = r1 * y_axis_unit
v = r2 * v
v = r3 * v
print(v)
# Quaternion - quaternion multiplication
r_total = r3 * r2 * r1
v = r_total * y_axis_unit
print(v)
```
I did not intend to implement a full-fledged quaternion module, so this is again for instructional purposes, as in @senderle's great answer. I hope this helps those who want to understand and try out new things with quaternions.
|
Note that matrix inversion is not trivial at all! Firstly, all n points (where n is the dimension of your space) must be in general position, i.e. no individual point can be expressed as a linear combination of the rest [caveat: this may seem a simple requirement indeed, but in the realm of numerical linear algebra it's nontrivial; the final decision on whether such a configuration really exists will ultimately rest on 'actual domain' specific knowledge].
Also, the 'correspondence' between the new and old points may not be exact, in which case you should use the best possible approximator of the 'true correspondence'. The pseudoinverse (instead of the plain inverse) is always recommended when your library provides it.
The pseudoinverse has the advantage that you'll be able to use more points for your transformation, hence increasing the probability that at least n points will be in general position.
Here is an example with `numpy`: rotating the unit square 90 degrees counterclockwise in 2D (but obviously this determination works in any dimension):
```
In []: P= matrix([[0, 0, 1, 1],
[0, 1, 1, 0]])
In []: Pn= matrix([[0, -1, -1, 0],
[0, 0, 1, 1]])
In []: T= Pn* pinv(P)
In []: (T* P).round()
Out[]:
matrix([[ 0., -1., -1., 0.],
[ 0., 0., 1., 1.]])
```
P.S. `numpy` is also fast. Transformation of 1 million points on my modest computer:
```
In []: P= matrix(rand(2, 1e6))
In []: %timeit T* P
10 loops, best of 3: 37.7 ms per loop
```
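For reference, the same determination with plain ndarrays (a sketch; `np.matrix` is discouraged in modern NumPy):
```
import numpy as np

P  = np.array([[0., 0., 1., 1.],
               [0., 1., 1., 0.]])
Pn = np.array([[0., -1., -1., 0.],
               [0.,  0.,  1., 1.]])
T = Pn @ np.linalg.pinv(P)   # least-squares fit of the transformation
print(np.round(T @ P))       # recovers Pn
```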
|
6,931,112
|
I was wondering how the random number functions work. I mean is the server time used or which other methods are used to generate random numbers? Are they really random numbers or do they lean to a certain pattern? Let's say in python:
```
import random
number = random.randint(1,10)
```
|
2011/08/03
|
[
"https://Stackoverflow.com/questions/6931112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/366121/"
] |
Try:
```
links.parent().addClass('current')
.closest('li').addClass('current');
```
[jQuery Docs](http://api.jquery.com/parent/)
|
Use the `.parent()` api to do it
[Docs](http://api.jquery.com/parent/)
**EDIT**
Something like this:
```
$(this).parent().parent().addClass("current")
```
|
6,931,112
|
I was wondering how the random number functions work. I mean is the server time used or which other methods are used to generate random numbers? Are they really random numbers or do they lean to a certain pattern? Let's say in python:
```
import random
number = random.randint(1,10)
```
|
2011/08/03
|
[
"https://Stackoverflow.com/questions/6931112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/366121/"
] |
Try:
```
links.parent().addClass('current')
.closest('li').addClass('current');
```
[jQuery Docs](http://api.jquery.com/parent/)
|
Seems to me like it's working. I only changed the hrefs in your links so they wouldn't be duplicates and added some CSS to show the highlighted elements [in this JSFiddle](http://jsfiddle.net/3BYwT/2/)
|
457,207
|
I was wondering if anyone had any experience in working programmatically with .pdf files. I have a .pdf file and I need to crop every page down to a certain size.
After a quick Google search I found the pyPdf library for python but my experiments with it failed. When I changed the cropBox and trimBox attributes on a page object the results were not what I had expected and appeared to be quite random.
Has anyone had any experience with this? Code examples would be well appreciated, preferably in python.
|
2009/01/19
|
[
"https://Stackoverflow.com/questions/457207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/367099/"
] |
Use this to get the dimensions of the PDF:
```
from PyPDF2 import PdfWriter, PdfReader, PdfMerger
reader = PdfReader("/Users/user.name/Downloads/sample.pdf")
page = reader.pages[0]
print(page.cropbox.lower_left)
print(page.cropbox.lower_right)
print(page.cropbox.upper_left)
print(page.cropbox.upper_right)
```
After this, get the page reference and then apply the crop command:
```
page.mediabox.lower_right = (lower_right_new_x_coordinate, lower_right_new_y_coordinate)
page.mediabox.lower_left = (lower_left_new_x_coordinate, lower_left_new_y_coordinate)
page.mediabox.upper_right = (upper_right_new_x_coordinate, upper_right_new_y_coordinate)
page.mediabox.upper_left = (upper_left_new_x_coordinate, upper_left_new_y_coordinate)
# for example: my custom coordinates
# page.mediabox.lower_right = (611, 500)
# page.mediabox.lower_left = (0, 500)
# page.mediabox.upper_right = (611, 700)
# page.mediabox.upper_left = (0, 700)
```
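To actually persist the crop, here is a minimal sketch of writing the modified page back out (assuming the `reader` and `page` objects from above, and the `PdfWriter` already imported):
```
writer = PdfWriter()
writer.add_page(page)
with open("cropped.pdf", "wb") as f:
    writer.write(f)
```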
|
Acrobat Javascript API has a setPageBoxes method, but Adobe doesn't provide any Python code samples. Only C++, C# and VB.
|
457,207
|
I was wondering if anyone had any experience in working programmatically with .pdf files. I have a .pdf file and I need to crop every page down to a certain size.
After a quick Google search I found the pyPdf library for python but my experiments with it failed. When I changed the cropBox and trimBox attributes on a page object the results were not what I had expected and appeared to be quite random.
Has anyone had any experience with this? Code examples would be well appreciated, preferably in python.
|
2009/01/19
|
[
"https://Stackoverflow.com/questions/457207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/367099/"
] |
You are probably looking for a free solution, but if you have money to spend, [PDFlib](http://pdflib.com) is a fabulous library. It has never disappointed me.
|
You can convert the PDF to PostScript (with pdftops or pdf2ps) and then use text processing on the PostScript file. After that you can convert the output back to PDF (with ps2pdf).
This works nicely if the PDFs you want to process are all generated by the same application and are somewhat similar. If they come from different sources it is usually too hard to process the PostScript files - the structure varies too much. But even then you might be able to fix page sizes and the like with a few regular expressions.
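A rough sketch of that round trip from Python (assuming the poppler `pdftops` and ghostscript `ps2pdf` command-line tools are installed):
```
import subprocess

subprocess.run(["pdftops", "in.pdf", "work.ps"], check=True)   # PDF -> PostScript
# ... do your text processing on work.ps here ...
subprocess.run(["ps2pdf", "work.ps", "out.pdf"], check=True)   # PostScript -> PDF
```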
|
457,207
|
I was wondering if anyone had any experience in working programmatically with .pdf files. I have a .pdf file and I need to crop every page down to a certain size.
After a quick Google search I found the pyPdf library for python but my experiments with it failed. When I changed the cropBox and trimBox attributes on a page object the results were not what I had expected and appeared to be quite random.
Has anyone had any experience with this? Code examples would be well appreciated, preferably in python.
|
2009/01/19
|
[
"https://Stackoverflow.com/questions/457207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/367099/"
] |
How do I know the coordinates to crop?
--------------------------------------
Thanks for all answers above.
Step 1. Run the following code to get (x1, y1).
```py
from PyPDF2 import PdfWriter, PdfReader
reader = PdfReader("test.pdf")
page = reader.pages[0]
print(page.cropbox.upper_right)
```
Step 2. View the pdf file in full screen mode.
Step 3. Capture the screen as an image file screen.jpg.
Step 4. Open screen.jpg with MS Paint or GIMP. These applications show the coordinates of the cursor.
Step 5. Remember the following coordinates, (x2, y2), (x3, y3), (x4, y4) and (x5, y5), where (x4, y4) and (x5, y5) determine the rectangle you want to crop.
[](https://i.stack.imgur.com/L6kux.png)
Step 6. Get page.cropbox.upper\_left and page.cropbox.lower\_right by the following formulas. Here is a [tool](https://jsfiddle.net/p9L6rsco/) for calculating.
```py
page.cropbox.upper_left = (x1*(x4-x2)/(x3-x2),(1-y4/y3)*y1)
page.cropbox.lower_right = (x1*(x5-x2)/(x3-x2),(1-y5/y3)*y1)
```
Step 7. Run the following code to crop the pdf file.
```py
from PyPDF2 import PdfWriter, PdfReader
reader = PdfReader('test.pdf')
writer = PdfWriter()
for page in reader.pages:
page.cropbox.upper_left = (100,200)
page.cropbox.lower_right = (300,400)
writer.add_page(page)
with open('result.pdf','wb') as fp:
writer.write(fp)
```
|
Acrobat Javascript API has a setPageBoxes method, but Adobe doesn't provide any Python code samples. Only C++, C# and VB.
|
457,207
|
I was wondering if anyone had any experience in working programmatically with .pdf files. I have a .pdf file and I need to crop every page down to a certain size.
After a quick Google search I found the pyPdf library for python but my experiments with it failed. When I changed the cropBox and trimBox attributes on a page object the results were not what I had expected and appeared to be quite random.
Has anyone had any experience with this? Code examples would be well appreciated, preferably in python.
|
2009/01/19
|
[
"https://Stackoverflow.com/questions/457207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/367099/"
] |
You are probably looking for a free solution, but if you have money to spend, [PDFlib](http://pdflib.com) is a fabulous library. It has never disappointed me.
|
Acrobat Javascript API has a setPageBoxes method, but Adobe doesn't provide any Python code samples. Only C++, C# and VB.
|
457,207
|
I was wondering if anyone had any experience in working programmatically with .pdf files. I have a .pdf file and I need to crop every page down to a certain size.
After a quick Google search I found the pyPdf library for python but my experiments with it failed. When I changed the cropBox and trimBox attributes on a page object the results were not what I had expected and appeared to be quite random.
Has anyone had any experience with this? Code examples would be well appreciated, preferably in python.
|
2009/01/19
|
[
"https://Stackoverflow.com/questions/457207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/367099/"
] |
Use this to get the dimensions of the PDF:
```
from PyPDF2 import PdfWriter, PdfReader, PdfMerger
reader = PdfReader("/Users/user.name/Downloads/sample.pdf")
page = reader.pages[0]
print(page.cropbox.lower_left)
print(page.cropbox.lower_right)
print(page.cropbox.upper_left)
print(page.cropbox.upper_right)
```
After this, get the page reference and then apply the crop command:
```
page.mediabox.lower_right = (lower_right_new_x_coordinate, lower_right_new_y_coordinate)
page.mediabox.lower_left = (lower_left_new_x_coordinate, lower_left_new_y_coordinate)
page.mediabox.upper_right = (upper_right_new_x_coordinate, upper_right_new_y_coordinate)
page.mediabox.upper_left = (upper_left_new_x_coordinate, upper_left_new_y_coordinate)
# for example: my custom coordinates
# page.mediabox.lower_right = (611, 500)
# page.mediabox.lower_left = (0, 500)
# page.mediabox.upper_right = (611, 700)
# page.mediabox.upper_left = (0, 700)
```
|
You can convert the PDF to PostScript (with pdftops or pdf2ps) and then use text processing on the PostScript file. After that you can convert the output back to PDF (with ps2pdf).
This works nicely if the PDFs you want to process are all generated by the same application and are somewhat similar. If they come from different sources it is usually too hard to process the PostScript files - the structure varies too much. But even then you might be able to fix page sizes and the like with a few regular expressions.
|
457,207
|
I was wondering if anyone had any experience in working programmatically with .pdf files. I have a .pdf file and I need to crop every page down to a certain size.
After a quick Google search I found the pyPdf library for python but my experiments with it failed. When I changed the cropBox and trimBox attributes on a page object the results were not what I had expected and appeared to be quite random.
Has anyone had any experience with this? Code examples would be well appreciated, preferably in python.
|
2009/01/19
|
[
"https://Stackoverflow.com/questions/457207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/367099/"
] |
[pyPdf](https://pypi.org/project/pyPdf/) does what I expect in this area. Using the following script:
```
#!/usr/bin/python
#
from pyPdf import PdfFileWriter, PdfFileReader
with open("in.pdf", "rb") as in_f:
input1 = PdfFileReader(in_f)
output = PdfFileWriter()
numPages = input1.getNumPages()
print "document has %s pages." % numPages
for i in range(numPages):
page = input1.getPage(i)
print page.mediaBox.getUpperRight_x(), page.mediaBox.getUpperRight_y()
page.trimBox.lowerLeft = (25, 25)
page.trimBox.upperRight = (225, 225)
page.cropBox.lowerLeft = (50, 50)
page.cropBox.upperRight = (200, 200)
output.addPage(page)
with open("out.pdf", "wb") as out_f:
output.write(out_f)
```
The resulting document has a trim box that is 200x200 points and starts at 25,25 points inside the media box.
The crop box is 25 points inside the trim box.
Here is how my sample document looks in acrobat professional after processing with the above code:
[](https://i.stack.imgur.com/vM7e6.png)
This document will appear blank when loaded in acrobat reader.
|
Cropping pages of a .pdf file
=============================
```
from PIL import Image

def ImageCrop():
    # Assumes the PDF page was already rendered to an image file (page_1.jpg);
    # the crop box is given in pixels as (left, top, right, bottom).
    img = Image.open("page_1.jpg")
    left = 90
    top = 580
    right = 1600
    bottom = 2000
    img_res = img.crop((left, top, right, bottom))
    img_res.save("page_1_cropped.jpg", 'JPEG')

ImageCrop()
```
|
457,207
|
I was wondering if anyone had any experience in working programmatically with .pdf files. I have a .pdf file and I need to crop every page down to a certain size.
After a quick Google search I found the pyPdf library for python but my experiments with it failed. When I changed the cropBox and trimBox attributes on a page object the results were not what I had expected and appeared to be quite random.
Has anyone had any experience with this? Code examples would be well appreciated, preferably in python.
|
2009/01/19
|
[
"https://Stackoverflow.com/questions/457207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/367099/"
] |
Acrobat Javascript API has a setPageBoxes method, but Adobe doesn't provide any Python code samples. Only C++, C# and VB.
|
Cropping pages of a .pdf file
=============================
```
from PIL import Image

def ImageCrop():
    # Assumes the PDF page was already rendered to an image file (page_1.jpg);
    # the crop box is given in pixels as (left, top, right, bottom).
    img = Image.open("page_1.jpg")
    left = 90
    top = 580
    right = 1600
    bottom = 2000
    img_res = img.crop((left, top, right, bottom))
    img_res.save("page_1_cropped.jpg", 'JPEG')

ImageCrop()
```
|
457,207
|
I was wondering if anyone had any experience in working programmatically with .pdf files. I have a .pdf file and I need to crop every page down to a certain size.
After a quick Google search I found the pyPdf library for python but my experiments with it failed. When I changed the cropBox and trimBox attributes on a page object the results were not what I had expected and appeared to be quite random.
Has anyone had any experience with this? Code examples would be well appreciated, preferably in python.
|
2009/01/19
|
[
"https://Stackoverflow.com/questions/457207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/367099/"
] |
[pyPdf](https://pypi.org/project/pyPdf/) does what I expect in this area. Using the following script:
```
#!/usr/bin/python
#
from pyPdf import PdfFileWriter, PdfFileReader
with open("in.pdf", "rb") as in_f:
input1 = PdfFileReader(in_f)
output = PdfFileWriter()
numPages = input1.getNumPages()
print "document has %s pages." % numPages
for i in range(numPages):
page = input1.getPage(i)
print page.mediaBox.getUpperRight_x(), page.mediaBox.getUpperRight_y()
page.trimBox.lowerLeft = (25, 25)
page.trimBox.upperRight = (225, 225)
page.cropBox.lowerLeft = (50, 50)
page.cropBox.upperRight = (200, 200)
output.addPage(page)
with open("out.pdf", "wb") as out_f:
output.write(out_f)
```
The resulting document has a trim box that is 200x200 points and starts at 25,25 points inside the media box.
The crop box is 25 points inside the trim box.
Here is how my sample document looks in acrobat professional after processing with the above code:
[](https://i.stack.imgur.com/vM7e6.png)
This document will appear blank when loaded in acrobat reader.
|
How do I know the coordinates to crop?
--------------------------------------
Thanks for all answers above.
Step 1. Run the following code to get (x1, y1).
```py
from PyPDF2 import PdfWriter, PdfReader
reader = PdfReader("test.pdf")
page = reader.pages[0]
print(page.cropbox.upper_right)
```
Step 2. View the pdf file in full screen mode.
Step 3. Capture the screen as an image file screen.jpg.
Step 4. Open screen.jpg with MS Paint or GIMP. These applications show the coordinates of the cursor.
Step 5. Remember the following coordinates, (x2, y2), (x3, y3), (x4, y4) and (x5, y5), where (x4, y4) and (x5, y5) determine the rectangle you want to crop.
[](https://i.stack.imgur.com/L6kux.png)
Step 6. Get page.cropbox.upper\_left and page.cropbox.lower\_right by the following formulas. Here is a [tool](https://jsfiddle.net/p9L6rsco/) for calculating.
```py
page.cropbox.upper_left = (x1*(x4-x2)/(x3-x2),(1-y4/y3)*y1)
page.cropbox.lower_right = (x1*(x5-x2)/(x3-x2),(1-y5/y3)*y1)
```
Step 7. Run the following code to crop the pdf file.
```py
from PyPDF2 import PdfWriter, PdfReader
reader = PdfReader('test.pdf')
writer = PdfWriter()
for page in reader.pages:
page.cropbox.upper_left = (100,200)
page.cropbox.lower_right = (300,400)
writer.add_page(page)
with open('result.pdf','wb') as fp:
writer.write(fp)
```
|
457,207
|
I was wondering if anyone had any experience in working programmatically with .pdf files. I have a .pdf file and I need to crop every page down to a certain size.
After a quick Google search I found the pyPdf library for python but my experiments with it failed. When I changed the cropBox and trimBox attributes on a page object the results were not what I had expected and appeared to be quite random.
Has anyone had any experience with this? Code examples would be well appreciated, preferably in python.
|
2009/01/19
|
[
"https://Stackoverflow.com/questions/457207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/367099/"
] |
Use this to get the dimensions of the PDF:
```
from PyPDF2 import PdfWriter, PdfReader, PdfMerger
reader = PdfReader("/Users/user.name/Downloads/sample.pdf")
page = reader.pages[0]
print(page.cropbox.lower_left)
print(page.cropbox.lower_right)
print(page.cropbox.upper_left)
print(page.cropbox.upper_right)
```
After this, get the page reference and then apply the crop command:
```
page.mediabox.lower_right = (lower_right_new_x_coordinate, lower_right_new_y_coordinate)
page.mediabox.lower_left = (lower_left_new_x_coordinate, lower_left_new_y_coordinate)
page.mediabox.upper_right = (upper_right_new_x_coordinate, upper_right_new_y_coordinate)
page.mediabox.upper_left = (upper_left_new_x_coordinate, upper_left_new_y_coordinate)
# for example: my custom coordinates
# page.mediabox.lower_right = (611, 500)
# page.mediabox.lower_left = (0, 500)
# page.mediabox.upper_right = (611, 700)
# page.mediabox.upper_left = (0, 700)
```
|
How do I know the coordinates to crop?
--------------------------------------
Thanks for all answers above.
Step 1. Run the following code to get (x1, y1).
```py
from PyPDF2 import PdfWriter, PdfReader
reader = PdfReader("test.pdf")
page = reader.pages[0]
print(page.cropbox.upper_right)
```
Step 2. View the pdf file in full screen mode.
Step 3. Capture the screen as an image file screen.jpg.
Step 4. Open screen.jpg with MS Paint or GIMP. These applications show the coordinates of the cursor.
Step 5. Remember the following coordinates, (x2, y2), (x3, y3), (x4, y4) and (x5, y5), where (x4, y4) and (x5, y5) determine the rectangle you want to crop.
[](https://i.stack.imgur.com/L6kux.png)
Step 6. Get page.cropbox.upper\_left and page.cropbox.lower\_right by the following formulas. Here is a [tool](https://jsfiddle.net/p9L6rsco/) for calculating.
```py
page.cropbox.upper_left = (x1*(x4-x2)/(x3-x2),(1-y4/y3)*y1)
page.cropbox.lower_right = (x1*(x5-x2)/(x3-x2),(1-y5/y3)*y1)
```
Step 7. Run the following code to crop the pdf file.
```py
from PyPDF2 import PdfWriter, PdfReader
reader = PdfReader('test.pdf')
writer = PdfWriter()
for page in reader.pages:
page.cropbox.upper_left = (100,200)
page.cropbox.lower_right = (300,400)
writer.add_page(page)
with open('result.pdf','wb') as fp:
writer.write(fp)
```
|
457,207
|
I was wondering if anyone had any experience in working programmatically with .pdf files. I have a .pdf file and I need to crop every page down to a certain size.
After a quick Google search I found the pyPdf library for python but my experiments with it failed. When I changed the cropBox and trimBox attributes on a page object the results were not what I had expected and appeared to be quite random.
Has anyone had any experience with this? Code examples would be well appreciated, preferably in python.
|
2009/01/19
|
[
"https://Stackoverflow.com/questions/457207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/367099/"
] |
How do I know the coordinates to crop?
--------------------------------------
Thanks for all answers above.
Step 1. Run the following code to get (x1, y1).
```py
from PyPDF2 import PdfWriter, PdfReader
reader = PdfReader("test.pdf")
page = reader.pages[0]
print(page.cropbox.upper_right)
```
Step 2. View the pdf file in full screen mode.
Step 3. Capture the screen as an image file screen.jpg.
Step 4. Open screen.jpg with MS Paint or GIMP. These applications show the coordinates of the cursor.
Step 5. Remember the following coordinates, (x2, y2), (x3, y3), (x4, y4) and (x5, y5), where (x4, y4) and (x5, y5) determine the rectangle you want to crop.
[](https://i.stack.imgur.com/L6kux.png)
Step 6. Get page.cropbox.upper\_left and page.cropbox.lower\_right by the following formulas. Here is a [tool](https://jsfiddle.net/p9L6rsco/) for calculating.
```py
page.cropbox.upper_left = (x1*(x4-x2)/(x3-x2),(1-y4/y3)*y1)
page.cropbox.lower_right = (x1*(x5-x2)/(x3-x2),(1-y5/y3)*y1)
```
Step 7. Run the following code to crop the pdf file.
```py
from PyPDF2 import PdfWriter, PdfReader
reader = PdfReader('test.pdf')
writer = PdfWriter()
for page in reader.pages:
page.cropbox.upper_left = (100,200)
page.cropbox.lower_right = (300,400)
writer.add_page(page)
with open('result.pdf','wb') as fp:
writer.write(fp)
```
|
You are probably looking for a free solution, but if you have money to spend, [PDFlib](http://pdflib.com) is a fabulous library. It has never disappointed me.
|
16,128,572
|
I have number of molecules in smiles format and I want to get molecular name from smiles format of molecule and I want to use python for that conversion.
for example :
```
CN1CCC[C@H]1c2cccnc2 - Nicotine
OCCc1c(C)[n+](=cs1)Cc2cnc(C)nc(N)2 - Thiamin
```
which python module will help me in doing such conversions?
Kindly let me know.
|
2013/04/21
|
[
"https://Stackoverflow.com/questions/16128572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/778942/"
] |
I don't know of any one module that will let you do this; I had to play data wrangler to try to get a satisfactory answer.
I tackled this using Wikipedia, which is being used more and more for structured bioinformatics / chemoinformatics data, but as it turned out, my program reveals that a lot of that data is incorrect.
I used urllib to submit a SPARQL query to dbpedia, first searching for the SMILES string and, failing that, searching for the molecular weight of the compound.
```
import sys
import urllib
import urllib2
import traceback
import pybel
import json
def query(q,epr,f='application/json'):
try:
params = {'query': q}
params = urllib.urlencode(params)
opener = urllib2.build_opener(urllib2.HTTPHandler)
request = urllib2.Request(epr+'?'+params)
request.add_header('Accept', f)
request.get_method = lambda: 'GET'
url = opener.open(request)
return url.read()
except Exception, e:
traceback.print_exc(file=sys.stdout)
raise e
url = 'http://dbpedia.org/sparql'
q1 = '''
select ?name where {
?s <http://dbpedia.org/property/smiles> "%s"@en.
?s rdfs:label ?name.
FILTER(LANG(?name) = "" || LANGMATCHES(LANG(?name), "en"))
}
limit 10
'''
q2 = '''
select ?name where {
?s <http://dbpedia.org/property/molecularWeight> '%s'^^xsd:double.
?s rdfs:label ?name.
FILTER(LANG(?name) = "" || LANGMATCHES(LANG(?name), "en"))
}
limit 10
'''
smiles = filter(None, '''
CN1CCC[C@H]1c2cccnc2
CN(CCC1)[C@@H]1C2=CC=CN=C2
OCCc1c(C)[n+](=cs1)Cc2cnc(C)nc(N)2
Cc1nnc2CN=C(c3ccccc3)c4cc(Cl)ccc4-n12
CN1C(=O)CN=C(c2ccccc2)c3cc(Cl)ccc13
CCc1nn(C)c2c(=O)[nH]c(nc12)c3cc(ccc3OCC)S(=O)(=O)N4CCN(C)CC4
CC(C)(N)Cc1ccccc1
CN(C)C(=O)Cc1c(nc2ccc(C)cn12)c3ccc(C)cc3
COc1ccc2[nH]c(nc2c1)S(=O)Cc3ncc(C)c(OC)c3C
CCN(CC)C(=O)[C@H]1CN(C)[C@@H]2Cc3c[nH]c4cccc(C2=C1)c34
'''.splitlines())
OBMolecules = {}
for smile in smiles:
try:
OBMolecules[smile] = pybel.readstring('smi', smile)
except Exception as e:
print e
for smi in smiles:
print '--------------'
print smi
try:
print "searching by smiles string.."
results = json.loads(query(q1 % smi, url))
if len(results['results']['bindings']) == 0:
raise Exception('no results from smiles')
else:
print 'NAME: ', results['results']['bindings'][0]['name']['value']
except Exception as e:
print e
try:
mol_weight = round(OBMolecules[smi].molwt, 2)
print "search ing by molecular weight %s" % mol_weight
results = json.loads(query(q2 % mol_weight, url))
if len(results['results']['bindings']) == 0:
raise Exception('no results from molecular weight')
else:
print 'NAME: ', results['results']['bindings'][0]['name']['value']
except Exception as e:
print e
```
output...
```
--------------
CN1CCC[C@H]1c2cccnc2
searching by smiles string..
no results from smiles
searching by molecular weight 162.23
NAME: Anabasine
--------------
CN(CCC1)[C@@H]1C2=CC=CN=C2
searching by smiles string..
no results from smiles
searching by molecular weight 162.23
NAME: Anabasine
--------------
OCCc1c(C)[n+](=cs1)Cc2cnc(C)nc(N)2
searching by smiles string..
no results from smiles
searching by molecular weight 267.37
NAME: Pipradrol
--------------
Cc1nnc2CN=C(c3ccccc3)c4cc(Cl)ccc4-n12
searching by smiles string..
no results from smiles
searching by molecular weight 308.76
no results from molecular weight
--------------
CN1C(=O)CN=C(c2ccccc2)c3cc(Cl)ccc13
searching by smiles string..
no results from smiles
searching by molecular weight 284.74
NAME: Mazindol
--------------
CCc1nn(C)c2c(=O)[nH]c(nc12)c3cc(ccc3OCC)S(=O)(=O)N4CCN(C)CC4
searching by smiles string..
no results from smiles
searching by molecular weight 460.55
no results from molecular weight
--------------
CC(C)(N)Cc1ccccc1
searching by smiles string..
no results from smiles
searching by molecular weight 149.23
NAME: Phenpromethamine
--------------
CN(C)C(=O)Cc1c(nc2ccc(C)cn12)c3ccc(C)cc3
searching by smiles string..
no results from smiles
searching by molecular weight 307.39
NAME: Talastine
--------------
COc1ccc2[nH]c(nc2c1)S(=O)Cc3ncc(C)c(OC)c3C
searching by smiles string..
no results from smiles
searching by molecular weight 345.42
no results from molecular weight
--------------
CCN(CC)C(=O)[C@H]1CN(C)[C@@H]2Cc3c[nH]c4cccc(C2=C1)c34
searching by smiles string..
no results from smiles
searching by molecular weight 323.43
NAME: Lysergic acid diethylamide
```
As you can see, the first two results, which *should* be nicotine, come out wrong; this is because the Wikipedia entry for nicotine reports the molecular mass in the molecular weight field.
|
Reference: [NCI/CADD](https://cactus.nci.nih.gov/chemical/structure)
```
from urllib.request import urlopen

def CIRconvert(smi):
    try:
        url = "https://cactus.nci.nih.gov/chemical/structure/" + smi + "/iupac_name"
        ans = urlopen(url).read().decode('utf8')
        return ans
    except Exception:
        return 'Name Not Available'
smiles = 'CCCCC(C)CC'
print(smiles, CIRconvert(smiles))
```
Output:
CCCCC(C)CC - **3-Methylheptane**
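For several molecules, you can just loop over the same helper (each call makes one network request to the CIR service, so this can be slow for large batches):
```
identifiers = ['CN1CCC[C@H]1c2cccnc2', 'OCCc1c(C)[n+](=cs1)Cc2cnc(C)nc(N)2', 'CCCCC(C)CC']
for smi in identifiers:
    print(smi, CIRconvert(smi))
```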
|
16,128,572
|
I have number of molecules in smiles format and I want to get molecular name from smiles format of molecule and I want to use python for that conversion.
for example :
```
CN1CCC[C@H]1c2cccnc2 - Nicotine
OCCc1c(C)[n+](=cs1)Cc2cnc(C)nc(N)2 - Thiamin
```
which python module will help me in doing such conversions?
Kindly let me know.
|
2013/04/21
|
[
"https://Stackoverflow.com/questions/16128572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/778942/"
] |
I don't know of any one module that will let you do this; I had to play data wrangler to try to get a satisfactory answer.
I tackled this using Wikipedia, which is being used more and more for structured bioinformatics / chemoinformatics data, but as it turned out, my program reveals that a lot of that data is incorrect.
I used urllib to submit a SPARQL query to dbpedia, first searching for the SMILES string and, failing that, searching for the molecular weight of the compound.
```
import sys
import urllib
import urllib2
import traceback
import pybel
import json
def query(q,epr,f='application/json'):
try:
params = {'query': q}
params = urllib.urlencode(params)
opener = urllib2.build_opener(urllib2.HTTPHandler)
request = urllib2.Request(epr+'?'+params)
request.add_header('Accept', f)
request.get_method = lambda: 'GET'
url = opener.open(request)
return url.read()
except Exception, e:
traceback.print_exc(file=sys.stdout)
raise e
url = 'http://dbpedia.org/sparql'
q1 = '''
select ?name where {
?s <http://dbpedia.org/property/smiles> "%s"@en.
?s rdfs:label ?name.
FILTER(LANG(?name) = "" || LANGMATCHES(LANG(?name), "en"))
}
limit 10
'''
q2 = '''
select ?name where {
?s <http://dbpedia.org/property/molecularWeight> '%s'^^xsd:double.
?s rdfs:label ?name.
FILTER(LANG(?name) = "" || LANGMATCHES(LANG(?name), "en"))
}
limit 10
'''
smiles = filter(None, '''
CN1CCC[C@H]1c2cccnc2
CN(CCC1)[C@@H]1C2=CC=CN=C2
OCCc1c(C)[n+](=cs1)Cc2cnc(C)nc(N)2
Cc1nnc2CN=C(c3ccccc3)c4cc(Cl)ccc4-n12
CN1C(=O)CN=C(c2ccccc2)c3cc(Cl)ccc13
CCc1nn(C)c2c(=O)[nH]c(nc12)c3cc(ccc3OCC)S(=O)(=O)N4CCN(C)CC4
CC(C)(N)Cc1ccccc1
CN(C)C(=O)Cc1c(nc2ccc(C)cn12)c3ccc(C)cc3
COc1ccc2[nH]c(nc2c1)S(=O)Cc3ncc(C)c(OC)c3C
CCN(CC)C(=O)[C@H]1CN(C)[C@@H]2Cc3c[nH]c4cccc(C2=C1)c34
'''.splitlines())
OBMolecules = {}
for smile in smiles:
try:
OBMolecules[smile] = pybel.readstring('smi', smile)
except Exception as e:
print e
for smi in smiles:
print '--------------'
print smi
try:
print "searching by smiles string.."
results = json.loads(query(q1 % smi, url))
if len(results['results']['bindings']) == 0:
raise Exception('no results from smiles')
else:
print 'NAME: ', results['results']['bindings'][0]['name']['value']
except Exception as e:
print e
try:
mol_weight = round(OBMolecules[smi].molwt, 2)
print "search ing by molecular weight %s" % mol_weight
results = json.loads(query(q2 % mol_weight, url))
if len(results['results']['bindings']) == 0:
raise Exception('no results from molecular weight')
else:
print 'NAME: ', results['results']['bindings'][0]['name']['value']
except Exception as e:
print e
```
output...
```
--------------
CN1CCC[C@H]1c2cccnc2
searching by smiles string..
no results from smiles
searching by molecular weight 162.23
NAME: Anabasine
--------------
CN(CCC1)[C@@H]1C2=CC=CN=C2
searching by smiles string..
no results from smiles
searching by molecular weight 162.23
NAME: Anabasine
--------------
OCCc1c(C)[n+](=cs1)Cc2cnc(C)nc(N)2
searching by smiles string..
no results from smiles
searching by molecular weight 267.37
NAME: Pipradrol
--------------
Cc1nnc2CN=C(c3ccccc3)c4cc(Cl)ccc4-n12
searching by smiles string..
no results from smiles
searching by molecular weight 308.76
no results from molecular weight
--------------
CN1C(=O)CN=C(c2ccccc2)c3cc(Cl)ccc13
searching by smiles string..
no results from smiles
searching by molecular weight 284.74
NAME: Mazindol
--------------
CCc1nn(C)c2c(=O)[nH]c(nc12)c3cc(ccc3OCC)S(=O)(=O)N4CCN(C)CC4
searching by smiles string..
no results from smiles
searching by molecular weight 460.55
no results from molecular weight
--------------
CC(C)(N)Cc1ccccc1
searching by smiles string..
no results from smiles
searching by molecular weight 149.23
NAME: Phenpromethamine
--------------
CN(C)C(=O)Cc1c(nc2ccc(C)cn12)c3ccc(C)cc3
searching by smiles string..
no results from smiles
searching by molecular weight 307.39
NAME: Talastine
--------------
COc1ccc2[nH]c(nc2c1)S(=O)Cc3ncc(C)c(OC)c3C
searching by smiles string..
no results from smiles
searching by molecular weight 345.42
no results from molecular weight
--------------
CCN(CC)C(=O)[C@H]1CN(C)[C@@H]2Cc3c[nH]c4cccc(C2=C1)c34
searching by smiles string..
no results from smiles
searching by molecular weight 323.43
NAME: Lysergic acid diethylamide
```
As you can see, the first two results, which *should* be nicotine, come out wrong; this is because the Wikipedia entry for nicotine reports the molecular mass in the molecular weight field.
|
Download the RDKit module and use something like this:
```
from rdkit import Chem
from rdkit.Chem import Draw

ms_smis = [["CN1CCC[C@H]1c2cccnc2", "Nicotine"],
["OCCc1c(C)[n+](=cs1)Cc2cnc(C)nc(N)2", "Thiamin"]]
ms = [[Chem.MolFromSmiles(x[0]), x[1]] for x in ms_smis]
for m in ms: Draw.MolToFile(m[0], m[1] + ".png", size=(800, 800))
```
Here is the documentation:
<https://www.rdkit.org/docs/GettingStartedInPython.html>
|
16,128,572
|
I have number of molecules in smiles format and I want to get molecular name from smiles format of molecule and I want to use python for that conversion.
for example :
```
CN1CCC[C@H]1c2cccnc2 - Nicotine
OCCc1c(C)[n+](=cs1)Cc2cnc(C)nc(N)2 - Thiamin
```
which python module will help me in doing such conversions?
Kindly let me know.
|
2013/04/21
|
[
"https://Stackoverflow.com/questions/16128572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/778942/"
] |
There is a section in the [open babel documentation](http://openbabel.org/docs/dev/Fingerprints/fingerprints.html#similarity-searching) on similarity searching you may want to look at; you could combine this with an SDF file derived from [Chembl](https://www.ebi.ac.uk/chembl/downloads).
I will give this a go later as it may be much more fruitful than my previous answer!
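For what it's worth, here is a minimal pybel similarity sketch (my assumption: Open Babel's default FP2 fingerprints; the `|` operator on two fingerprints returns their Tanimoto coefficient):
```
import pybel

a = pybel.readstring('smi', 'CN1CCC[C@H]1c2cccnc2')
b = pybel.readstring('smi', 'CN(CCC1)[C@@H]1C2=CC=CN=C2')
print(a.calcfp() | b.calcfp())  # Tanimoto similarity of the two molecules
```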
|
Reference: [NCI/CADD](https://cactus.nci.nih.gov/chemical/structure)
```
from urllib.request import urlopen

def CIRconvert(smi):
    try:
        url = "https://cactus.nci.nih.gov/chemical/structure/" + smi + "/iupac_name"
        ans = urlopen(url).read().decode('utf8')
        return ans
    except Exception:
        return 'Name Not Available'
smiles = 'CCCCC(C)CC'
print(smiles, CIRconvert(smiles))
```
Output:
CCCCC(C)CC - **3-Methylheptane**
|
16,128,572
|
I have number of molecules in smiles format and I want to get molecular name from smiles format of molecule and I want to use python for that conversion.
for example :
```
CN1CCC[C@H]1c2cccnc2 - Nicotine
OCCc1c(C)[n+](=cs1)Cc2cnc(C)nc(N)2 - Thiamin
```
which python module will help me in doing such conversions?
Kindly let me know.
|
2013/04/21
|
[
"https://Stackoverflow.com/questions/16128572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/778942/"
] |
There is a section in the [open babel documentation](http://openbabel.org/docs/dev/Fingerprints/fingerprints.html#similarity-searching) on similarity searching you may want to look at; you could combine this with an SDF file derived from [Chembl](https://www.ebi.ac.uk/chembl/downloads).
I will give this a go later as it may be much more fruitful than my previous answer!
|
Download the RDKit module and use something like this:
```
from rdkit import Chem
from rdkit.Chem import Draw

ms_smis = [["CN1CCC[C@H]1c2cccnc2", "Nicotine"],
["OCCc1c(C)[n+](=cs1)Cc2cnc(C)nc(N)2", "Thiamin"]]
ms = [[Chem.MolFromSmiles(x[0]), x[1]] for x in ms_smis]
for m in ms: Draw.MolToFile(m[0], m[1] + ".png", size=(800, 800))
```
Here is the documentation:
<https://www.rdkit.org/docs/GettingStartedInPython.html>
|
68,236,416
|
I am trying to install django using pipenv but failing to do so.
```
pipenv install django
Creating a virtualenv for this project...
Pipfile: /home/djangoTry/Pipfile
Using /usr/bin/python3.9 (3.9.5) to create virtualenv...
⠴ Creating virtual environment...RuntimeError: failed to query /usr/bin/python3.9 with code 1 err: 'Traceback (most recent call last):\n File "/home/.local/lib/python3.6/site-packages/virtualenv/discovery/py_info.py", line 16, in <module>\n from distutils import dist\nImportError: cannot import name \'dist\' from \'distutils\' (/usr/lib/python3.9/distutils/__init__.py)\n'
✘ Failed creating virtual environment
[pipenv.exceptions.VirtualenvCreationException]:
Failed to create virtual environment.
```
Other info
```
python --version
Python 2.7.17
python3 --version
Python 3.6.9
python3.9 --version
Python 3.9.4
pip3 --version
pip 21.1 from /home/aman/.local/lib/python3.6/site-packages/pip (python 3.6)
```
Please help.Thanks
|
2021/07/03
|
[
"https://Stackoverflow.com/questions/68236416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16091170/"
] |
The problem isn't with installing Django; it's with creating your venv. I don't create it that way, so I'll share the way that I do.
Step 1 - Install pip
Step 2 - Install virtualenv
[check here how to install both](https://gist.github.com/Geoyi/d9fab4f609e9f75941946be45000632b)
Once installed, run `python3 -m venv venv`; this will create a venv in your current directory. Activate it with `. venv/bin/activate` and then run `pip install django`. That should work.
|
Try `pipenv install --python 3.9`
|
72,664,661
|
I’m new to python and trying to get a loop working. I’m trying to iterate through a dataframe – for each unique value in the “Acct” column I want to create a csv file, with the account name. So there should be an Account1.csv, Account2.csv and Account3.csv created.
When I run this code, only Account3.csv is being created (I’m guessing the others are being created but overwritten somehow?)
```
data = [('Account1', 'ibm', 20),
('Account1', 'aapl', 30),
('Account1', 'googl', 15),
('Account1', 'cvs', 18),
('Account2', 'wal', 24),
('Account2', 'abc', 65),
('Account2', 'bmw', 98),
('Account2', 'deo', 70),
('Account2', 'glen', 40),
('Account3', 'aapl', 50),
('Account3', 'googl', 68),
('Account3', 'orcl', 95)]
data_df = pd.DataFrame(data, columns = ['Acct', 'ticker', 'qty'])
acctlist = data_df["Acct"].drop_duplicates()
save_to = "c:/temp/csv/"
save_as = Acct + ".csv"
for Acct in acctlist:
#print(data_df[data_df["Acct"] == Acct])
Acct_df = (data_df[data_df["Acct"] == Acct])
Acct_df.to_csv(save_to + save_as)
```
|
2022/06/17
|
[
"https://Stackoverflow.com/questions/72664661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19361388/"
] |
You need to use a `TextInput` component to handle this kind of event. You can find the API here: <https://reactnative.dev/docs/textinput>.
Bear in mind that the React library is built as an API so it can take advantage of different renderers. But the issue is that when using the mobile native renderer you will not have the DOM properties you would have in a web view; it is very different.
|
First, you add `onKeyDown={keyDownHandler}` to a React element. This will trigger your `keyDownHandler` function each time a key is pressed. Ex:
```
<div className='example-class' onKeyDown={keyDownHandler}>
```
You can set up your function like this:
```
const keyDownHandler = (
event: React.KeyboardEvent<HTMLDivElement>
) => {
if (event.code === 'ArrowDown') {
// do something
} else if (event.code === 'A') {
// do something
}
}
```
|
72,664,661
|
I’m new to python and trying to get a loop working. I’m trying to iterate through a dataframe – for each unique value in the “Acct” column I want to create a csv file, with the account name. So there should be an Account1.csv, Account2.csv and Account3.csv created.
When I run this code, only Account3.csv is being created (I’m guessing the others are being created but overwritten somehow?)
```
data = [('Account1', 'ibm', 20),
('Account1', 'aapl', 30),
('Account1', 'googl', 15),
('Account1', 'cvs', 18),
('Account2', 'wal', 24),
('Account2', 'abc', 65),
('Account2', 'bmw', 98),
('Account2', 'deo', 70),
('Account2', 'glen', 40),
('Account3', 'aapl', 50),
('Account3', 'googl', 68),
('Account3', 'orcl', 95)]
data_df = pd.DataFrame(data, columns = ['Acct', 'ticker', 'qty'])
acctlist = data_df["Acct"].drop_duplicates()
save_to = "c:/temp/csv/"
save_as = Acct + ".csv"
for Acct in acctlist:
#print(data_df[data_df["Acct"] == Acct])
Acct_df = (data_df[data_df["Acct"] == Acct])
Acct_df.to_csv(save_to + save_as)
```
|
2022/06/17
|
[
"https://Stackoverflow.com/questions/72664661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19361388/"
] |
I am not sure about the View component, but for TextInput this might work!
```
<TextInput
onKeyPress={handleKeyDown}
placeholder="Sample Text"
/>
```
and for function part:
```
const handleKeyDown = (e) => {
if(e.nativeEvent.key == "Enter"){
dismissKeyboard();
}
}
```
|
First, you add `onKeyDown={keyDownHandler}` to a React element. This will trigger your `keyDownHandler` function each time a key is pressed. Ex:
```
<div className='example-class' onKeyDown={keyDownHandler}>
```
You can set up your function like this:
```
const keyDownHandler = (
event: React.KeyboardEvent<HTMLDivElement>
) => {
if (event.code === 'ArrowDown') {
// do something
} else if (event.code === 'A') {
// do something
}
}
```
|
72,664,661
|
I’m new to python and trying to get a loop working. I’m trying to iterate through a dataframe – for each unique value in the “Acct” column I want to create a csv file, with the account name. So there should be an Account1.csv, Account2.csv and Account3.csv created.
When I run this code, only Account3.csv is being created (I’m guessing the others are being created but overwritten somehow?)
```
data = [('Account1', 'ibm', 20),
('Account1', 'aapl', 30),
('Account1', 'googl', 15),
('Account1', 'cvs', 18),
('Account2', 'wal', 24),
('Account2', 'abc', 65),
('Account2', 'bmw', 98),
('Account2', 'deo', 70),
('Account2', 'glen', 40),
('Account3', 'aapl', 50),
('Account3', 'googl', 68),
('Account3', 'orcl', 95)]
data_df = pd.DataFrame(data, columns = ['Acct', 'ticker', 'qty'])
acctlist = data_df["Acct"].drop_duplicates()
save_to = "c:/temp/csv/"
save_as = Acct + ".csv"
for Acct in acctlist:
#print(data_df[data_df["Acct"] == Acct])
Acct_df = (data_df[data_df["Acct"] == Acct])
Acct_df.to_csv(save_to + save_as)
```
|
2022/06/17
|
[
"https://Stackoverflow.com/questions/72664661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19361388/"
] |
Well, it's not the most elegant way, but it does exactly as you want:
```
import { Keyboard, TextInput } from 'react-native';
<TextInput autoFocus onFocus={() => Keyboard.dismiss()} />
```
The keyboard key event listener will get key presses now without showing the keyboard.
Hope this helps.
|
First, you add `onKeyDown={keyDownHandler}` to a React element. This will trigger your `keyDownHandler` function each time a key is pressed. Ex:
```
<div className='example-class' onKeyDown={keyDownHandler}>
```
You can set up your function like this:
```
const keyDownHandler = (
event: React.KeyboardEvent<HTMLDivElement>
) => {
if (event.code === 'ArrowDown') {
// do something
} else if (event.code === 'A') {
// do something
}
}
```
|
72,664,661
|
I’m new to python and trying to get a loop working. I’m trying to iterate through a dataframe – for each unique value in the “Acct” column I want to create a csv file, with the account name. So there should be an Account1.csv, Account2.csv and Account3.csv created.
When I run this code, only Account3.csv is being created (I’m guessing the others are being created but overwritten somehow?)
```
data = [('Account1', 'ibm', 20),
('Account1', 'aapl', 30),
('Account1', 'googl', 15),
('Account1', 'cvs', 18),
('Account2', 'wal', 24),
('Account2', 'abc', 65),
('Account2', 'bmw', 98),
('Account2', 'deo', 70),
('Account2', 'glen', 40),
('Account3', 'aapl', 50),
('Account3', 'googl', 68),
('Account3', 'orcl', 95)]
data_df = pd.DataFrame(data, columns = ['Acct', 'ticker', 'qty'])
acctlist = data_df["Acct"].drop_duplicates()
save_to = "c:/temp/csv/"
save_as = Acct + ".csv"
for Acct in acctlist:
#print(data_df[data_df["Acct"] == Acct])
Acct_df = (data_df[data_df["Acct"] == Acct])
Acct_df.to_csv(save_to + save_as)
```
|
2022/06/17
|
[
"https://Stackoverflow.com/questions/72664661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19361388/"
] |
Well, it's not the most elegant way, but it does exactly as you want:
```
import { Keyboard, TextInput } from 'react-native';
<TextInput autoFocus onFocus={() => Keyboard.dismiss()} />
```
The keyboard key event listener will get key presses now without showing the keyboard.
Hope this helps.
|
You need to use a `TextInput` component to handle this kind of event. You can find the API here: <https://reactnative.dev/docs/textinput>.
Bear in mind that the React library is built as an API so it can take advantage of different renderers. But the issue is that when using the mobile native renderer you will not have the DOM properties you would have in a web view; it is very different.
|
72,664,661
|
I’m new to python and trying to get a loop working. I’m trying to iterate through a dataframe – for each unique value in the “Acct” column I want to create a csv file, with the account name. So there should be an Account1.csv, Account2.csv and Account3.csv created.
When I run this code, only Account3.csv is being created (I’m guessing the others are being created but overwritten somehow?)
```
data = [('Account1', 'ibm', 20),
('Account1', 'aapl', 30),
('Account1', 'googl', 15),
('Account1', 'cvs', 18),
('Account2', 'wal', 24),
('Account2', 'abc', 65),
('Account2', 'bmw', 98),
('Account2', 'deo', 70),
('Account2', 'glen', 40),
('Account3', 'aapl', 50),
('Account3', 'googl', 68),
('Account3', 'orcl', 95)]
data_df = pd.DataFrame(data, columns = ['Acct', 'ticker', 'qty'])
acctlist = data_df["Acct"].drop_duplicates()
save_to = "c:/temp/csv/"
save_as = Acct + ".csv"
for Acct in acctlist:
#print(data_df[data_df["Acct"] == Acct])
Acct_df = (data_df[data_df["Acct"] == Acct])
Acct_df.to_csv(save_to + save_as)
```
|
2022/06/17
|
[
"https://Stackoverflow.com/questions/72664661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19361388/"
] |
Well, it's not the most elegant way, but it does exactly as you want:
```
import { Keyboard, TextInput } from 'react-native';
<TextInput autoFocus onFocus={() => Keyboard.dismiss()} />
```
The keyboard key event listener will get key presses now without showing the keyboard.
Hope this helps.
|
I am not sure about the View component, but for TextInput this might work!
```
<TextInput
onKeyPress={handleKeyDown}
placeholder="Sample Text"
/>
```
and for function part:
```
const handleKeyDown = (e) => {
if(e.nativeEvent.key == "Enter"){
dismissKeyboard();
}
}
```
|
43,536,182
|
i am having on HDFS this huge file which is an extract of my db. e.g.:
```
1||||||1||||||||||||||0002||01||1999-06-01 16:18:38||||2999-12-31 00:00:00||||||||||||||||||||||||||||||||||||||||||||||||||||||||2||||0||W.ISHIHARA||||1999-06-01 16:18:38||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||19155||||||||||||||1||1||NBV||||||||||||||U||||||||N||||||||||||||||||||||
1||||||8||2000-08-25 00:00:00||||||||3||||0001||01||1999-06-01 16:26:16||||1999-06-01 17:57:10||||||||||300||||||PH||400||Yes||PH||0255097�`||400||||1||103520||||||1||4||10||||20||||||||||2||||0||S.OSARI||1961-10-05 00:00:00||1999-06-01 16:26:16||�o��������������||�o��������������||1||||����||||1||1994-01-24 00:00:00||2||||||75||1999-08-25 00:00:00||1999-08-25 00:00:00||0||1||||4||||||�l��������������||�o��������������||�l��������������||||�o��������������||NP||||�l��������������||�l��������������||||||5||19055||||||||||1||||8||1||NBV||||||||||||||U||||||||N||||||||||||||||||||||
```
* Size of the file: 40GB
* Number of records: ~120 000 000
* Number of fields: 112
* Field sep: ||
* Line sep: \n
* Encoding: sjis
I want to load this file in hive using pyspark (1.6 with python 3). But my jobs keep failing.
Here is my code:
```
toProcessFileDF = sc.binaryFiles("MyFile")\
.flatMap(lambda x: x[1].split(b'\n'))\
.map(lambda x: x.decode('sjis'))\
.filter(lambda x: x.count('|')==sepCnt*2)\
.map(lambda x: x.split('||'))\
.toDF(schema=tableSchema) #tableSchema is the schema retrieved from hive
toProcessFileDF.write.saveAsTable(tableName, mode='append')
```
I received several errors but amongst other, jave 143 (memory error), heartbeat timeout, and kernel has died. (let me know if you need the exact log errors).
Is it the right way to do it ? Maybe there is a more clever way or more efficient. Can you advice me anything on how to perform this ?
|
2017/04/21
|
[
"https://Stackoverflow.com/questions/43536182",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5013752/"
] |
I found the databrick csv reader quite useful to do that.
```
toProcessFileDF_raw = sqlContext.read.format('com.databricks.spark.csv')\
.options(header='false',
inferschema='false',
charset='shift-jis',
delimiter='\t')\
.load(toProcessFile)
```
Unfortunately, the delimiter option only accepts a single character. Therefore my solution is to split on tab, because I am sure I do not have any tabs in my file, and then apply a split on each line.
This is not perfect, but at least I get the right encoding and I do not load everything into memory.
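As a sketch of that follow-up split (assuming spark-csv's default column name `C0` for the single, tab-free column):
```
from pyspark.sql.functions import split, col

# '||' must be escaped because split() takes a regular expression.
fields_df = toProcessFileDF_raw.select(split(col('C0'), '\\|\\|').alias('fields'))
```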
|
Logs would have been helpful.
`binaryFiles` reads the whole file from HDFS as a single binary record and returns it in a key-value pair, where the key is the path of the file and the value is its content.
Note from Spark documentation: Small files are preferred, large file is also allowable, but may cause bad performance.
It would be better to use `textFile`:
```
toProcessFileDF = sc.textFile("MyFile")\
.map(lambda line: line.split("||"))\
....
```
Also, one option would be to specify a larger number of partitions when reading the initial text file (e.g. `sc.textFile(path, 200000)`) rather than re-partitioning after reading.
Another important thing is to make sure that your input file is splittable (some compression options make it not splittable, and in that case Spark may have to read it on a single machine causing OOMs).
|
60,054,247
|
I have a python list of list that I would like to insert into a MySQL database.
The list looks like this:
```
[[column_name_1,value_1],[column_name_2, value_2],.......]
```
I want to insert this into the database table which has column names as `column_name_1, column_name_2.....`
so that the database table looks like this:
```
column_name_1 | column_name_2 | column_name_3 | column_name_4
value_1 | value_2 | value_3 | value_4
value_1 | value_2 | value_3 | value_4
value_1 | value_2 | value_3 | value_4
```
How do I make this happen in python?
Thanks in advance!
|
2020/02/04
|
[
"https://Stackoverflow.com/questions/60054247",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12675531/"
] |
You have several options.
```
working_directory=$PWD
python3.8 main.py "$working_directory"
```
This will pass the value of your variable as a command line argument. In Python you will use `sys.argv[1]` to access it.
```
export working_directory=$PWD
python3.8 main.py
```
This will add an environment variable using `export`. In Python you can access this using `import os` and `os.getenv('working_directory')`.
You can also use this syntax:
```
working_directory=$PWD python3.8 main.py
```
This results in the same thing (an environment variable) without cluttering the environment of the calling shell.
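On the Python side, both approaches look like this (a minimal sketch):
```
import os
import sys

wd_from_arg = sys.argv[1] if len(sys.argv) > 1 else None  # command-line argument
wd_from_env = os.getenv('working_directory')              # environment variable
print(wd_from_arg, wd_from_env)
```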
|
You can use the `pathlib` library to avoid putting it as a parameter:
```
import pathlib
working_directory = pathlib.Path().absolute()
```
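For what it's worth, `pathlib.Path.cwd()` expresses the same intent a little more directly:
```
import pathlib
working_directory = pathlib.Path.cwd()
```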
|
7,521,623
|
I am trying to parse HTML with BeautifulSoup.
The content I want is like this:
```
<a class="yil-biz-ttl" id="yil_biz_ttl-2" href="http://some-web-url/" title="some title">Title</a>
```
I tried and got the following error:
```
maxx = soup.findAll("href", {"class: "yil-biz-ttl"})
------------------------------------------------------------
File "<ipython console>", line 1
maxx = soup.findAll("href", {"class: "yil-biz-ttl"})
^
SyntaxError: invalid syntax
```
What I want is the string: `http://some-web-url/`
|
2011/09/22
|
[
"https://Stackoverflow.com/questions/7521623",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/648866/"
] |
You're missing a close-quote after `"class`:
```
maxx = soup.findAll("href", {"class: "yil-biz-ttl"})
```
should be
```
maxx = soup.findAll("href", {"class": "yil-biz-ttl"})
```
Also, I don't think you can search for an attribute like `href` that way; I think you need to search for a tag:
```
maxx = [link['href'] for link in soup.findAll("a", {"class": "yil-biz-ttl"})]
```
|
Well, first of all you have a syntax error: your quotes are wrong in the `class` part.
Try:
`maxx = soup.findAll("href", {"class": "yil-biz-ttl"})`
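Note that this only fixes the quoting; `findAll("href", ...)` searches for `<href>` tags, which don't exist, so to actually get the URLs you still need to find the `<a>` tags and read their `href` attribute, e.g.:
```
maxx = [a['href'] for a in soup.findAll("a", {"class": "yil-biz-ttl"})]
```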
|
7,521,623
|
I am trying to parse HTML with BeautifulSoup.
The content I want is like this:
```
<a class="yil-biz-ttl" id="yil_biz_ttl-2" href="http://some-web-url/" title="some title">Title</a>
```
I tried and got the following error:
```
maxx = soup.findAll("href", {"class: "yil-biz-ttl"})
------------------------------------------------------------
File "<ipython console>", line 1
maxx = soup.findAll("href", {"class: "yil-biz-ttl"})
^
SyntaxError: invalid syntax
```
What I want is the string: `http://some-web-url/`
|
2011/09/22
|
[
"https://Stackoverflow.com/questions/7521623",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/648866/"
] |
```
soup.findAll('a', {'class': 'yil-biz-ttl'})[0]['href']
```
To find all such links:
```
for link in soup.findAll('a', {'class': 'yil-biz-ttl'}):
try:
print link['href']
except KeyError:
pass
```
|
You're missing a close-quote after `"class`:
```
maxx = soup.findAll("href", {"class: "yil-biz-ttl"})
```
should be
```
maxx = soup.findAll("href", {"class": "yil-biz-ttl"})
```
Also, I don't think you can search for an attribute like `href` that way; I think you need to search for a tag:
```
maxx = [link['href'] for link in soup.findAll("a", {"class": "yil-biz-ttl"})]
```
|
7,521,623
|
I am trying to parse HTML with BeautifulSoup.
The content I want is like this:
```
<a class="yil-biz-ttl" id="yil_biz_ttl-2" href="http://some-web-url/" title="some title">Title</a>
```
I tried and got the following error:
```
maxx = soup.findAll("href", {"class: "yil-biz-ttl"})
------------------------------------------------------------
File "<ipython console>", line 1
maxx = soup.findAll("href", {"class: "yil-biz-ttl"})
^
SyntaxError: invalid syntax
```
What I want is the string: `http://some-web-url/`
|
2011/09/22
|
[
"https://Stackoverflow.com/questions/7521623",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/648866/"
] |
```
soup.findAll('a', {'class': 'yil-biz-ttl'})[0]['href']
```
To find all such links:
```
for link in soup.findAll('a', {'class': 'yil-biz-ttl'}):
try:
print link['href']
except KeyError:
pass
```
|
Well, first of all you have a syntax error: your quotes are wrong in the `class` part.
Try:
`maxx = soup.findAll("href", {"class": "yil-biz-ttl"})`
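Note that this only fixes the quoting; `findAll("href", ...)` searches for `<href>` tags, which don't exist, so to actually get the URLs you still need to find the `<a>` tags and read their `href` attribute, e.g.:
```
maxx = [a['href'] for a in soup.findAll("a", {"class": "yil-biz-ttl"})]
```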
|
7,521,623
|
I am trying to parse HTML with BeautifulSoup.
The content I want is like this:
```
<a class="yil-biz-ttl" id="yil_biz_ttl-2" href="http://some-web-url/" title="some title">Title</a>
```
I tried and got the following error:
```
maxx = soup.findAll("href", {"class: "yil-biz-ttl"})
------------------------------------------------------------
File "<ipython console>", line 1
maxx = soup.findAll("href", {"class: "yil-biz-ttl"})
^
SyntaxError: invalid syntax
```
What I want is the string: `http://some-web-url/`
|
2011/09/22
|
[
"https://Stackoverflow.com/questions/7521623",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/648866/"
] |
To find all `<a/>` elements from the CSS class `"yil-biz-ttl"` that have an `href` attribute with anything in it:
```
from bs4 import BeautifulSoup # $ pip install beautifulsoup4
soup = BeautifulSoup(HTML)
for link in soup("a", "yil-biz-ttl", href=True):
print(link['href'])
```
At the moment, none of the other answers satisfies the above requirements.
|
Well, first of all you have a syntax error: your quotes are wrong in the `class` part.
Try:
`maxx = soup.findAll("href", {"class": "yil-biz-ttl"})`
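Note that this only fixes the quoting; `findAll("href", ...)` searches for `<href>` tags, which don't exist, so to actually get the URLs you still need to find the `<a>` tags and read their `href` attribute, e.g.:
```
maxx = [a['href'] for a in soup.findAll("a", {"class": "yil-biz-ttl"})]
```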
|
7,521,623
|
I am trying to parse HTML with BeautifulSoup.
The content I want is like this:
```
<a class="yil-biz-ttl" id="yil_biz_ttl-2" href="http://some-web-url/" title="some title">Title</a>
```
I tried and got the following error:
```
maxx = soup.findAll("href", {"class: "yil-biz-ttl"})
------------------------------------------------------------
File "<ipython console>", line 1
maxx = soup.findAll("href", {"class: "yil-biz-ttl"})
^
SyntaxError: invalid syntax
```
What I want is the string: `http://some-web-url/`
|
2011/09/22
|
[
"https://Stackoverflow.com/questions/7521623",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/648866/"
] |
```
soup.findAll('a', {'class': 'yil-biz-ttl'})[0]['href']
```
To find all such links:
```
for link in soup.findAll('a', {'class': 'yil-biz-ttl'}):
try:
print link['href']
except KeyError:
pass
```
|
To find all `<a/>` elements from the CSS class `"yil-biz-ttl"` that have an `href` attribute with anything in it:
```
from bs4 import BeautifulSoup # $ pip install beautifulsoup4
soup = BeautifulSoup(HTML)
for link in soup("a", "yil-biz-ttl", href=True):
print(link['href'])
```
At the moment, none of the other answers satisfies the above requirements.
|
4,302,201
|
I'll give a simplified example here. Suppose I have an iterator in Python, and each object that this iterator returns is itself iterable. I want to take all the objects returned by this iterator and chain them together into one long iterator. Is there a standard utility to make this possible?
Here is a contrived example.
```
x = iter([ xrange(0,5), xrange(5,10)])
```
`x` is an iterator that returns iterators, and I want to chain all the iterators returned by `x` into one big iterator. The result of such an operation in this example should be equivalent to `xrange(0,10)`, and it should be lazily evaluated.
|
2010/11/29
|
[
"https://Stackoverflow.com/questions/4302201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/125921/"
] |
You can use [`itertools.chain.from_iterable`](http://docs.python.org/library/itertools.html#itertools.chain.from_iterable)
```
>>> import itertools
>>> x = iter([ xrange(0,5), xrange(5,10)])
>>> a = itertools.chain.from_iterable(x)
>>> list(a)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```
If that's not available on your version (apparently, it's new in 2.6), you can just do it manually:
```
>>> x = iter([ xrange(0,5), xrange(5,10)])
>>> a = (i for subiter in x for i in subiter)
>>> list(a)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```
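A related caveat, for completeness: `itertools.chain(*x)` also flattens, but the `*` unpacking consumes the outer iterator eagerly, so only `chain.from_iterable` keeps the whole pipeline lazy:
```
>>> import itertools
>>> x = iter([xrange(0, 5), xrange(5, 10)])
>>> list(itertools.chain(*x))  # works, but exhausts x up front
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```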
|
Here's the old-fashioned way:
```
Python 2.2.3 (#42, May 30 2003, 18:12:08) [MSC 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from __future__ import generators
>>> def mychain(iterables):
... for it in iterables:
... for item in it:
... yield item
...
>>> x = iter([ xrange(0,5), xrange(5,10)])
>>> a = mychain(x)
>>> list(a)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>>
```
I'm not sure what the purpose or benefit of using `iter()` is, in this case:
```
>>> x = [xrange(0,5), xrange(5,10)]
>>> a = mychain(x)
>>> list(a)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>>
```
|
22,385,108
|
On the `operator` module, we have the `or_` function, [which is the bitwise or](http://docs.python.org/2/library/operator.html#operator.or_) (`|`).
However I can't seem to find the logical or (`or`).
The documentation [doesn't seem to list it](http://docs.python.org/2/library/operator.html#mapping-operators-to-functions).
I'm wondering why it isn't included. Is it not considered an operator?
Is there a builtin function that provides its behaviour?
|
2014/03/13
|
[
"https://Stackoverflow.com/questions/22385108",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1595865/"
] |
The `or` operator *short circuits*; the right-hand expression is not evaluated when the left-hand returns a true value. This applies to the `and` operator as well; when the left-hand side expression returns a false value, the right-hand expression is not evaluated.
You could not do this with a function; all operands *have* to be evaluated before the function can be called. As such, there is no equivalent function for it in the `operator` module.
Compare:
```
foo = None
result = foo and foo(bar)
```
with
```
foo = None
result = operator.boolean_and(foo, foo(bar)) # hypothetical and implementation
```
The latter expression will fail, because you cannot use `None` as a callable. The first version works, because the `and` operator won't evaluate the `foo(bar)` expression.
|
The reason you are getting `1` after executing `>>> 1 or 2` is that `1` is truthy, so the expression is already satisfied.
It might make more sense if you try executing `>>> 0 or 2`. It will return `2` because the first operand is `0`, which is falsy, so the second operand gets evaluated. In this case `2` evaluates to a true boolean value.
The `and` and `or` operators evaluate the first value and then plug the result into the "AND and OR" truth tables. In the case of `and`, the second value is only considered if the first evaluates to true. In the case of `or`, if the first value evaluates to true the expression is true and can return; otherwise the second value is evaluated.
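A quick interactive illustration of these rules:
```
>>> 1 or 2
1
>>> 0 or 2
2
>>> 1 and 2
2
>>> 0 and 2
0
```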
|
22,385,108
|
On the `operator` module, we have the `or_` function, [which is the bitwise or](http://docs.python.org/2/library/operator.html#operator.or_) (`|`).
However I can't seem to find the logical or (`or`).
The documentation [doesn't seem to list it](http://docs.python.org/2/library/operator.html#mapping-operators-to-functions).
I'm wondering why it isn't included. Is it not considered an operator?
Is there a builtin function that provides its behaviour?
|
2014/03/13
|
[
"https://Stackoverflow.com/questions/22385108",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1595865/"
] |
The `or` operator *short circuits*; the right-hand expression is not evaluated when the left-hand returns a true value. This applies to the `and` operator as well; when the left-hand side expression returns a false value, the right-hand expression is not evaluated.
You could not do this with a function; all operands *have* to be evaluated before the function can be called. As such, there is no equivalent function for it in the `operator` module.
Compare:
```
foo = None
result = foo and foo(bar)
```
with
```
foo = None
result = operator.boolean_and(foo, foo(bar)) # hypothetical and implementation
```
The latter expression will fail, because you cannot use `None` as a callable. The first version works, because the `and` operator won't evaluate the `foo(bar)` expression.
|
The closest thing to a built-in `or` function is [any](http://docs.python.org/2.7/library/functions.html#any):
```
>>> any((1, 2))
True
```
If you wanted to duplicate `or`'s functionality of returning non-boolean operands, you could use [next](http://docs.python.org/2.7/library/functions.html#next) with a filter:
```
>>> next(operand for operand in (1, 2) if operand)
1
```
But like Martijn said, neither is a true drop-in replacement for `or`, because `or` short-circuits. A true `or` function would have to accept functions in order to avoid evaluating all the operands:
```
logical_or(lambda: 1, lambda: 2)
```
This is somewhat unwieldy, and would be inconsistent with the rest of the `operator` module, so it's probably best that it's left out and you use other explicit methods instead.
|
22,385,108
|
On the `operator` module, we have the `or_` function, [which is the bitwise or](http://docs.python.org/2/library/operator.html#operator.or_) (`|`).
However I can't seem to find the logical or (`or`).
The documentation [doesn't seem to list it](http://docs.python.org/2/library/operator.html#mapping-operators-to-functions).
I'm wondering why it isn't included. Is it not considered an operator?
Is there a builtin function that provides its behaviour?
|
2014/03/13
|
[
"https://Stackoverflow.com/questions/22385108",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1595865/"
] |
The `or` operator *short circuits*; the right-hand expression is not evaluated when the left-hand returns a true value. This applies to the `and` operator as well; when the left-hand side expression returns a false value, the right-hand expression is not evaluated.
You could not do this with a function; all operands *have* to be evaluated before the function can be called. As such, there is no equivalent function for it in the `operator` module.
Compare:
```
foo = None
result = foo and foo(bar)
```
with
```
foo = None
result = operator.boolean_and(foo, foo(bar)) # hypothetical and implementation
```
The latter expression will fail, because you cannot use `None` as a callable. The first version works, because the `and` operator won't evaluate the `foo(bar)` expression.
|
It's not possible:
------------------
[This can explicitly be found in the docs](http://docs.python.org/2/reference/expressions.html#boolean-operations):
>
> The expression x or y first evaluates x; if x is true, its value is
> returned; otherwise, y is evaluated and the resulting value is
> returned.
>
>
>
It does not exist as an `operator` function because, per the language specification, it is impossible to implement: you cannot delay evaluation of an argument when calling a function. Here is an example of `or` in action:
```
def foo():
return 'foo'
def bar():
raise RuntimeError
```
If `bar` is called, we get a `RuntimeError`. And looking at the following line, we see that Python short-circuits the evaluation of the line, since `foo` returns a `True`-ish value.
```
>>> foo() or bar()
'foo'
```
We can approximate:
-------------------
We can simulate this behavior by passing in uncalled functions, and then calling them inside our `or` function:
```
def my_or(*funcs):
for func in funcs:
call = func()
if call:
return call
return call
>>> my_or(foo, bar)
'foo'
```
But you *cannot* short-circuit the evaluation of callables that are already called when passed to a function:
```
>>> my_or(foo, bar())
Traceback (most recent call last):
File "<pyshell#28>", line 1, in <module>
    my_or(foo, bar())
File "<pyshell#24>", line 2, in bar
raise RuntimeError
RuntimeError
```
*So it would be improper to include such a function in the built-ins or standard library because users would expect an `or` function to work just as a boolean `or` test, which again, is impossible.*
|
22,385,108
|
On the `operator` module, we have the `or_` function, [which is the bitwise or](http://docs.python.org/2/library/operator.html#operator.or_) (`|`).
However I can't seem to find the logical or (`or`).
The documentation [doesn't seem to list it](http://docs.python.org/2/library/operator.html#mapping-operators-to-functions).
I'm wondering why it isn't included. Is it not considered an operator?
Is there a builtin function that provides its behaviour?
|
2014/03/13
|
[
"https://Stackoverflow.com/questions/22385108",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1595865/"
] |
The closest thing to a built-in `or` function is [any](http://docs.python.org/2.7/library/functions.html#any):
```
>>> any((1, 2))
True
```
If you wanted to duplicate `or`'s functionality of returning non-boolean operands, you could use [next](http://docs.python.org/2.7/library/functions.html#next) with a filter:
```
>>> next(operand for operand in (1, 2) if operand)
1
```
But like Martijn said, neither is a true drop-in replacement for `or`, because `or` short-circuits. A true `or` function would have to accept functions in order to avoid evaluating all the operands:
```
logical_or(lambda: 1, lambda: 2)
```
This is somewhat unwieldy, and would be inconsistent with the rest of the `operator` module, so it's probably best that it's left out and you use other explicit methods instead.
|
The reason you are getting `1` after executing `>>> 1 or 2` is that `1` is truthy, so the expression is already satisfied.
It might make more sense if you try executing `>>> 0 or 2`. It will return `2` because the first operand is `0`, which is falsy, so the second operand gets evaluated. In this case `2` evaluates to a true boolean value.
The `and` and `or` operators evaluate the first value and then plug the result into the "AND and OR" truth tables. In the case of `and`, the second value is only considered if the first evaluates to true. In the case of `or`, if the first value evaluates to true the expression is true and can return; otherwise the second value is evaluated.
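A quick interactive illustration of these rules:
```
>>> 1 or 2
1
>>> 0 or 2
2
>>> 1 and 2
2
>>> 0 and 2
0
```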
|