| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
3,330,280
|
I'm weaving my C code into Python to speed up the loop:
```
from scipy import weave
from numpy import *

# 1) create the array
a = zeros((200, 300, 400), int)
for i in range(200):
    for j in range(300):
        for k in range(400):
            a[i, j, k] = i*300*400 + j*400 + k

# 2) test C code that accesses the array
code = """
for (int i = 0; i < 200; ++i) {
    for (int j = 0; j < 300; ++j) {
        for (int k = 0; k < 400; ++k) {
            printf("%ld,", a[i*300*400 + j*400 + k]);
        }
        printf("\\n");
    }
    printf("\\n\\n");
}
"""
test = weave.inline(code, ['a'])
```
It works well, but it is still costly when the array is big.
Someone suggested I use `a.strides` instead of the nasty `a[i*300*400+j*400+k]`.
I can't make sense of the documentation on `.strides`.
Any ideas?
Thanks in advance
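For reference, `.strides` is simply the number of bytes to step in memory for a unit increase of each index; a quick illustration (assuming 8-byte ints, as on most 64-bit builds):
```
import numpy as np

a = np.zeros((200, 300, 400), int)
print(a.strides)  # (960000, 3200, 8): bytes per unit step along axes 0, 1, 2
```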
|
2010/07/25
|
[
"https://Stackoverflow.com/questions/3330280",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/389799/"
] |
You could replace the 3 for-loops with
```
import numpy as np

grid = np.ogrid[0:200, 0:300, 0:400]
a = grid[0]*300*400 + grid[1]*400 + grid[2]
```
The following suggests this may result in a ~68x (or better? see below) speedup:
```
% python -mtimeit -s"import test" "test.m1()"
100 loops, best of 3: 17.5 msec per loop
% python -mtimeit -s"import test" "test.m2()"
1000 loops, best of 3: 247 usec per loop
```
test.py:
```
import numpy as np

n1, n2, n3 = 20, 30, 40

def m1():
    a = np.zeros((n1, n2, n3), int)
    for i in range(n1):
        for j in range(n2):
            for k in range(n3):
                a[i, j, k] = i*300*400 + j*400 + k
    return a

def m2():
    grid = np.ogrid[0:n1, 0:n2, 0:n3]
    b = grid[0]*300*400 + grid[1]*400 + grid[2]
    return b

if __name__ == '__main__':
    assert np.all(m1() == m2())
```
With n1,n2,n3 = 200,300,400,
```
python -mtimeit -s"import test" "test.m2()"
```
took 182 ms on my machine, and
```
python -mtimeit -s"import test" "test.m1()"
```
has yet to finish.
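As an aside, since each cell's value is just its flat (C-order) index when the shape is (200, 300, 400), the whole array can also be built directly; a small sketch:
```
import numpy as np

# equivalent to m1()/m2() for the full-size case, where the multipliers match the shape
a = np.arange(200 * 300 * 400).reshape(200, 300, 400)
```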
|
I really hope you didn't run the loop with all the print statements, as Justin already noted. Besides that:
```
from scipy import weave
import numpy as np

n1, n2, n3 = 200, 300, 400

def m1():
    a = np.zeros((n1, n2, n3), int)
    for i in xrange(n1):
        for j in xrange(n2):
            for k in xrange(n3):
                a[i, j, k] = i*300*400 + j*400 + k
    return a

def m2():
    grid = np.ogrid[0:n1, 0:n2, 0:n3]
    b = grid[0]*300*400 + grid[1]*400 + grid[2]
    return b

def m3():
    a = np.zeros((n1, n2, n3), int)
    code = """
    int rows = Na[0];
    int cols = Na[1];
    int depth = Na[2];
    int val = 0;
    for (int i=0; i<rows; i++) {
        for (int j=0; j<cols; j++) {
            for (int k=0; k<depth; k++) {
                val = (i*cols + j)*depth + k;
                a[val] = val;
            }
        }
    }"""
    weave.inline(code, ['a'])
    return a

%timeit m1()
%timeit m2()
%timeit m3()
np.all(m1() == m2())
np.all(m2() == m3())
```
Gives me:
```
1 loops, best of 3: 19.6 s per loop
1 loops, best of 3: 248 ms per loop
10 loops, best of 3: 144 ms per loop
```
That seems pretty reasonable. If you want to speed it up further, you probably want to start using your GPU, which is well suited to number crunching like this.
In this special case, you could even do:
```
def m4():
    a = np.zeros((n1, n2, n3), int)
    code = """
    int rows = Na[0];
    int cols = Na[1];
    int depth = Na[2];
    for (int i=0; i<rows*cols*depth; i++) {
        a[i] = i;
    }"""
    weave.inline(code, ['a'])
    return a
```
But this does not get much better, since `np.zeros()` already takes most of the time:
```
%timeit np.zeros((n1,n2,n3), int)
10 loops, best of 3: 113 ms per loop
```
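If the zero-fill itself matters, it may be worth timing `np.empty`, which skips initialization; a sketch (only safe because every cell is overwritten afterwards):
```
import numpy as np

a = np.empty((200, 300, 400), int)  # uninitialized memory; fill it completely before use
```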
|
3,330,280
|
I'm weaving my C code into Python to speed up the loop:
```
from scipy import weave
from numpy import *

# 1) create the array
a = zeros((200, 300, 400), int)
for i in range(200):
    for j in range(300):
        for k in range(400):
            a[i, j, k] = i*300*400 + j*400 + k

# 2) test C code that accesses the array
code = """
for (int i = 0; i < 200; ++i) {
    for (int j = 0; j < 300; ++j) {
        for (int k = 0; k < 400; ++k) {
            printf("%ld,", a[i*300*400 + j*400 + k]);
        }
        printf("\\n");
    }
    printf("\\n\\n");
}
"""
test = weave.inline(code, ['a'])
```
It works well, but it is still costly when the array is big.
Someone suggested I use `a.strides` instead of the nasty `a[i*300*400+j*400+k]`.
I can't make sense of the documentation on `.strides`.
Any ideas?
Thanks in advance
|
2010/07/25
|
[
"https://Stackoverflow.com/questions/3330280",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/389799/"
] |
The problem is that you are printing 24 million numbers to the screen in your C code. That is of course going to take a while, because each number has to be converted into a string and then written to the screen. Do you really need to print them all to the screen? What is your end goal here?
For comparison, I tried just assigning each element of `a` to another array. That took about 0.05 seconds in weave. I gave up on timing the printing of all the elements to the screen after 30 seconds or so.
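A rough sketch of that comparison, using the same flat indexing as the question's code (the array names are illustrative):
```
from scipy import weave
import numpy as np

a = np.arange(200 * 300 * 400).reshape(200, 300, 400)
b = np.zeros_like(a)
code = """
for (int i = 0; i < Na[0] * Na[1] * Na[2]; ++i) {
    b[i] = a[i];   // plain assignment: no string conversion, no I/O
}
"""
weave.inline(code, ['a', 'b'])
assert (a.ravel() == b.ravel()).all()
```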
|
There is no way to speed up accessing a multidimensional array in C. You have to calculate the array index and you have to dereference it; it is as simple as it gets.
|
3,330,280
|
I'm weaving my C code into Python to speed up the loop:
```
from scipy import weave
from numpy import *

# 1) create the array
a = zeros((200, 300, 400), int)
for i in range(200):
    for j in range(300):
        for k in range(400):
            a[i, j, k] = i*300*400 + j*400 + k

# 2) test C code that accesses the array
code = """
for (int i = 0; i < 200; ++i) {
    for (int j = 0; j < 300; ++j) {
        for (int k = 0; k < 400; ++k) {
            printf("%ld,", a[i*300*400 + j*400 + k]);
        }
        printf("\\n");
    }
    printf("\\n\\n");
}
"""
test = weave.inline(code, ['a'])
```
It works well, but it is still costly when the array is big.
Someone suggested I use `a.strides` instead of the nasty `a[i*300*400+j*400+k]`.
I can't make sense of the documentation on `.strides`.
Any ideas?
Thanks in advance
|
2010/07/25
|
[
"https://Stackoverflow.com/questions/3330280",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/389799/"
] |
The problem is that you are printing 24 million numbers to the screen in your C code. That is of course going to take a while, because each number has to be converted into a string and then written to the screen. Do you really need to print them all to the screen? What is your end goal here?
For comparison, I tried just assigning each element of `a` to another array. That took about 0.05 seconds in weave. I gave up on timing the printing of all the elements to the screen after 30 seconds or so.
|
I really hope you didn't run the loop with all the print statements, as Justin already noted. Besides that:
```
from scipy import weave
import numpy as np

n1, n2, n3 = 200, 300, 400

def m1():
    a = np.zeros((n1, n2, n3), int)
    for i in xrange(n1):
        for j in xrange(n2):
            for k in xrange(n3):
                a[i, j, k] = i*300*400 + j*400 + k
    return a

def m2():
    grid = np.ogrid[0:n1, 0:n2, 0:n3]
    b = grid[0]*300*400 + grid[1]*400 + grid[2]
    return b

def m3():
    a = np.zeros((n1, n2, n3), int)
    code = """
    int rows = Na[0];
    int cols = Na[1];
    int depth = Na[2];
    int val = 0;
    for (int i=0; i<rows; i++) {
        for (int j=0; j<cols; j++) {
            for (int k=0; k<depth; k++) {
                val = (i*cols + j)*depth + k;
                a[val] = val;
            }
        }
    }"""
    weave.inline(code, ['a'])
    return a

%timeit m1()
%timeit m2()
%timeit m3()
np.all(m1() == m2())
np.all(m2() == m3())
```
Gives me:
```
1 loops, best of 3: 19.6 s per loop
1 loops, best of 3: 248 ms per loop
10 loops, best of 3: 144 ms per loop
```
That seems pretty reasonable. If you want to speed it up further, you probably want to start using your GPU, which is well suited to number crunching like this.
In this special case, you could even do:
```
def m4():
    a = np.zeros((n1, n2, n3), int)
    code = """
    int rows = Na[0];
    int cols = Na[1];
    int depth = Na[2];
    for (int i=0; i<rows*cols*depth; i++) {
        a[i] = i;
    }"""
    weave.inline(code, ['a'])
    return a
```
But this does not get much better, since `np.zeros()` already takes most of the time:
```
%timeit np.zeros((n1,n2,n3), int)
10 loops, best of 3: 113 ms per loop
```
|
74,091,484
|
I am trying to complete a JavaScript project for which I need to import a library; however, npm gives me errors whenever I try to use it.
The error:
```
npm WARN deprecated har-validator@5.1.5: this library is no longer supported
npm WARN deprecated uuid@3.4.0: Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See https://v8.dev/blog/math-random for details.
npm WARN deprecated request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
npm ERR! code 1
npm ERR! path C:\Users\danie\OneDrive\Desktop\science fair\node_modules\gl
npm ERR! command failed
npm ERR! command C:\Windows\system32\cmd.exe /d /s /c prebuild-install || node-gyp rebuild
npm ERR! gyp info it worked if it ends with ok
npm ERR! gyp info using node-gyp@7.1.2
npm ERR! gyp info using node@16.18.0 | win32 | x64
npm ERR! gyp info find Python using Python version 3.10.8 found at "C:\Python310\python.exe"
npm ERR! gyp ERR! find VS
npm ERR! gyp ERR! find VS msvs_version was set from command line or npm config
npm ERR! gyp ERR! find VS - looking for Visual Studio version 2015
npm ERR! gyp ERR! find VS VCINSTALLDIR not set, not running in VS Command Prompt
npm ERR! gyp ERR! find VS unknown version "undefined" found at "D:\Applications\VS studio"
npm ERR! gyp ERR! find VS checking VS2019 (16.11.32929.386) found at:
npm ERR! gyp ERR! find VS "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools"
npm ERR! gyp ERR! find VS - found "Visual Studio C++ core features"
npm ERR! gyp ERR! find VS - found VC++ toolset: v142
npm ERR! gyp ERR! find VS - missing any Windows SDK
npm ERR! gyp ERR! find VS could not find a version of Visual Studio 2017 or newer to use
npm ERR! gyp ERR! find VS looking for Visual Studio 2015
npm ERR! gyp ERR! find VS - not found
npm ERR! gyp ERR! find VS not looking for VS2013 as it is only supported up to Node.js 8
npm ERR! gyp ERR! find VS
npm ERR! gyp ERR! find VS valid versions for msvs_version:
npm ERR! gyp ERR! find VS
npm ERR! gyp ERR! find VS **************************************************************
npm ERR! gyp ERR! find VS You need to install the latest version of Visual Studio
npm ERR! gyp ERR! find VS including the "Desktop development with C++" workload.
npm ERR! gyp ERR! find VS For more information consult the documentation at:
npm ERR! gyp ERR! find VS https://github.com/nodejs/node-gyp#on-windows
npm ERR! gyp ERR! find VS **************************************************************
npm ERR! gyp ERR! find VS
npm ERR! gyp ERR! configure error
npm ERR! gyp ERR! stack Error: Could not find any Visual Studio installation to use
npm ERR! gyp ERR! stack at VisualStudioFinder.fail (C:\Users\danie\OneDrive\Desktop\science fair\node_modules\node-gyp\lib\find-visualstudio.js:121:47)
npm ERR! gyp ERR! stack at C:\Users\danie\OneDrive\Desktop\science fair\node_modules\node-gyp\lib\find-visualstudio.js:74:16
npm ERR! gyp ERR! stack at VisualStudioFinder.findVisualStudio2013 (C:\Users\danie\OneDrive\Desktop\science fair\node_modules\node-gyp\lib\find-visualstudio.js:351:14)
npm ERR! gyp ERR! stack at C:\Users\danie\OneDrive\Desktop\science fair\node_modules\node-gyp\lib\find-visualstudio.js:70:14
npm ERR! gyp ERR! stack at C:\Users\danie\OneDrive\Desktop\science fair\node_modules\node-gyp\lib\find-visualstudio.js:372:16
npm ERR! gyp ERR! stack at C:\Users\danie\OneDrive\Desktop\science fair\node_modules\node-gyp\lib\util.js:54:7
npm ERR! gyp ERR! stack at C:\Users\danie\OneDrive\Desktop\science fair\node_modules\node-gyp\lib\util.js:33:16
npm ERR! gyp ERR! stack at ChildProcess.exithandler (node:child_process:410:5)
npm ERR! gyp ERR! stack at ChildProcess.emit (node:events:513:28)
npm ERR! gyp ERR! stack at maybeClose (node:internal/child_process:1100:16)
npm ERR! gyp ERR! System Windows_NT 10.0.19044
npm ERR! gyp ERR! command "C:\\Program Files\\nodejs\\node.exe" "C:\\Users\\danie\\OneDrive\\Desktop\\science fair\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild"
npm ERR! gyp ERR! cwd C:\Users\danie\OneDrive\Desktop\science fair\node_modules\gl
npm ERR! gyp ERR! node -v v16.18.0
npm ERR! gyp ERR! node-gyp -v v7.1.2
npm ERR! gyp ERR! not ok
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\danie\AppData\Local\npm-cache\_logs\2022-10-16T23_54_51_958Z-debug-0.log
```
I tried upwards of 20 solutions on this site and none worked for me. Does anyone have any ideas? I tried npm update, reinstalling npm, reinstalling node, etc. I believe it has something to do with Visual Studio, as in this very similar question ([How can I fix "npm ERR! code1"](https://stackoverflow.com/questions/72033205/how-can-i-fix-npm-err-code1)); however, I don't understand what the answer there means or what I should do.
EDIT: Forgot to mention I have Visual Studio 2022 already installed on this device, but it is on my D: drive. I didn't think that was the problem but I am unsure.
|
2022/10/17
|
[
"https://Stackoverflow.com/questions/74091484",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17289555/"
] |
You wrote that you need a *"function that sorts a List of integers"*, but in your code you pass a list **of lists** to `binsort`. That only makes sense if each sublist represents the digits of a number, and it stops making sense once those "digits" are no longer single digits. Either:
* you sort a list of integers, like `[142, 642, 141]`, or
* you sort a list of lists of **digits**, like `[[1,4,2], [6,4,2], [1,4,1]]`, where each sublist is a digit-by-digit representation of an integer.
Mixing both representations of integers makes little sense.
Here is how you would do it when using the first representation:
```
def binsort(a):
    if not a:  # boundary case: list is empty
        return a
    bins = [[], a]
    power = 1
    while len(bins[0]) < len(a):  # while there could be more digits to binsort by
        binsTwo = [[] for _ in range(10)]
        for bin in bins:
            # distribute the numbers into the new bins according to the i-th digit (power)
            for num in bin:
                binsTwo[num // power % 10].append(num)
        # prepare for extracting the next digit
        power *= 10
        bins = binsTwo
    return bins[0]
```
If you use the second representation (list of lists of **single** digits), then your code is fine, provided you add a check for an empty input list, which is a boundary case.
```
def binsort(a):
    if not a:  # boundary case: list is empty
        return a
    bins = [a]
    for i in range(len(a[0]) - 1, -1, -1):  # for each digit, least significant first
        binsTwo = [[] for _ in range(10)]
        for bin in bins:
            # distribute the digit lists into the new bins according to the i-th digit
            for digits in bin:
                binsTwo[digits[i]].append(digits)
        # prepare for extracting the next digit
        bins = binsTwo
    return [digits for bin in bins for digits in bin]
```
But again, you should not try to "solve" the case where you have sublists that consist of multi-digit numbers. Reconsider why you think you need support for that, because it seems like you have mixed two input formats -- as I explained above.
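If you do receive the second format but actually want the first, a small conversion sketch:
```
def digits_to_int(digits):
    n = 0
    for d in digits:
        n = n * 10 + d
    return n

assert digits_to_int([1, 4, 2]) == 142
```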
|
```
def bucketsort(a):
    # Create empty buckets
    buckets = [[] for _ in range(10)]
    # Sort into buckets
    for i in a:
        buckets[i].append(i)
    # Flatten buckets
    a.clear()
    for bucket in buckets:
        a.extend(bucket)
    return a
```
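Note that this sketch assumes every value is itself a valid bucket index (0 through 9); for example:
```
print(bucketsort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```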
|
26,709,731
|
I'm writing a Google App Engine project in Python with Flask. Here's my directory structure for a Hello, World! app (contents of third-party libraries omitted for brevity):
```
project_root/
    flask/
    jinja2/
    markupsafe/
    myapp/
        __init__.py
    simplejson/
    werkzeug/
    app.yaml
    itsdangerous.py
    main.py
```
Here's `main.py`:
```
from google.appengine.ext.webapp.util import run_wsgi_app
from myapp import app

run_wsgi_app(app)
```
And `myapp/__init__.py`:
```
from flask import Flask

app = Flask("myapp")

@app.route("/")
def hello():
    return "Hello, World!"
```
Since Flask has so many dependencies and sub-dependencies, I thought it would be nice to tidy up the directory structure by putting all the third-party code in a subdirectory (say `project_root/lib`). Of course, then `sys.path` doesn't know where to find the libraries.
I've tried the solutions in [How do you modify sys.path in Google App Engine (Python)?](https://stackoverflow.com/questions/2354166/how-do-you-modify-sys-path-in-google-app-engine-python), but that doesn't seem to work. I've also tried changing `from flask import Flask` to `from lib/flask import Flask`, to no avail. Is there a good way to do this?
|
2014/11/03
|
[
"https://Stackoverflow.com/questions/26709731",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2752467/"
] |
Try adding the full path to your lib, e.g. `sys.path[0:0] = ['/home/user/project_root/lib']`.
|
I've actually run into this problem a number of times when writing my own Google App Engine® products before. My solution was to cat the files together to form one large Python file. The command to do this is:
```
cat *.py
```
Of course, you may need to fix up the import statements inside the individual files. This can be done with a simple sed command to search for import statements in the files that match file names used in the cat command. But this is a relatively simple command compared to actually creating the huge python file (HPF, for short).
Another advantage of combining them all is increased speed: Because python doesn't need to go loading a whole bunch of useless tiny files all the time, it will run much faster. This means that the users of your Google App Engine® product will not need to wait for crazy long file load latency.
Of course, updating the libraries can be tricky. I recommend keeping the directory tree you used to cat the files together in order to easily update things in the future. All you need to do is copy the new files in and re-run the cat and sed commands. Think of it like pre-compiling your python library.
All the best!
-Milo
|
26,709,731
|
I'm writing a Google App Engine project in Python with Flask. Here's my directory structure for a Hello, World! app (contents of third-party libraries omitted for brevity):
```
project_root/
    flask/
    jinja2/
    markupsafe/
    myapp/
        __init__.py
    simplejson/
    werkzeug/
    app.yaml
    itsdangerous.py
    main.py
```
Here's `main.py`:
```
from google.appengine.ext.webapp.util import run_wsgi_app
from myapp import app

run_wsgi_app(app)
```
And `myapp/__init__.py`:
```
from flask import Flask

app = Flask("myapp")

@app.route("/")
def hello():
    return "Hello, World!"
```
Since Flask has so many dependencies and sub-dependencies, I thought it would be nice to tidy up the directory structure by putting all the third-party code in a subdirectory (say `project_root/lib`). Of course, then `sys.path` doesn't know where to find the libraries.
I've tried the solutions in [How do you modify sys.path in Google App Engine (Python)?](https://stackoverflow.com/questions/2354166/how-do-you-modify-sys-path-in-google-app-engine-python), but that doesn't seem to work. I've also tried changing `from flask import Flask` to `from lib/flask import Flask`, to no avail. Is there a good way to do this?
|
2014/11/03
|
[
"https://Stackoverflow.com/questions/26709731",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2752467/"
] |
Look at defining your path manipulation in appengine\_config.py. This means any path manipulation only has to be performed once.
Then move your third-party files into a `lib` directory, as per your suggestion.
Use the relative path 'lib', either by prepending it to sys.path:
```
sys.path.insert(0,'./lib')
```
or use
```
import site
site.addsitedir("./lib")
```
Do not use absolute paths, as they won't be the same when you deploy your code.
Putting your lib first can sometimes be useful, especially if you don't want the Google-supplied version of webob.
|
I've actually run into this problem a number of times when writing my own Google App Engine® products before. My solution was to cat the files together to form one large Python file. The command to do this is:
```
cat *.py
```
Of course, you may need to fix up the import statements inside the individual files. This can be done with a simple sed command to search for import statements in the files that match file names used in the cat command. But this is a relatively simple command compared to actually creating the huge python file (HPF, for short).
Another advantage of combining them all is increased speed: Because python doesn't need to go loading a whole bunch of useless tiny files all the time, it will run much faster. This means that the users of your Google App Engine® product will not need to wait for crazy long file load latency.
Of course, updating the libraries can be tricky. I recommend keeping the directory tree you used to cat the files together in order to easily update things in the future. All you need to do is copy the new files in and re-run the cat and sed commands. Think of it like pre-compiling your python library.
All the best!
-Milo
|
26,709,731
|
I'm writing a Google App Engine project in Python with Flask. Here's my directory structure for a Hello, World! app (contents of third-party libraries omitted for brevity):
```
project_root/
    flask/
    jinja2/
    markupsafe/
    myapp/
        __init__.py
    simplejson/
    werkzeug/
    app.yaml
    itsdangerous.py
    main.py
```
Here's `main.py`:
```
from google.appengine.ext.webapp.util import run_wsgi_app
from myapp import app

run_wsgi_app(app)
```
And `myapp/__init__.py`:
```
from flask import Flask

app = Flask("myapp")

@app.route("/")
def hello():
    return "Hello, World!"
```
Since Flask has so many dependencies and sub-dependencies, I thought it would be nice to tidy up the directory structure by putting all the third-party code in a subdirectory (say `project_root/lib`). Of course, then `sys.path` doesn't know where to find the libraries.
I've tried the solutions in [How do you modify sys.path in Google App Engine (Python)?](https://stackoverflow.com/questions/2354166/how-do-you-modify-sys-path-in-google-app-engine-python), but that doesn't seem to work. I've also tried changing `from flask import Flask` to `from lib/flask import Flask`, to no avail. Is there a good way to do this?
|
2014/11/03
|
[
"https://Stackoverflow.com/questions/26709731",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2752467/"
] |
Add a Python file called appengine\_config.py with the following content:
```
import sys
import os.path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'lib'))
```
As Tim Hoffman has mentioned, appengine\_config.py is loaded once when a new instance is started.
Now you can do what you intended: add all the third-party libraries to the lib folder.
|
I've actually run into this problem a number of times when writing my own Google App Engine® products before. My solution was to cat the files together to form one large Python file. The command to do this is:
```
cat *.py
```
Of course, you may need to fix up the import statements inside the individual files. This can be done with a simple sed command to search for import statements in the files that match file names used in the cat command. But this is a relatively simple command compared to actually creating the huge python file (HPF, for short).
Another advantage of combining them all is increased speed: Because python doesn't need to go loading a whole bunch of useless tiny files all the time, it will run much faster. This means that the users of your Google App Engine® product will not need to wait for crazy long file load latency.
Of course, updating the libraries can be tricky. I recommend keeping the directory tree you used to cat the files together in order to easily update things in the future. All you need to do is copy the new files in and re-run the cat and sed commands. Think of it like pre-compiling your python library.
All the best!
-Milo
|
26,709,731
|
I'm writing a Google App Engine project in Python with Flask. Here's my directory structure for a Hello, World! app (contents of third-party libraries omitted for brevity):
```
project_root/
    flask/
    jinja2/
    markupsafe/
    myapp/
        __init__.py
    simplejson/
    werkzeug/
    app.yaml
    itsdangerous.py
    main.py
```
Here's `main.py`:
```
from google.appengine.ext.webapp.util import run_wsgi_app
from myapp import app

run_wsgi_app(app)
```
And `myapp/__init__.py`:
```
from flask import Flask

app = Flask("myapp")

@app.route("/")
def hello():
    return "Hello, World!"
```
Since Flask has so many dependencies and sub-dependencies, I thought it would be nice to tidy up the directory structure by putting all the third-party code in a subdirectory (say `project_root/lib`). Of course, then `sys.path` doesn't know where to find the libraries.
I've tried the solutions in [How do you modify sys.path in Google App Engine (Python)?](https://stackoverflow.com/questions/2354166/how-do-you-modify-sys-path-in-google-app-engine-python), but that doesn't seem to work. I've also tried changing `from flask import Flask` to `from lib/flask import Flask`, to no avail. Is there a good way to do this?
|
2014/11/03
|
[
"https://Stackoverflow.com/questions/26709731",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2752467/"
] |
Look at defining your path manipulation in appengine\_config.py. This means any path manipulation only has to be performed once.
Then move your third-party files into a `lib` directory, as per your suggestion.
Use the relative path 'lib', either by prepending it to sys.path:
```
sys.path.insert(0,'./lib')
```
or use
```
import site
site.addsitedir("./lib")
```
Do not use absolute paths, as they won't be the same when you deploy your code.
Putting your lib first can sometimes be useful, especially if you don't want the Google-supplied version of webob.
|
Try adding the full path to your lib, e.g. `sys.path[0:0] = ['/home/user/project_root/lib']`.
|
26,709,731
|
I'm writing a Google App Engine project in Python with Flask. Here's my directory structure for a Hello, World! app (contents of third-party libraries omitted for brevity):
```
project_root/
    flask/
    jinja2/
    markupsafe/
    myapp/
        __init__.py
    simplejson/
    werkzeug/
    app.yaml
    itsdangerous.py
    main.py
```
Here's `main.py`:
```
from google.appengine.ext.webapp.util import run_wsgi_app
from myapp import app

run_wsgi_app(app)
```
And `myapp/__init__.py`:
```
from flask import Flask

app = Flask("myapp")

@app.route("/")
def hello():
    return "Hello, World!"
```
Since Flask has so many dependencies and sub-dependencies, I thought it would be nice to tidy up the directory structure by putting all the third-party code in a subdirectory (say `project_root/lib`). Of course, then `sys.path` doesn't know where to find the libraries.
I've tried the solutions in [How do you modify sys.path in Google App Engine (Python)?](https://stackoverflow.com/questions/2354166/how-do-you-modify-sys-path-in-google-app-engine-python), but that doesn't seem to work. I've also tried changing `from flask import Flask` to `from lib/flask import Flask`, to no avail. Is there a good way to do this?
|
2014/11/03
|
[
"https://Stackoverflow.com/questions/26709731",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2752467/"
] |
Add a Python file called appengine\_config.py with the following content:
```
import sys
import os.path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'lib'))
```
As Tim Hoffman has mentioned, appengine\_config.py is loaded once when a new instance is started.
Now you can do what you intended: add all the third-party libraries to the lib folder.
|
Try adding the full path to your lib, e.g. `sys.path[0:0] = ['/home/user/project_root/lib']`.
|
22,752,521
|
I'm trying to set up an application web server using uWSGI + Nginx, which runs a Flask application that uses SQLAlchemy to communicate with a Postgres database.
When I make requests to the webserver, every other response will be a 500 error.
The error is:
```
Traceback (most recent call last):
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
    context)
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/default.py", line 388, in do_execute
    cursor.execute(statement, parameters)
psycopg2.OperationalError: SSL error: decryption failed or bad record mac

The above exception was the direct cause of the following exception:

sqlalchemy.exc.OperationalError: (OperationalError) SSL error: decryption failed or bad record mac
```
The error is triggered by a simple `Flask-SQLAlchemy` method:
```
result = models.Event.query.get(id)
```
---
`uwsgi` is being managed by `supervisor`, which has a config:
```
[program:my_app]
command=/usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/myapp.ini --catch-exceptions
directory=/path/to/my/app
stopsignal=QUIT
autostart=true
autorestart=true
```
and `uwsgi`'s config looks like:
```
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
```
The furthest that I can get is that it has something to do with uwsgi's forking. But beyond that I'm not clear on what needs to be done.
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22752521",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1097920/"
] |
The issue ended up being uwsgi's forking.
When working with multiple processes and a master process, uwsgi initializes the application in the master process and then copies it into each worker process. The problem is that if you open a database connection while initializing your application, the worker processes end up sharing the same connection, which causes the error above.
The solution is to set the `lazy` [configuration option for uwsgi](http://uwsgi-docs.readthedocs.org/en/latest/Options.html), which forces a complete loading of the application in each process:
>
> `lazy`
>
>
> Set lazy mode (load apps in workers instead of master).
>
>
> This option may have memory usage implications as Copy-on-Write semantics can not be used. When lazy is enabled, only workers will be reloaded by uWSGI’s reload signals; the master will remain alive. As such, uWSGI configuration changes are not picked up on reload by the master.
>
>
>
There's also a `lazy-apps` option:
>
> `lazy-apps`
>
>
> Load apps in each worker instead of the master.
>
>
> This option may have memory usage implications as Copy-on-Write semantics can not be used. Unlike lazy, this only affects the way applications are loaded, not master’s behavior on reload.
>
>
>
This uwsgi configuration ended up working for me:
```
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
# the fix
lazy = true
lazy-apps = true
```
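If lazy loading is not an option, another pattern sometimes used with uWSGI's Python plugin is to reset the connection pool in each worker right after the fork, via the `postfork` hook; a sketch (the `myapp.engine` import is illustrative):
```
from uwsgidecorators import postfork
from myapp import engine  # hypothetical module exposing the SQLAlchemy engine

@postfork
def reset_db_pool():
    # drop connections inherited from the master; workers reconnect on demand
    engine.dispose()
```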
|
As an alternative, you might dispose of the engine. This is how I solved the problem.
Such issues may happen if there is a query during the creation of the app, that is, in the module that creates the app itself. If that happens, the engine allocates a pool of connections and then uwsgi forks.
By invoking `engine.dispose()`, the connection pool itself is closed, and new connections will come up as soon as someone starts making queries again. So if you do that at the end of the module where you create your app, the new connections will be created after the uWSGI fork.
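A minimal sketch of that ordering (the DSN and query are illustrative):
```
from sqlalchemy import create_engine

engine = create_engine('postgresql:///mydb')  # hypothetical DSN

# a module-level query during app creation warms the pool...
engine.execute('SELECT 1')

# ...so close the pool at the end of the module, before uwsgi forks the workers
engine.dispose()
```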
|
22,752,521
|
I'm trying to set up an application web server using uWSGI + Nginx, which runs a Flask application that uses SQLAlchemy to communicate with a Postgres database.
When I make requests to the webserver, every other response will be a 500 error.
The error is:
```
Traceback (most recent call last):
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
    context)
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/default.py", line 388, in do_execute
    cursor.execute(statement, parameters)
psycopg2.OperationalError: SSL error: decryption failed or bad record mac

The above exception was the direct cause of the following exception:

sqlalchemy.exc.OperationalError: (OperationalError) SSL error: decryption failed or bad record mac
```
The error is triggered by a simple `Flask-SQLAlchemy` method:
```
result = models.Event.query.get(id)
```
---
`uwsgi` is being managed by `supervisor`, which has a config:
```
[program:my_app]
command=/usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/myapp.ini --catch-exceptions
directory=/path/to/my/app
stopsignal=QUIT
autostart=true
autorestart=true
```
and `uwsgi`'s config looks like:
```
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
```
The furthest that I can get is that it has something to do with uwsgi's forking. But beyond that I'm not clear on what needs to be done.
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22752521",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1097920/"
] |
The issue ended up being uwsgi's forking.
When working with multiple processes and a master process, uwsgi initializes the application in the master process and then copies it into each worker process. The problem is that if you open a database connection while initializing your application, the worker processes end up sharing the same connection, which causes the error above.
The solution is to set the `lazy` [configuration option for uwsgi](http://uwsgi-docs.readthedocs.org/en/latest/Options.html), which forces a complete loading of the application in each process:
>
> `lazy`
>
>
> Set lazy mode (load apps in workers instead of master).
>
>
> This option may have memory usage implications as Copy-on-Write semantics can not be used. When lazy is enabled, only workers will be reloaded by uWSGI’s reload signals; the master will remain alive. As such, uWSGI configuration changes are not picked up on reload by the master.
>
>
>
There's also a `lazy-apps` option:
>
> `lazy-apps`
>
>
> Load apps in each worker instead of the master.
>
>
> This option may have memory usage implications as Copy-on-Write semantics can not be used. Unlike lazy, this only affects the way applications are loaded, not master’s behavior on reload.
>
>
>
This uwsgi configuration ended up working for me:
```
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
# the fix
lazy = true
lazy-apps = true
```
|
I am running a Flask app using gunicorn on Heroku. My application started exhibiting this problem when I added the `--preload` option to my Procfile. When I removed that option, my application resumed functioning as normal.
|
22,752,521
|
I'm trying to set up an application web server using uWSGI + Nginx, which runs a Flask application that uses SQLAlchemy to communicate with a Postgres database.
When I make requests to the webserver, every other response will be a 500 error.
The error is:
```
Traceback (most recent call last):
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
    context)
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/default.py", line 388, in do_execute
    cursor.execute(statement, parameters)
psycopg2.OperationalError: SSL error: decryption failed or bad record mac

The above exception was the direct cause of the following exception:

sqlalchemy.exc.OperationalError: (OperationalError) SSL error: decryption failed or bad record mac
```
The error is triggered by a simple `Flask-SQLAlchemy` method:
```
result = models.Event.query.get(id)
```
---
`uwsgi` is being managed by `supervisor`, which has a config:
```
[program:my_app]
command=/usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/myapp.ini --catch-exceptions
directory=/path/to/my/app
stopsignal=QUIT
autostart=true
autorestart=true
```
and `uwsgi`'s config looks like:
```
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
```
The furthest that I can get is that it has something to do with uwsgi's forking. But beyond that I'm not clear on what needs to be done.
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22752521",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1097920/"
] |
The issue ended up being uwsgi's forking.
When working with multiple processes and a master process, uwsgi initializes the application in the master process and then copies it into each worker process. The problem is that if you open a database connection while initializing your application, the worker processes end up sharing the same connection, which causes the error above.
The solution is to set the `lazy` [configuration option for uwsgi](http://uwsgi-docs.readthedocs.org/en/latest/Options.html), which forces a complete loading of the application in each process:
>
> `lazy`
>
>
> Set lazy mode (load apps in workers instead of master).
>
>
> This option may have memory usage implications as Copy-on-Write semantics can not be used. When lazy is enabled, only workers will be reloaded by uWSGI’s reload signals; the master will remain alive. As such, uWSGI configuration changes are not picked up on reload by the master.
>
>
>
There's also a `lazy-apps` option:
>
> `lazy-apps`
>
>
> Load apps in each worker instead of the master.
>
>
> This option may have memory usage implications as Copy-on-Write semantics can not be used. Unlike lazy, this only affects the way applications are loaded, not master’s behavior on reload.
>
>
>
This uwsgi configuration ended up working for me:
```
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
# the fix
lazy = true
lazy-apps = true
```
|
Not sure whether to add this as an answer to this question or ask a separate question and put this as an answer there. I was getting this exact same error for reasons slightly different from those of the people who have posted and answered. In my setup, I was using gunicorn as a WSGI server for a Flask application. In this application, I was offloading some intense database operations to a celery worker. The error would come from the celery worker.
From reading a lot of the answers here and looking at the psycopg2 and SQLAlchemy session documentation, it became apparent to me that it is a bad idea to share an SQLAlchemy session between separate processes (the gunicorn worker and the celery worker in my case).
What ended up solving this for me was creating a new session in the celery worker function, so that it used a new session each time it was called, and also destroying the session after every web request, so that Flask used one session per request. The overall solution looked like this:
`Flask_app.py`
```
@app.teardown_appcontext
def shutdown_session(exception=None):
    session.close()
```
`celery_func.py`
```
@celery_app.task(bind=True, throws=(IntegrityError))
def access_db(self, entity_dict, tablename):
    with Session() as session:
        try:
            session.add(ORM_obj)
            session.commit()
        except IntegrityError as e:
            session.rollback()
            print('primary key violated')
            raise e
```
|
22,752,521
|
I'm trying to set up an application web server using uWSGI + Nginx, which runs a Flask application that uses SQLAlchemy to communicate with a Postgres database.
When I make requests to the webserver, every other response will be a 500 error.
The error is:
```
Traceback (most recent call last):
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
    context)
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/default.py", line 388, in do_execute
    cursor.execute(statement, parameters)
psycopg2.OperationalError: SSL error: decryption failed or bad record mac

The above exception was the direct cause of the following exception:

sqlalchemy.exc.OperationalError: (OperationalError) SSL error: decryption failed or bad record mac
```
The error is triggered by a simple `Flask-SQLAlchemy` method:
```
result = models.Event.query.get(id)
```
---
`uwsgi` is being managed by `supervisor`, which has a config:
```
[program:my_app]
command=/usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/myapp.ini --catch-exceptions
directory=/path/to/my/app
stopsignal=QUIT
autostart=true
autorestart=true
```
and `uwsgi`'s config looks like:
```
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
```
The furthest that I can get is that it has something to do with uwsgi's forking. But beyond that I'm not clear on what needs to be done.
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22752521",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1097920/"
] |
As an alternative, you might dispose of the engine. This is how I solved the problem.
Such issues may happen if there is a query during the creation of the app, that is, in the module that creates the app itself. If that happens, the engine allocates a pool of connections and then uwsgi forks.
By invoking `engine.dispose()`, the connection pool itself is closed, and new connections will come up as soon as someone starts making queries again. So if you do that at the end of the module where you create your app, the new connections will be created after the uWSGI fork.
|
I am running a Flask app using gunicorn on Heroku. My application started exhibiting this problem when I added the `--preload` option to my Procfile. When I removed that option, my application resumed functioning as normal.
|
22,752,521
|
I'm trying to set up an application web server using uWSGI + Nginx, which runs a Flask application that uses SQLAlchemy to communicate with a Postgres database.
When I make requests to the webserver, every other response will be a 500 error.
The error is:
```
Traceback (most recent call last):
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
    context)
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/default.py", line 388, in do_execute
    cursor.execute(statement, parameters)
psycopg2.OperationalError: SSL error: decryption failed or bad record mac

The above exception was the direct cause of the following exception:

sqlalchemy.exc.OperationalError: (OperationalError) SSL error: decryption failed or bad record mac
```
The error is triggered by a simple `Flask-SQLAlchemy` method:
```
result = models.Event.query.get(id)
```
---
`uwsgi` is being managed by `supervisor`, which has a config:
```
[program:my_app]
command=/usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/myapp.ini --catch-exceptions
directory=/path/to/my/app
stopsignal=QUIT
autostart=true
autorestart=true
```
and `uwsgi`'s config looks like:
```
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
```
The furthest that I can get is that it has something to do with uwsgi's forking. But beyond that I'm not clear on what needs to be done.
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22752521",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1097920/"
] |
As an alternative, you might dispose of the engine. This is how I solved the problem.
Such issues may happen if there is a query during the creation of the app, that is, in the module that creates the app itself. If that happens, the engine allocates a pool of connections and then uwsgi forks.
By invoking `engine.dispose()`, the connection pool itself is closed, and new connections will come up as soon as someone starts making queries again. So if you do that at the end of the module where you create your app, the new connections will be created after the uWSGI fork.
|
Not sure whether to add this as an answer to this question or ask a separate question and put this as an answer there. I was getting this exact same error for reasons slightly different from those of the people who have posted and answered. In my setup, I was using gunicorn as a WSGI server for a Flask application. In this application, I was offloading some intense database operations to a celery worker. The error would come from the celery worker.
From reading a lot of the answers here and looking at the psycopg2 and SQLAlchemy session documentation, it became apparent to me that it is a bad idea to share an SQLAlchemy session between separate processes (the gunicorn worker and the celery worker in my case).
What ended up solving this for me was creating a new session in the celery worker function, so that it used a new session each time it was called, and also destroying the session after every web request, so that Flask used one session per request. The overall solution looked like this:
`Flask_app.py`
```
@app.teardown_appcontext
def shutdown_session(exception=None):
    session.close()
```
`celery_func.py`
```
@celery_app.task(bind=True, throws=(IntegrityError))
def access_db(self, entity_dict, tablename):
    with Session() as session:
        try:
            session.add(ORM_obj)
            session.commit()
        except IntegrityError as e:
            session.rollback()
            print('primary key violated')
            raise e
```
|
22,752,521
|
I'm trying to set up an application web server using uWSGI + Nginx, which runs a Flask application that uses SQLAlchemy to communicate with a Postgres database.
When I make requests to the webserver, every other response will be a 500 error.
The error is:
```
Traceback (most recent call last):
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
    context)
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/default.py", line 388, in do_execute
    cursor.execute(statement, parameters)
psycopg2.OperationalError: SSL error: decryption failed or bad record mac

The above exception was the direct cause of the following exception:

sqlalchemy.exc.OperationalError: (OperationalError) SSL error: decryption failed or bad record mac
```
The error is triggered by a simple `Flask-SQLAlchemy` method:
```
result = models.Event.query.get(id)
```
---
`uwsgi` is being managed by `supervisor`, which has a config:
```
[program:my_app]
command=/usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/myapp.ini --catch-exceptions
directory=/path/to/my/app
stopsignal=QUIT
autostart=true
autorestart=true
```
and `uwsgi`'s config looks like:
```
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
```
The furthest that I can get is that it has something to do with uwsgi's forking. But beyond that I'm not clear on what needs to be done.
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22752521",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1097920/"
] |
I am running a Flask app using gunicorn on Heroku. My application started exhibiting this problem when I added the `--preload` option to my Procfile. When I removed that option, my application resumed functioning as normal.
|
Not sure whether to add this as an answer to this question or ask a separate question and put this as an answer there. I was getting this exact same error for reasons slightly different from those of the people who have posted and answered. In my setup, I was using gunicorn as a WSGI server for a Flask application. In this application, I was offloading some intense database operations to a celery worker. The error would come from the celery worker.
From reading a lot of the answers here and looking at the psycopg2 and SQLAlchemy session documentation, it became apparent to me that it is a bad idea to share an SQLAlchemy session between separate processes (the gunicorn worker and the celery worker in my case).
What ended up solving this for me was creating a new session in the celery worker function, so that it used a new session each time it was called, and also destroying the session after every web request, so that Flask used one session per request. The overall solution looked like this:
`Flask_app.py`
```
@app.teardown_appcontext
def shutdown_session(exception=None):
    session.close()
```
`celery_func.py`
```
@celery_app.task(bind=True, throws=(IntegrityError))
def access_db(self, entity_dict, tablename):
    with Session() as session:
        try:
            session.add(ORM_obj)
            session.commit()
        except IntegrityError as e:
            session.rollback()
            print('primary key violated')
            raise e
```
|
28,511,485
|
I am not super experienced in writing code. Could someone help me out?
I am trying to write some simple Python to work with some GPS coordinates. Basically I want it to check whether the GPS position is within plus or minus ten feet of a point, and print something if that is true. I think I have that part figured out, but I am not sure about this part: once the GPS matches the point, I want it to print "I am here" only once, until the next time the condition becomes true.
Basically: set a 10-foot bubble around a GPS coordinate; if I walk into that bubble, print "I am here" just once, until I walk out and re-enter the bubble. Does that make sense?
```
if "point_lat-10feet" <= gps_lat <= "point_lat+10" and "point_log-10" <= gps_log <= "point_log+10"
    Print "You are at point 1"
```
Update
======
Finally had some time to work on this project; here is how it ended up.
I used the utm 0.4.0 package to convert the latitude and longitude to UTM northing and easting.
```
north, east, zone, band = utm.from_latlon(gpsc.fix.latitude, gpsc.fix.longitude)
northgoal, eastgoal, heading = (11111, 22222, 90)
# +- 50 degrees of heading tolerance
headerror = 50
# bubble size 10 m
pointsize = 10
flag = False
if ((north - northgoal) ** 2) + ((east - eastgoal) ** 2) <= (pointsize ** 2) and ((heading - headerror) % 360) <= gpsc.fix.track <= ((heading + headerror) % 360):
    if not flag:
        print "Arriving at point 1", north, east, gpsc.fix.track, gpsc.fix.speed
        flag = True
    else:
        print "Still inside point 1"
else:
    # print "Outside point 1"
    flag = False
```
I have this in a while loop with a few other points. There is probably a better way, but hey, this works.
Thanks again for all the help!
|
2015/02/14
|
[
"https://Stackoverflow.com/questions/28511485",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2798178/"
] |
Just wrap it in a condition check within the if.
Not language specific, but the general idea
```
if <in coord condition>
    if <not printed condition>
        print
        set print condition
else
    clear print condition
```
|
If your syntax &c were corrected (those strings are definitely wrong) you'd be checking for a square, not a circle.
Assuming `point_lat`, `gps_lat`, &c, are all numbers (not strings), and expressed in feet (and assuming `_log` is a typo for `_lon`, standing for "longitude"), you'd need:
```
import math

if math.hypot(point_lat - gps_lat, point_lon - gps_lon) < 10:
    print("You are at point 1")
```
`math.hypot` applies Pythagoras' theorem to compute the hypotenuse of the right-angled triangle with the two given sides -- which also happens to be the distance between the two points. "The two points are less than 10 feet apart" defines a **circle**, as desired.
If the latitudes and longitudes are not numbers measured in feet, you'll need to make them such, e.g. converting from strings to `float` and/or using the appropriate scale factors.
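One rough way to do that conversion for small deltas is an equirectangular approximation (the constant below is approximate and the whole approach is only a sketch; proper projection, as in the other answers, is more accurate):
```
import math

FEET_PER_DEGREE = 364000  # roughly feet per degree of latitude

def delta_feet(point_lat, point_lon, gps_lat, gps_lon):
    dy = (gps_lat - point_lat) * FEET_PER_DEGREE
    dx = (gps_lon - point_lon) * FEET_PER_DEGREE * math.cos(math.radians(point_lat))
    return math.hypot(dx, dy)
```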
|
28,511,485
|
I am not super experienced in writing code. Could someone help me out?
I am trying to write some simple Python to work with some GPS coordinates. Basically I want it to check whether the GPS position is within plus or minus ten feet of a point, and print something if that is true. I think I have that part figured out, but I am not sure about this part: once the GPS matches the point, I want it to print "I am here" only once, until the next time the condition becomes true.
Basically: set a 10-foot bubble around a GPS coordinate; if I walk into that bubble, print "I am here" just once, until I walk out and re-enter the bubble. Does that make sense?
```
if "point_lat-10feet" <= gps_lat <= "point_lat+10" and "point_log-10" <= gps_log <= "point_log+10"
    Print "You are at point 1"
```
Update
======
Finally had some time to work on this project; here is how it ended up.
I used the utm 0.4.0 package to convert the latitude and longitude to UTM northing and easting.
```
north, east, zone, band = utm.from_latlon(gpsc.fix.latitude, gpsc.fix.longitude)
northgoal, eastgoal, heading = (11111, 22222, 90)
# +- 50 degrees of heading tolerance
headerror = 50
# bubble size 10 m
pointsize = 10
flag = False
if ((north - northgoal) ** 2) + ((east - eastgoal) ** 2) <= (pointsize ** 2) and ((heading - headerror) % 360) <= gpsc.fix.track <= ((heading + headerror) % 360):
    if not flag:
        print "Arriving at point 1", north, east, gpsc.fix.track, gpsc.fix.speed
        flag = True
    else:
        print "Still inside point 1"
else:
    # print "Outside point 1"
    flag = False
```
I have this in a while loop with a few other points. There is probably a better way, but hey, this works.
Thanks again for all the help!
|
2015/02/14
|
[
"https://Stackoverflow.com/questions/28511485",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2798178/"
] |
First, you need to revise your check. If you want to check if you are within a 10-foot radius bubble, you need to use Pythagoras, otherwise you are checking if you are within a 20x20-foot square. For example, if you are 9 feet away from the point in both latitude and longitude, that means you are actually 12.73 feet away from your target point, which is outside your 10-foot bubble, but your code will produce a false positive.
The formula for a circle is `x^2 + y^2 = r^2`, where `x` and `y` are deltas along orthogonal axes, and `r` is your radius (10 feet.)
However, keep in mind that GPS coordinates (I assume we are talking geodetic, since you mention lat and long) are given in degrees, since geodetic coordinates are spherical. So, you will need to project your position as well as the target position into a Cartesian system such as UTM. Then you can revise your check as follows:
`if (x - target_x) ** 2 + (y - target_y) ** 2 <= 3.048 ** 2`
Where `x` is your X coordinate, `target_x` is the target X coordinate, etc. The value 3.048 is 10 feet expressed in meters. I used meters because UTM coordinates are always expressed in meters.
Next, if I understand you correctly, you only want to print a message once you've entered the bubble, and only print it again if you leave the bubble and re-enter. In that case, you simply need a flag that you set and unset as you enter and leave the desired area, respectively. Robert has it right, so you would have something like this:
```
# Initialize flag here once
printFlag = False

...

# Check if we're in the target area
if (x - target_x) ** 2 + (y - target_y) ** 2 <= 3.048 ** 2:
    if not printFlag:
        print "You are at point 1"
        printFlag = True
else:
    printFlag = False
```
This will print the message then, as long as you remain within the area, won't print the message again. However, there is one last thing to keep in mind about this: unassisted GPS is only good to about +/-3 meters, which is almost exactly equal to your radius. This means you have to be careful about how you perform your check, otherwise you will get a lot of "You are at point 1" messages when you get near the target area, as your apparent position can jump +/-3 meters in both axes, even if you're standing still. You may have to use a running average or something similar in order to improve your raw accuracy.
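A tiny sketch of such smoothing, averaging the last few fixes (the window size of 5 is arbitrary):
```
from collections import deque

recent = deque(maxlen=5)  # keep the last 5 (x, y) fixes

def smoothed(x, y):
    recent.append((x, y))
    xs, ys = zip(*recent)
    return sum(xs) / len(recent), sum(ys) / len(recent)
```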
|
Just wrap it in a condition check within the if.
Not language specific, but here's the general idea:
```
if <in coord condition>
if <not printed condition>
print
set print condition
else
clear print condition
```
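In Python that pattern might look like the sketch below, where `in_bubble()` is a stand-in for whatever coordinate condition you end up using:
```
printed = False                  # the "print condition"

while True:
    if in_bubble():              # <in coord condition>
        if not printed:          # <not printed condition>
            print("You are at point 1")
            printed = True       # set print condition
    else:
        printed = False          # clear print condition
```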
|
28,511,485
|
I am not super experienced in writing code. Could someone help me out?
I am trying to write some simple Python to work with some GPS coordinates. Basically I want it to check if the GPS is +/- ten feet from a point, then print if it is true. I think I got that part figured out, but I am not sure about this part: after the GPS equals the point, I want it to only print "I am here" once until the next time it is true.
Basically, set a 10-foot bubble around a GPS coordinate: if I walk into that bubble, print “I am here” just once until I walk out and re-enter the bubble. Does that make sense?
```
if "point_lat-10feet" <= gps_lat <= "point_lat+10" and "point_log-10" <= gps_log <= "point_log+10"
Print "You are at point 1"
```
Update
======
Finally had some time to work on this project; here is how it ended up.
I used the package utm 0.4.0 to convert the lat and long to UTM north and east.
```
north, east, zone, band = utm.from_latlon(gpsc.fix.latitude, gpsc.fix.longitude)
northgoal, eastgoal, heading = (11111, 22222, 90)
# +- 50 degree
headerror = 50
#bubble size 10m
pointsize = 10
flag = False
if ((north - northgoal) ** 2) + ((east - eastgoal) ** 2) <= (pointsize ** 2) and ((heading - headerror) % 360) <= gpsc.fix.track <= ((heading + headerror) % 360):
if not flag :
print "Arriving at point 1" , north, east, gpsc.fix.track, gpsc.fix.speed
flag = True
else:
print "Still inside point 1"
else:
# print "Outside point 1"
flag = False
```
I have this in a while loop with a few other points. There is probably a better way, but hey, this works.
Thanks again for all the help!
|
2015/02/14
|
[
"https://Stackoverflow.com/questions/28511485",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2798178/"
] |
Just wrap it in a condition check within the if.
Not language specific, but here's the general idea:
```
if <in coord condition>
if <not printed condition>
print
set print condition
else
clear print condition
```
|
A pragmatic solution would be to have a starting point and calculate the distance for each individual fix to see if it meets the condition.
For expediency, let's import **pyproj** to calculate the distance on a WGS-84 geoid.
```
#!/usr/bin/env python
""" banana """
from pyproj import Geod
def do_distance(lat1, lon1, lat2, lon2):
"""Simple, but accurate, distance"""
try:
geoid = Geod(ellps='WGS84')
bearing_to, bearing_fro, distance = geoid.inv(lon1, lat1, lon2, lat2)
distance *= 3.2808399 # because you wanted feet
if distance <= 10:
print('I am here')
else:
print('Still off {0:.0f} feet. Head {1:.0f}° from North'.format(distance, bearing_fro % 360))
except Exception as error:
print("Can't calculate because: ", error)
print(bearing_to % 360, bearing_fro % 360, distance)
lat1 = -15.559752
lon1 = -146.240726
lat2 = -15.561200
lon2 = -146.241407
dd = do_distance(lat1, lon1, lat2, lon2)
```
`lat1` and `lon1` would be your start, or *goal*, `lat2` and `lon2` would be the position to test if it meets condition.
|
28,511,485
|
I am not super experienced in writing code. Could someone help me out?
I am trying to write some simple Python to work with some GPS coordinates. Basically I want it to check if the GPS is +/- ten feet from a point, then print if it is true. I think I got that part figured out, but I am not sure about this part: after the GPS equals the point, I want it to only print "I am here" once until the next time it is true.
Basically, set a 10-foot bubble around a GPS coordinate: if I walk into that bubble, print “I am here” just once until I walk out and re-enter the bubble. Does that make sense?
```
if "point_lat-10feet" <= gps_lat <= "point_lat+10" and "point_log-10" <= gps_log <= "point_log+10"
Print "You are at point 1"
```
Update
======
Finally had some time to work on this project; here is how it ended up.
I used the package utm 0.4.0 to convert the lat and long to UTM north and east.
```
north, east, zone, band = utm.from_latlon(gpsc.fix.latitude, gpsc.fix.longitude)
northgoal, eastgoal, heading = (11111, 22222, 90)
# +- 50 degree
headerror = 50
#bubble size 10m
pointsize = 10
flag = False
if ((north - northgoal) ** 2) + ((east - eastgoal) ** 2) <= (pointsize ** 2) and ((heading - headerror) % 360) <= gpsc.fix.track <= ((heading + headerror) % 360):
if not flag :
print "Arriving at point 1" , north, east, gpsc.fix.track, gpsc.fix.speed
flag = True
else:
print "Still inside point 1"
else:
# print "Outside point 1"
flag = False
```
I have this in a while loop with a few other points. There is probably a better way, but hey, this works.
Thanks again for all the help!
|
2015/02/14
|
[
"https://Stackoverflow.com/questions/28511485",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2798178/"
] |
First, you need to revise your check. If you want to check if you are within a 10-foot radius bubble, you need to use Pythagoras, otherwise you are checking if you are within a 20x20-foot square. For example, if you are 9 feet away from the point in both latitude and longitude, that means you are actually 12.73 feet away from your target point, which is outside your 10-foot bubble, but your code will produce a false positive.
The formula for a circle is `x^2 + y^2 = r^2`, where `x` and `y` are deltas along orthogonal axes, and `r` is your radius (10 feet.)
However, keep in mind that GPS coordinates (I assume we are talking geodetic, since you mention lat and long) are given in degrees, since geodetic coordinates are spherical. So, you will need to project your position as well as the target position into a Cartesian system such as UTM. Then you can revise your check as follows:
`if (x - target_x) ** 2 + (y - target_y) ** 2 <= 3.048 ** 2`
Where `x` is your X coordinate, `target_x` is the target X coordinate, etc. The value 3.048 is 10 feet expressed in meters. I used meters because UTM coordinates are always expressed in meters.
Next, if I understand you correctly, you only want to print a message once you've entered the bubble, and only print it again if you leave the bubble and re-enter. In that case, you simply need a flag that you set and unset as you enter and leave the desired area, respectively. Robert has it right, so you would have something like this:
```
# Initialize flag here once
printFlag = False
...
# Check if we're in the target area
if (x - target_x) ** 2 + (y - target_y) ** 2 <= 3.048 ** 2:
if not printFlag :
print "You are at point 1"
printFlag = True
else:
printFlag = False
```
This will print the message once; then, as long as you remain within the area, it won't print the message again. However, there is one last thing to keep in mind about this: unassisted GPS is only good to about +/-3 meters, which is almost exactly equal to your radius. This means you have to be careful about how you perform your check, otherwise you will get a lot of "You are at point 1" messages when you get near the target area, as your apparent position can jump +/-3 meters in both axes, even if you're standing still. You may have to use a running average or something similar in order to improve your raw accuracy.
|
If your syntax &c were corrected (those strings are definitely wrong) you'd be checking for a square, not a circle.
Assuming `point_lat`, `gps_lat`, &c, are all numbers (not strings), and expressed in feet (and assuming `_log` is a typo for `_lon`, standing for "longitude"), you'd need:
```
import math
if math.hypot(point_lat - gps_lat, point_lon - gps_lon) < 10:
print("You are at point 1")
```
`math.hypot` applies Pythagoras' theorem to compute the hypotenuse of the right-angled triangle with the given other two sides -- which also happens to be the distance between the two points. "The two points are less than 10 feet apart" defines a **circle**, as desired.
If the latitudes and longitudes are not numbers measured in feet, you'll need to make them such, e.g. converting from strings to `float` and/or using the appropriate scale factors.
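For example, one degree of latitude is roughly 364,000 feet, and a degree of longitude shrinks with the cosine of the latitude. A rough sketch of such a conversion (an approximation that is fine at 10-foot scales, not for precision work):
```
import math

FEET_PER_DEG_LAT = 364000.0  # rough figure; varies slightly with latitude

def delta_feet(point_lat, point_lon, gps_lat, gps_lon):
    """Convert small lat/lon differences (degrees) into feet."""
    dy = (gps_lat - point_lat) * FEET_PER_DEG_LAT
    dx = (gps_lon - point_lon) * FEET_PER_DEG_LAT * math.cos(math.radians(point_lat))
    return dx, dy

dx, dy = delta_feet(point_lat, point_lon, gps_lat, gps_lon)
if math.hypot(dx, dy) < 10:
    print("You are at point 1")
```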
|
28,511,485
|
I am not super experienced in writing code. Could someone help me out?
I am trying to write some simple Python to work with some GPS coordinates. Basically I want it to check if the GPS is +/- ten feet from a point, then print if it is true. I think I got that part figured out, but I am not sure about this part: after the GPS equals the point, I want it to only print "I am here" once until the next time it is true.
Basically, set a 10-foot bubble around a GPS coordinate: if I walk into that bubble, print “I am here” just once until I walk out and re-enter the bubble. Does that make sense?
```
if "point_lat-10feet" <= gps_lat <= "point_lat+10" and "point_log-10" <= gps_log <= "point_log+10"
Print "You are at point 1"
```
Update
======
Finally had some time to work on this project; here is how it ended up.
I used the package utm 0.4.0 to convert the lat and long to UTM north and east.
```
north, east, zone, band = utm.from_latlon(gpsc.fix.latitude, gpsc.fix.longitude)
northgoal, eastgoal, heading = (11111, 22222, 90)
# +- 50 degree
headerror = 50
#bubble size 10m
pointsize = 10
flag = False
if ((north - northgoal) ** 2) + ((east - eastgoal) ** 2) <= (pointsize ** 2) and ((heading - headerror) % 360) <= gpsc.fix.track <= ((heading + headerror) % 360):
if not flag :
print "Arriving at point 1" , north, east, gpsc.fix.track, gpsc.fix.speed
flag = True
else:
print "Still inside point 1"
else:
# print "Outside point 1"
flag = False
```
I have this in a while loop with a few other points. There is probably a better way, but hey, this works.
Thanks again for all the help!
|
2015/02/14
|
[
"https://Stackoverflow.com/questions/28511485",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2798178/"
] |
First, you need to revise your check. If you want to check if you are within a 10-foot radius bubble, you need to use Pythagoras, otherwise you are checking if you are within a 20x20-foot square. For example, if you are 9 feet away from the point in both latitude and longitude, that means you are actually 12.73 feet away from your target point, which is outside your 10-foot bubble, but your code will produce a false positive.
The formula for a circle is `x^2 + y^2 = r^2`, where `x` and `y` are deltas along orthogonal axes, and `r` is your radius (10 feet.)
However, keep in mind that GPS coordinates (I assume we are talking geodetic, since you mention lat and long) are given in degrees, since geodetic coordinates are spherical. So, you will need to project your position as well as the target position into a Cartesian system such as UTM. Then you can revise your check as follows:
`if (x - target_x) ** 2 + (y - target_y) ** 2 <= 3.048 ** 2`
Where `x` is your X coordinate, `target_x` is the target X coordinate, etc. The value 3.048 is 10 feet expressed in meters. I used meters because UTM coordinates are always expressed in meters.
Next, if I understand you correctly, you only want to print a message once you've entered the bubble, and only print it again if you leave the bubble and re-enter. In that case, you simply need a flag that you set and unset as you enter and leave the desired area, respectively. Robert has it right, so you would have something like this:
```
# Initialize flag here once
printFlag = False
...
# Check if we're in the target area
if (x - target_x) ** 2 + (y - target_y) ** 2 <= 3.048 ** 2:
if not printFlag :
print "You are at point 1"
printFlag = True
else:
printFlag = False
```
This will print the message once; then, as long as you remain within the area, it won't print the message again. However, there is one last thing to keep in mind about this: unassisted GPS is only good to about +/-3 meters, which is almost exactly equal to your radius. This means you have to be careful about how you perform your check, otherwise you will get a lot of "You are at point 1" messages when you get near the target area, as your apparent position can jump +/-3 meters in both axes, even if you're standing still. You may have to use a running average or something similar in order to improve your raw accuracy.
|
A pragmatic solution would be to have a starting point and calculate the distance for each individual fix to see if it meets the condition.
For expediency, let's import **pyproj** to calculate the distance on a WGS-84 geoid.
```
#!/usr/bin/env python
""" banana """
from pyproj import Geod
def do_distance(lat1, lon1, lat2, lon2):
"""Simple, but accurate, distance"""
try:
geoid = Geod(ellps='WGS84')
bearing_to, bearing_fro, distance = geoid.inv(lon1, lat1, lon2, lat2)
distance *= 3.2808399 # because you wanted feet
if distance <= 10:
print('I am here')
else:
print('Still off {0:.0f} feet. Head {1:.0f}° from North'.format(distance, bearing_fro % 360))
except Exception as error:
print("Can't calculate because: ", error)
print(bearing_to % 360, bearing_fro % 360, distance)
lat1 = -15.559752
lon1 = -146.240726
lat2 = -15.561200
lon2 = -146.241407
dd = do_distance(lat1, lon1, lat2, lon2)
```
`lat1` and `lon1` would be your start, or *goal*, `lat2` and `lon2` would be the position to test if it meets condition.
|
32,884,277
|
The code is:
```
import sys
execfile('test.py')
```
In test.py I have:
```
import zipfile
with zipfile.ZipFile('test.jar', 'r') as z:
    z.extractall(r"C:\testfolder")  # raw string: "\t" would otherwise be a tab
```
This code produces:
```
AttributeError ( ZipFile instance has no attribute '__exit__' ) # edited
```
The code from "test.py" works when run from python idle.
I am running python v2.7.10
|
2015/10/01
|
[
"https://Stackoverflow.com/questions/32884277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3438538/"
] |
I wrote my code on Python 2.7, but when I put it on my server, which uses 2.6, I got this error:
`AttributeError: ZipFile instance has no attribute '__exit__'`
To solve this problem I used Sebastian's answer from this post:
[Making Python 2.7 code run with Python 2.6](https://stackoverflow.com/questions/21268470/making-python-2-7-code-run-with-python-2-6/21268586#21268586)
```
import contextlib
def unzip(source, target):
with contextlib.closing(zipfile.ZipFile(source , "r")) as z:
z.extractall(target)
print "Extracted : " + source + " to: " + target
```
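Called with the paths from the question (note the raw string, so `\t` isn't read as a tab):
```
unzip('test.jar', r'C:\testfolder')
```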
Like he said:
>
> contextlib.closing does exactly what the missing `__exit__` method on
> the ZipFile would be supposed to do. Namely, call the close method
>
>
>
|
According to the Python documentation, [`ZipFile.extractall()`](https://docs.python.org/2/library/zipfile.html#zipfile.ZipFile.extractall) was added in version 2.6, and `ZipFile` only became usable as a context manager (gaining `__exit__`) in 2.7. I expect that you'll find that you are running a different, older (pre-2.7) version of Python than the one IDLE is using. You can find out which version with this:
```
import sys
print sys.version
```
and the location of the running interpreter can be obtained with
```
print sys.executable
```
The title of your question supports the likelihood that an old version of Python is being executed, because the `with` statement/context managers (classes with an `__exit__()` method) were not introduced until 2.6 (well, 2.5 if explicitly enabled).
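Since `with zipfile.ZipFile(...)` relies on the context-manager support that `ZipFile` only gained in 2.7, a simple guard at the top of test.py makes such a mismatch obvious immediately; a minimal sketch:
```
import sys

# Fail fast if the interpreter is too old for ZipFile's __enter__/__exit__
if sys.version_info < (2, 7):
    sys.exit("Need Python 2.7+; this is %s" % sys.version.split()[0])
```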
|
41,991,602
|
I have been trying to make a program that runs over a dictionary and makes anagrams from an inputted string. I found most of the code for this problem here [Algorithm to generate anagrams](https://stackoverflow.com/questions/55210/algorithm-to-generate-anagrams), but I need the program to only output lines with at most a given number of words, instead of only using words of a set length (MIN\_WORD\_SIZE).
If the input was: python Anagram.py "Finding Anagrams" 3 dictionary.txt, the output would be "Gaming fans nadir", because it uses at most 3 words to create the anagram.
|
2017/02/01
|
[
"https://Stackoverflow.com/questions/41991602",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7503133/"
] |
This is a slight workaround; however, if you only want to *filter* the results down to those with no more words than a specified argument, you could make a change in your program's main.
Your program currently says:
```
for word in words.anagram(letters):
print word
```
You could change your program to say:
```
for word in words.anagram(letters):
if len(word.split()) <= int(argv[2]):
print word
```
Not the most elegant answer but, I hope this helps!
|
I don't usually like to write programs `for` programmers.
To the best of my understanding:
```
import random
word = "scramble"
scramble = list(word)
for i in range(len(scramble)):
char = scramble.pop(i)
scramble.insert(random.randint(0,len(word)),char)
print(''.join(scramble))
```
This will scramble the `word` variable.
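For what it's worth, the standard library already ships a uniform shuffle, so an equivalent and shorter version could be:
```
import random

word = "scramble"
letters = list(word)
random.shuffle(letters)   # uniform in-place shuffle
print(''.join(letters))
```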
Hope this helps!
Thanks :)
|
4,145,331
|
I am trying to call the cmd command "move" from Python.
```
cmd1 = ["move", spath , npath]
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
p = subprocess.Popen(cmd1, startupinfo=startupinfo)
```
While the command works in cmd (I can move files), with this Python code I get:
>
> WindowsError: [Error 2] The system
> cannot find the file specified
>
>
>
spath and npath are absolute paths to folders, so being in another directory should not matter.
**[edit]**
Responding to Tim's answer: then how do I move a folder?
|
2010/11/10
|
[
"https://Stackoverflow.com/questions/4145331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/354399/"
] |
`move` is built into the `cmd` shell, so it's not a file command that you can call this way.
You could use [`shutil.move()`](http://docs.python.org/library/shutil.html#shutil.move), but this "forgets" all alternate data streams, ACLs, etc.
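A minimal sketch with the question's variables (`shutil.move()` handles files and whole folders, which also answers the edit):
```
import shutil

shutil.move(spath, npath)  # spath/npath: the absolute paths from the question
```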
|
try to use `cmd1 = ["cmd", "/c", "move", spath, npath]`
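Spelled out with the rest of the question's setup, that might look like:
```
import subprocess

# "cmd /c" starts the shell, runs the built-in "move", then exits
cmd1 = ["cmd", "/c", "move", spath, npath]
p = subprocess.Popen(cmd1)
p.wait()
```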
|
74,053,822
|
I want to calculate QTD values from different columns based on months in a pandas dataframe.
Code:
```
import pandas as pd

data = {'month': ['April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December', 'January', 'February', 'March'],
'kpi': ['sales', 'sales quantity', 'sales', 'sales', 'sales', 'sales', 'sales', 'sales quantity', 'sales', 'sales', 'sales', 'sales'],
're': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
're3+9': [10, 20, 30, 40, 50, 60, 70, 80, 90, 10, 10, 20],
're6+6': [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60],
're9+3': [2, 4, 6, 8, 10, 12, 14, 16, 20, 10, 10, 20],
're_o' : [1, 1, 1, 11, 11, 11, 12, 12, 12, 13, 13, 13]
}
# Create DataFrame
df = pd.DataFrame(data)
g = pd.to_datetime(df['month'], format='%B').dt.to_period('Q')
if (df['month'].isin(['April', 'May', 'June'])):
df['Q-Total'] = df.groupby([g,'kpi'])['re'].cumsum()
elif (df['month'].isin(['July', 'August', 'September'])):
df['Q-Total'] = df.groupby([g, 'kpi'])['re3+9'].cumsum()
elif (df['month'].isin(['October', 'November', 'December'])):
df['Q-Total'] = df.groupby([g, 'kpi'])['re6+6'].cumsum()
elif (df['month'].isin(['January', 'February', 'March'])):
df['Q-Total'] = df.groupby([g, 'kpi'])['re9+3'].cumsum()
else:
print("zero")
```
My required output is given below:
```
month kpi re re3+9 re6+6 re9+3 re_o Q-Total
0 April sales 1 10 5 2 1 1
1 May sales quantity 2 20 10 4 1 2
2 June sales 3 30 15 6 1 4
3 July sales 4 40 20 8 11 40
4 August sales 5 50 25 10 11 90
5 September sales 6 60 30 12 11 150
6 October sales 7 70 35 14 12 35
7 November sales quantity 8 80 40 16 12 40
8 December sales 9 90 45 20 12 80
9 January sales 10 10 50 10 13 10
10 February sales 11 10 55 10 13 20
11 March sales 12 20 60 20 13 40
```
Here there are four columns, named re, re3+9, re6+6 and re9+3, for taking the cumulative sum values. I want to calculate the cumulative sum based on the conditions below:
1. If the months are April,May and June, cumulative sum will be taken only from column re
2. If the months are July,August and September , cumulative sum will be taken only from re3+9
3. If the months are October,November and December , cumulative sum will be taken only from re6+6
4. If the months are January,February and March,Cumulative sum will be taken only from re9+3
But I got the error below when I ran the code:
```
Traceback (most recent call last):
File "/home/a/p/s.py", line 54, in <module>
if (df['month'].isin(['April', 'May', 'June'])):
File "/home/a/anaconda3/envs/p/lib/python3.9/site-packages/pandas/core/generic.py", line 1527, in __nonzero__
raise ValueError(
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```
Can anyone suggest a solution to solve this issue?
|
2022/10/13
|
[
"https://Stackoverflow.com/questions/74053822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16134366/"
] |
Use [`numpy.select`](https://numpy.org/doc/stable/reference/generated/numpy.select.html) for the new column and then use [`GroupBy.cumsum`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumsum.html):
```
import numpy as np
import pandas as pd

g = pd.to_datetime(df['month'], format='%B').dt.to_period('Q')
m1 = df['month'].isin(['April', 'May', 'June'])
m2 = df['month'].isin(['July', 'August', 'September'])
m3 = df['month'].isin(['October', 'November', 'December'])
m4 = df['month'].isin(['January', 'February', 'March'])
df['Q-Total'] = np.select([m1, m2, m3, m4],
[df['re'], df['re3+9'], df['re6+6'], df['re9+3']], default=0)
df['Q-Total'] = df.groupby([g,'kpi'])['Q-Total'].cumsum()
print (df)
month kpi re re3+9 re6+6 re9+3 re_o Q-Total
0 April sales 1 10 5 2 1 1
1 May sales quantity 2 20 10 4 1 2
2 June sales 3 30 15 6 1 4
3 July sales 4 40 20 8 11 40
4 August sales 5 50 25 10 11 90
5 September sales 6 60 30 12 11 150
6 October sales 7 70 35 14 12 35
7 November sales quantity 8 80 40 16 12 40
8 December sales 9 90 45 20 12 80
9 January sales 10 10 50 10 13 10
10 February sales 11 10 55 10 13 20
11 March sales 12 20 60 20 13 40
```
|
You can use a dictionary to map the quarter to your columns, then [indexing lookup](https://pandas.pydata.org/docs/user_guide/indexing.html#indexing-lookup) and [`groupby.cumsum`](https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.cumsum.html):
```
import numpy as np
import pandas as pd

quarters = {1: 're9+3', 2: 're', 3: 're3+9', 4: 're6+6'}
col = pd.to_datetime(df['month'], format='%B').dt.quarter.map(quarters)
idx, cols = pd.factorize(col)
df['Q-total'] = (
pd.Series(df.reindex(cols, axis=1).to_numpy()[np.arange(len(df)), idx],
index=df.index)
.groupby([col, df['kpi']]).cumsum()
)
```
output:
```
month kpi re re3+9 re6+6 re9+3 re_o Q-total
0 April sales 1 10 5 2 1 1
1 May sales quantity 2 20 10 4 1 2
2 June sales 3 30 15 6 1 4
3 July sales 4 40 20 8 11 40
4 August sales 5 50 25 10 11 90
5 September sales 6 60 30 12 11 150
6 October sales 7 70 35 14 12 35
7 November sales quantity 8 80 40 16 12 40
8 December sales 9 90 45 20 12 80
9 January sales 10 10 50 10 13 10
10 February sales 11 10 55 10 13 20
11 March sales 12 20 60 20 13 40
```
|
46,756,555
|
I made a dictionary using Python:
```
dictionary = {'key':'a', 'key':'b'}
print(dictionary)
print(dictionary.get("key"))
```
When I run this code it shows the last value of the dictionary. Is there any way to access the first value of the dictionary if the keys of both elements are the same?
|
2017/10/15
|
[
"https://Stackoverflow.com/questions/46756555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4450400/"
] |
It seems I ran into an existing [issue](https://github.com/spring-projects/spring-boot/issues/10647).
So I used the solution provided by [@wilkinsona](https://github.com/wilkinsona) to resolve it. I added a `configuration` with `mainClass` and the project was built successfully.
```
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.lapots.breed.hero.journey.web.HeroJourneyWebApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
```
|
You can use that approach (adding a configuration to the pom), but it only works in the compiler; it doesn't work in the real world. If you want to use a .jar or .war, it will not work.
**You must put the com.lapots.breed.hero.journey package under the java source folder.** Then it always works.
|
46,756,555
|
I made a dictionary using Python:
```
dictionary = {'key':'a', 'key':'b'}
print(dictionary)
print(dictionary.get("key"))
```
When I run this code it shows the last value of the dictionary. Is there any way to access the first value of the dictionary if the keys of both elements are the same?
|
2017/10/15
|
[
"https://Stackoverflow.com/questions/46756555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4450400/"
] |
It seems I ran into an existing [issue](https://github.com/spring-projects/spring-boot/issues/10647).
So I used the solution provided by [@wilkinsona](https://github.com/wilkinsona) to resolve it. I added a `configuration` with `mainClass` and the project was built successfully.
```
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.lapots.breed.hero.journey.web.HeroJourneyWebApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
```
|
I had a similar problem.
```
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for springboot-kafka 0.0.1-SNAPSHOT:
[INFO]
[INFO] springboot-kafka ................................... SUCCESS [ 1.016 s]
[INFO] springboot-kafka.model ............................. FAILURE [ 1.862 s]
[INFO] springboot-kafka.producer .......................... SKIPPED
[INFO] springboot-kafka.consumer .......................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3.252 s
[INFO] Finished at: 2020-10-07T20:58:34+03:30
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage (repackage) on project springboot-kafka.model: Execution repackage of goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage failed: Unable to find main class -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage (repackage) on project springboot-kafka.model: Execution repackage of goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage failed: Unable to find main class
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:215)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:957)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:289)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:193)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:347)
Caused by: org.apache.maven.plugin.PluginExecutionException: Execution repackage of goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage failed: Unable to find main class
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:148)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:210)
... 20 more
Caused by: java.lang.IllegalStateException: Unable to find main class
at org.springframework.util.Assert.state(Assert.java:76)
at org.springframework.boot.loader.tools.Packager.addMainAndStartAttributes(Packager.java:249)
at org.springframework.boot.loader.tools.Packager.buildManifest(Packager.java:231)
at org.springframework.boot.loader.tools.Packager.write(Packager.java:174)
at org.springframework.boot.loader.tools.Repackager.repackage(Repackager.java:135)
at org.springframework.boot.loader.tools.Repackager.repackage(Repackager.java:122)
at org.springframework.boot.maven.RepackageMojo.repackage(RepackageMojo.java:175)
at org.springframework.boot.maven.RepackageMojo.execute(RepackageMojo.java:165)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
... 21 more
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <args> -rf :springboot-kafka.model
```
My main pom.xml looks like this
```
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.3.4.RELEASE</version>
<relativePath /> <!-- lookup parent from repository -->
</parent>
<groupId>com.example</groupId>
<artifactId>springboot-kafka</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>springboot-kafka</name>
<description>Demo project for Spring Boot</description>
<packaging>pom</packaging>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<optional>true</optional>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
<repositories>
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
</pluginRepository>
</pluginRepositories>
<modules>
<module>springboot-kafka.producer</module>
<module>springboot-kafka.consumer</module>
<module>springboot-kafka.model</module>
</modules>
</project>
```
I put the `<plugins>` tag inside a `<pluginManagement>` tag, as in the following code, and my problem was solved.
```
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</pluginManagement>
</build>
```
|
46,756,555
|
I made a dictionary using Python:
```
dictionary = {'key':'a', 'key':'b'}
print(dictionary)
print(dictionary.get("key"))
```
When I run this code it shows the last value of the dictionary. Is there any way to access the first value of the dictionary if the keys of both elements are the same?
|
2017/10/15
|
[
"https://Stackoverflow.com/questions/46756555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4450400/"
] |
It seems I ran into an existing [issue](https://github.com/spring-projects/spring-boot/issues/10647).
So I used the solution provided by [@wilkinsona](https://github.com/wilkinsona) to resolve it. I added a `configuration` with `mainClass` and the project was built successfully.
```
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.lapots.breed.hero.journey.web.HeroJourneyWebApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
```
|
I put a configuration tag in pom.xml with a reference to the main class. That solves the problem.
```xml
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.example.springbootdocker.SpringBootDockerApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
```
|
46,756,555
|
I made a dictionary using Python:
```
dictionary = {'key':'a', 'key':'b'}
print(dictionary)
print(dictionary.get("key"))
```
When I run this code it shows the last value of the dictionary. Is there any way to access the first value of the dictionary if the keys of both elements are the same?
|
2017/10/15
|
[
"https://Stackoverflow.com/questions/46756555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4450400/"
] |
I had a similar problem.
```
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for springboot-kafka 0.0.1-SNAPSHOT:
[INFO]
[INFO] springboot-kafka ................................... SUCCESS [ 1.016 s]
[INFO] springboot-kafka.model ............................. FAILURE [ 1.862 s]
[INFO] springboot-kafka.producer .......................... SKIPPED
[INFO] springboot-kafka.consumer .......................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3.252 s
[INFO] Finished at: 2020-10-07T20:58:34+03:30
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage (repackage) on project springboot-kafka.model: Execution repackage of goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage failed: Unable to find main class -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage (repackage) on project springboot-kafka.model: Execution repackage of goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage failed: Unable to find main class
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:215)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:957)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:289)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:193)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:347)
Caused by: org.apache.maven.plugin.PluginExecutionException: Execution repackage of goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage failed: Unable to find main class
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:148)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:210)
... 20 more
Caused by: java.lang.IllegalStateException: Unable to find main class
at org.springframework.util.Assert.state(Assert.java:76)
at org.springframework.boot.loader.tools.Packager.addMainAndStartAttributes(Packager.java:249)
at org.springframework.boot.loader.tools.Packager.buildManifest(Packager.java:231)
at org.springframework.boot.loader.tools.Packager.write(Packager.java:174)
at org.springframework.boot.loader.tools.Repackager.repackage(Repackager.java:135)
at org.springframework.boot.loader.tools.Repackager.repackage(Repackager.java:122)
at org.springframework.boot.maven.RepackageMojo.repackage(RepackageMojo.java:175)
at org.springframework.boot.maven.RepackageMojo.execute(RepackageMojo.java:165)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
... 21 more
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <args> -rf :springboot-kafka.model
```
My main pom.xml looks like this
```
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.3.4.RELEASE</version>
<relativePath /> <!-- lookup parent from repository -->
</parent>
<groupId>com.example</groupId>
<artifactId>springboot-kafka</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>springboot-kafka</name>
<description>Demo project for Spring Boot</description>
<packaging>pom</packaging>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<optional>true</optional>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
<repositories>
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
</pluginRepository>
</pluginRepositories>
<modules>
<module>springboot-kafka.producer</module>
<module>springboot-kafka.consumer</module>
<module>springboot-kafka.model</module>
</modules>
</project>
```
I put the `<plugins>` tag inside a `<pluginManagement>` tag, as in the following code, and my problem was solved.
```
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</pluginManagement>
</build>
```
|
You can use that approach (adding a configuration to the pom), but it only works in the compiler; it doesn't work in the real world. If you want to use a .jar or .war, it will not work.
**You must put the com.lapots.breed.hero.journey package under the java source folder.** Then it always works.
|
46,756,555
|
I made a dictionary using Python:
```
dictionary = {'key':'a', 'key':'b'}
print(dictionary)
print(dictionary.get("key"))
```
When I run this code it shows the last value of the dictionary. Is there any way to access the first value of the dictionary if the keys of both elements are the same?
|
2017/10/15
|
[
"https://Stackoverflow.com/questions/46756555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4450400/"
] |
I put a configuration tag in pom.xml with a reference to the main class. That solves the problem.
```xml
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.example.springbootdocker.SpringBootDockerApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
```
|
You can use that approach (adding a configuration to the pom), but it only works in the compiler; it doesn't work in the real world. If you want to use a .jar or .war, it will not work.
**You must put the com.lapots.breed.hero.journey package under the java source folder.** Then it always works.
|
46,756,555
|
I made a dictionary using Python:
```
dictionary = {'key':'a', 'key':'b'}
print(dictionary)
print(dictionary.get("key"))
```
When I run this code it shows the last value of the dictionary. Is there any way to access the first value of the dictionary if the keys of both elements are the same?
|
2017/10/15
|
[
"https://Stackoverflow.com/questions/46756555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4450400/"
] |
I had a similar problem.
```
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for springboot-kafka 0.0.1-SNAPSHOT:
[INFO]
[INFO] springboot-kafka ................................... SUCCESS [ 1.016 s]
[INFO] springboot-kafka.model ............................. FAILURE [ 1.862 s]
[INFO] springboot-kafka.producer .......................... SKIPPED
[INFO] springboot-kafka.consumer .......................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3.252 s
[INFO] Finished at: 2020-10-07T20:58:34+03:30
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage (repackage) on project springboot-kafka.model: Execution repackage of goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage failed: Unable to find main class -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage (repackage) on project springboot-kafka.model: Execution repackage of goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage failed: Unable to find main class
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:215)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:957)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:289)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:193)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:347)
Caused by: org.apache.maven.plugin.PluginExecutionException: Execution repackage of goal org.springframework.boot:spring-boot-maven-plugin:2.3.4.RELEASE:repackage failed: Unable to find main class
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:148)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:210)
... 20 more
Caused by: java.lang.IllegalStateException: Unable to find main class
at org.springframework.util.Assert.state(Assert.java:76)
at org.springframework.boot.loader.tools.Packager.addMainAndStartAttributes(Packager.java:249)
at org.springframework.boot.loader.tools.Packager.buildManifest(Packager.java:231)
at org.springframework.boot.loader.tools.Packager.write(Packager.java:174)
at org.springframework.boot.loader.tools.Repackager.repackage(Repackager.java:135)
at org.springframework.boot.loader.tools.Repackager.repackage(Repackager.java:122)
at org.springframework.boot.maven.RepackageMojo.repackage(RepackageMojo.java:175)
at org.springframework.boot.maven.RepackageMojo.execute(RepackageMojo.java:165)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
... 21 more
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <args> -rf :springboot-kafka.model
```
My main pom.xml looks like this
```
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.3.4.RELEASE</version>
<relativePath /> <!-- lookup parent from repository -->
</parent>
<groupId>com.example</groupId>
<artifactId>springboot-kafka</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>springboot-kafka</name>
<description>Demo project for Spring Boot</description>
<packaging>pom</packaging>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<optional>true</optional>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
<repositories>
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
</pluginRepository>
</pluginRepositories>
<modules>
<module>springboot-kafka.producer</module>
<module>springboot-kafka.consumer</module>
<module>springboot-kafka.model</module>
</modules>
</project>
```
I put the `<plugins>` tag inside a `<pluginManagement>` tag, as in the following code, and my problem was solved.
```
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</pluginManagement>
</build>
```
|
I put a configuration tag in pom.xml with a reference to the main class. That solves the problem.
```xml
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>com.example.springbootdocker.SpringBootDockerApplication</mainClass>
</configuration>
</plugin>
</plugins>
</build>
```
|
60,927,188
|
Having recently upgraded a Django project from 2.x to 3.x, I noticed that the `mysql.connector.django` backend (from `mysql-connector-python`) no longer works. The last version of Django that it works with is 2.2.11. It breaks with 3.0. I am using `mysql-connector-python==8.0.19`.
When running `manage.py runserver`, the following error occurs:
```
django.core.exceptions.ImproperlyConfigured: 'mysql.connector.django' isn't an available database backend.
Try using 'django.db.backends.XXX', where XXX is one of:
'mysql', 'oracle', 'postgresql', 'sqlite3'
```
I am aware that this is not an official Django backend but I have to use it on this project for reasons beyond my control.
I am 80% sure this is an issue with the library but I'm just looking to see if there is anything that can be done to resolve it beyond waiting for an update.
UPDATE:
`mysql.connector.django` now works with Django 3+.
|
2020/03/30
|
[
"https://Stackoverflow.com/questions/60927188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6619548/"
] |
For `Django 3.0` and `Django 3.1` I managed to have it working with `mysql-connector-python 8.0.22`. See this <https://dev.mysql.com/doc/relnotes/connector-python/en/news-8-0-22.html>.
|
Connector/Python still supports Python 2.7, which was dropped by Django 3.
We are currently working on adding support for Django 3, stay tuned.
|
60,927,188
|
Having recently upgraded a Django project from 2.x to 3.x, I noticed that the `mysql.connector.django` backend (from `mysql-connector-python`) no longer works. The last version of Django that it works with is 2.2.11. It breaks with 3.0. I am using `mysql-connector-python==8.0.19`.
When running `manage.py runserver`, the following error occurs:
```
django.core.exceptions.ImproperlyConfigured: 'mysql.connector.django' isn't an available database backend.
Try using 'django.db.backends.XXX', where XXX is one of:
'mysql', 'oracle', 'postgresql', 'sqlite3'
```
I am aware that this is not an official Django backend but I have to use it on this project for reasons beyond my control.
I am 80% sure this is an issue with the library but I'm just looking to see if there is anything that can be done to resolve it beyond waiting for an update.
UPDATE:
`mysql.connector.django` now works with Django 3+.
|
2020/03/30
|
[
"https://Stackoverflow.com/questions/60927188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6619548/"
] |
Connector/Python still supports Python 2.7, which was dropped by Django 3.
We are currently working on adding support for Django 3, stay tuned.
|
In settings.py, change the database engine this way:
`'ENGINE': 'django.db.backends.mysql'`
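For example, a minimal `DATABASES` block using the built-in backend (which talks to MySQL through a driver such as `mysqlclient` rather than Connector/Python) might look like this; the names and credentials are placeholders:
```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',  # built-in backend
        'NAME': 'mydb',            # placeholder database name
        'USER': 'myuser',          # placeholder credentials
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}
```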
|
60,927,188
|
Having recently upgraded a Django project from 2.x to 3.x, I noticed that the `mysql.connector.django` backend (from `mysql-connector-python`) no longer works. The last version of Django that it works with is 2.2.11. It breaks with 3.0. I am using `mysql-connector-python==8.0.19`.
When running `manage.py runserver`, the following error occurs:
```
django.core.exceptions.ImproperlyConfigured: 'mysql.connector.django' isn't an available database backend.
Try using 'django.db.backends.XXX', where XXX is one of:
'mysql', 'oracle', 'postgresql', 'sqlite3'
```
I am aware that this is not an official Django backend but I have to use it on this project for reasons beyond my control.
I am 80% sure this is an issue with the library but I'm just looking to see if there is anything that can be done to resolve it beyond waiting for an update.
UPDATE:
`mysql.connector.django` now works with Django 3+.
|
2020/03/30
|
[
"https://Stackoverflow.com/questions/60927188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6619548/"
] |
For `Django 3.0` and `Django 3.1` I managed to have it working with `mysql-connector-python 8.0.22`. See this <https://dev.mysql.com/doc/relnotes/connector-python/en/news-8-0-22.html>.
|
In settings.py, change the database engine this way:
`'ENGINE': 'django.db.backends.mysql'`
|
11,144,245
|
I realize that this is a known [problem](http://python.6.n6.nabble.com/Internationalization-and-caching-not-working-td84364.html), [problem](http://www.mail-archive.com/django-users@googlegroups.com/msg63780.html) but I still have not found and adequate solution.
I want to use the @cache\_page for a some views in my Django apps, like so:
```
@cache_page(24 * 60 * 60)
def some_view(request):
...
```
The problem is that I am also using i18n with a language switcher to switch the language of each page. So, if I turn on caching, I do not get the results I expect; it seems I get whatever page was cached last.
I have tried this:
```
@cache_page(24 * 60 * 60)
@vary_on_headers('Content-Language', 'Accept-Language')
def some_view(request):
...
```
**EDIT** ...and this:
```
@cache_page(24 * 60 * 60)
@vary_on_cookie
def some_view(request):
...
```
**END EDIT**
But I get the same results.
Of course, if I remove the caching everything works as expected.
Any help would be MUCH appreciated.
|
2012/06/21
|
[
"https://Stackoverflow.com/questions/11144245",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/791335/"
] |
1. How long is a string? It depends on how the server is configured. And "application" could be a single form or hundreds. Only a test can tell. In general: build a [high performance server](http://www.wissel.net/blog/d6plinks/SHWL-7RB3P5) preferably with 64Bit architecture and lots of RAM. Make that RAM [available for the JVM](http://xpageswiki.com/web/youatnotes/wiki-xpages.nsf/dx/Memory_Usage_and_Performance). If the applications use attachments, use DAOS, put it on a separate disk - and of course make sure you have the latest version of Domino (8.5.3FP1 at time of this writing)
2. There is the [XPages Toolbox](http://www.openntf.org/internal/home.nsf/project.xsp?action=openDocument&name=xpages%20toolbox) that includes a memory and CPU profiler.
3. It depends on the type of application. Clever use of the scopes for caching, Expression Language and beans instead of SSJS. You leak memory when you forget `.recycle`. Hire an experienced lead developer and [read the book](http://www.ibmpressbooks.com/bookstore/product.asp?isbn=0132486318) also [the other one](http://www.ibmpressbooks.com/bookstore/product.asp?isbn=0132901811) and [two](http://www.ibmpressbooks.com/bookstore/product.asp?isbn=0132943050). Consider threading off longer running code, so users don't need to wait.
4. Depends on your needs. The general lessons of Domino development apply when it comes to db operations, so FTSearch over DBSearch, scope usage over @DBColumn for parameters. EL over SSJS.
5. Typical errors include: all code in the XPages -> use script libraries. Too much @dblookup, @dbcolumn instead of scope. Validation in buttons instead of validators. Violation of decomposition principles. Forgetting to use .recycle(). Designing applications "like old Notes screens" instead of single page interaction. Too little use of partial refresh. No use of caching. Too little object orientation (creating function graves in script libraries).
6. This is a summary of questions 1-5, nothing new to answer.
7. When clustering Domino servers for XPages and putting a load balancer in front, the load balancer needs to be configured to keep a session on the same server, so partial refreshes and Ajax calls reach the server that has the component tree rendered for that user.
|
1. It depends on the server setup. For example, I have an XPages extranet with 12000 registered users spanning approximately 20 XPages applications. That runs on one Windows 2003 server with 4GB RAM and a quad-core CPU. The data amount is about 60GB over these 20 applications. No DAOS, no beans, just SSJS. Performance is excellent, so when I upgrade this installation to 64-bit and DAOS the applications will scale even more. 64-bit and lots of RAM are the key to a lot of users.
2. I haven't done anything around this
3. Make sure to recycle when you do document loops. Use the openntf.org debug toolbar; it will save a lot of time until we have a debugger for XPages.
4. Always keep in mind that whatever you are doing will be done by several users, so try to cut down the number of lookup or getElementByKey calls. Try to use ViewNavigator when you can.
5. It all depends on how many users use the system concurrently. If you have 10000 - 15000 concurrent users then you have to look at what the applications do and how many users will use the same application at the same time.
Those are my insights into the question.
|
62,552,693
|
I'm a newbie when it comes to Python. I have some Python code in an Azure SQL notebook which gets run by a scheduled job. The job notebook runs a couple of SQL notebooks. If the 1st notebook errors, I want an exception to be thrown so that the scheduled job shows as failed, and I don't want the subsequent SQL notebook to run. The Python code is as follows
```
%python
try:
    dbutils.notebook.run("/01. SMETS1Mig/" + dbutils.widgets.get("env_parent_directory") + "/02 Processing Curated Staging/02 Build - Parameterised/STA_1A - CS note Issued", 6000, {
        "env_ingest_db": dbutils.widgets.get("env_ingest_db")
        , "env_stg_db": dbutils.widgets.get("env_stg_db")
        , "env_tech_db": dbutils.widgets.get("env_tech_db")
    })
except:
    print ' Failure in STA_1S'

try:
    dbutils.notebook.run("/01. SMETS1Mig/" + dbutils.widgets.get("env_parent_directory") + "/02 Processing Curated Staging/02 Build - Parameterised/Output CS Notes", 6000, {
        "env_ingest_db": dbutils.widgets.get("env_ingest_db")
        , "env_stg_db": dbutils.widgets.get("env_stg_db")
        , "env_tech_db": dbutils.widgets.get("env_tech_db")
    })
except:
    print ' Error in Output CS Notes'
```
Am I heading in the right direction?
What's the best way to achieve this?
Many Thanks in advance
|
2020/06/24
|
[
"https://Stackoverflow.com/questions/62552693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11989075/"
] |
**Option 1**: Simply don't catch the exception. Just run the first command without the `try` ... `except`, and let the exception bubble up in the normal way. If it is raised (i.e. thrown), then the second command (or anything after it) will not be run. You would then only catch exceptions with the second command.
```
dbutils.notebook.run(...first invocation...)  # exceptions not caught - will bubble

try:
    dbutils.notebook.run(...second invocation...)
except Exception as exc:
    print(' Error in Output CS Notes: {}'.format(exc))
```
**Option 2**: If you want to do some kind of handling of your own, then do catch the exception, but re-`raise` it (or another exception) within the `except` block.
```
try:
    dbutils.notebook.run(...first invocation...)
except Exception as exc:
    print(' Failure in STA_1S: {}'.format(exc))
    raise exc
    # or for example: raise RuntimeError

# ... then do as before with second invocation...
```
(You can also use `raise` without any arguments instead of `raise exc`, as it will default to re-raising the original exception.)
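For illustration, a sketch of that bare-`raise` variant (the `...` placeholders stand for the invocations above):
```
try:
    dbutils.notebook.run(...first invocation...)
except Exception as exc:
    print(' Failure in STA_1S: {}'.format(exc))
    raise  # re-raises the original exception with its traceback intact
```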
|
Sorry - I did not read the question carefully enough.
My original answer is not what you are looking for.
I wrote:
>
> Add a `return` statement:
>
>
>
> ```
> try:
>     # ...
> except Exception as error:
>     print(f'Failure in STA_1S ({error})')
>     return
>
> ```
>
>
But you not only want to leave the function, but actually end its execution with an error code.
What you are looking for is [`sys.exit()`](https://docs.python.org/3/library/sys.html#sys.exit):
>
> Exit from Python. This is implemented by raising the SystemExit
> exception, so cleanup actions specified by finally clauses of try
> statements are honored, and it is possible to intercept the exit
> attempt at an outer level.
>
>
> The optional argument arg can be an integer giving the exit status
> (defaulting to zero), or another type of object. If it is an integer,
> zero is considered “successful termination” and any nonzero value is
> considered “abnormal termination” by shells and the like. Most systems
> require it to be in the range 0–127, and produce undefined results
> otherwise. Some systems have a convention for assigning specific
> meanings to specific exit codes, but these are generally
> underdeveloped; Unix programs generally use 2 for command line syntax
> errors and 1 for all other kind of errors. If another type of object
> is passed, None is equivalent to passing zero, and any other object is
> printed to stderr and results in an exit code of 1. In particular,
> sys.exit("some error message") is a quick way to exit a program when
> an error occurs.
>
>
>
So, here is my revised answer:
```
import sys

try:
    # ...
except Exception as error:
    sys.exit(f'Failure in STA_1S ({error})')
```
|
62,552,693
|
I'm a newbie when it comes to Python. I have some Python code in an Azure SQL notebook which gets run by a scheduled job. The job notebook runs a couple of SQL notebooks. If the 1st notebook errors, I want an exception to be thrown so that the scheduled job shows as failed, and I don't want the subsequent SQL notebook to run. The Python code is as follows
```
%python
try:
    dbutils.notebook.run("/01. SMETS1Mig/" + dbutils.widgets.get("env_parent_directory") + "/02 Processing Curated Staging/02 Build - Parameterised/STA_1A - CS note Issued", 6000, {
        "env_ingest_db": dbutils.widgets.get("env_ingest_db")
        , "env_stg_db": dbutils.widgets.get("env_stg_db")
        , "env_tech_db": dbutils.widgets.get("env_tech_db")
    })
except:
    print ' Failure in STA_1S'

try:
    dbutils.notebook.run("/01. SMETS1Mig/" + dbutils.widgets.get("env_parent_directory") + "/02 Processing Curated Staging/02 Build - Parameterised/Output CS Notes", 6000, {
        "env_ingest_db": dbutils.widgets.get("env_ingest_db")
        , "env_stg_db": dbutils.widgets.get("env_stg_db")
        , "env_tech_db": dbutils.widgets.get("env_tech_db")
    })
except:
    print ' Error in Output CS Notes'
```
Am I heading in the right direction?
What's the best way to achieve this?
Many Thanks in advance
|
2020/06/24
|
[
"https://Stackoverflow.com/questions/62552693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11989075/"
] |
Set a flag that shows whether the first notebook succeeded or not:
```
first_notebook_succeeded = False  # assume failure

try:
    dbutils.notebook.run("/01. SMETS1Mig/" + dbutils.widgets.get("env_parent_directory") + "/02 Processing Curated Staging/02 Build - Parameterised/STA_1A - CS note Issued", 6000, {
        "env_ingest_db": dbutils.widgets.get("env_ingest_db")
        , "env_stg_db": dbutils.widgets.get("env_stg_db")
        , "env_tech_db": dbutils.widgets.get("env_tech_db")
    })
except:
    print ' Failure in STA_1S'
else:
    first_notebook_succeeded = True

if first_notebook_succeeded:  # only run if first notebook ran OK
    try:
        dbutils.notebook.run("/01. SMETS1Mig/" + dbutils.widgets.get("env_parent_directory") + "/02 Processing Curated Staging/02 Build - Parameterised/Output CS Notes", 6000, {
            "env_ingest_db": dbutils.widgets.get("env_ingest_db")
            , "env_stg_db": dbutils.widgets.get("env_stg_db")
            , "env_tech_db": dbutils.widgets.get("env_tech_db")
        })
    except:
        print ' Error in Output CS Notes'
```
You can, of course, also set `first_notebook_succeeded` initially to `True` and then set it to `False` if the first notebook gets an exception.
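A sketch of that inverted variant (placeholders abbreviated, invocations as above):
```
first_notebook_succeeded = True  # assume success
try:
    dbutils.notebook.run(...first invocation...)
except:
    first_notebook_succeeded = False
    print ' Failure in STA_1S'

if first_notebook_succeeded:
    ...  # run the second notebook exactly as above
```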
|
Sorry - I did not read the question carefully enough.
My original answer is not what you are looking for.
I wrote:
>
> Add a `return` statement:
>
>
>
> ```
> try:
>     # ...
> except Exception as error:
>     print(f'Failure in STA_1S ({error})')
>     return
>
> ```
>
>
But you not only want to leave the function, but actually end its execution with an error code.
What you are looking for is [`sys.exit()`](https://docs.python.org/3/library/sys.html#sys.exit):
>
> Exit from Python. This is implemented by raising the SystemExit
> exception, so cleanup actions specified by finally clauses of try
> statements are honored, and it is possible to intercept the exit
> attempt at an outer level.
>
>
> The optional argument arg can be an integer giving the exit status
> (defaulting to zero), or another type of object. If it is an integer,
> zero is considered “successful termination” and any nonzero value is
> considered “abnormal termination” by shells and the like. Most systems
> require it to be in the range 0–127, and produce undefined results
> otherwise. Some systems have a convention for assigning specific
> meanings to specific exit codes, but these are generally
> underdeveloped; Unix programs generally use 2 for command line syntax
> errors and 1 for all other kind of errors. If another type of object
> is passed, None is equivalent to passing zero, and any other object is
> printed to stderr and results in an exit code of 1. In particular,
> sys.exit("some error message") is a quick way to exit a program when
> an error occurs.
>
>
>
So, here is my revised answer:
```
import sys

try:
    # ...
except Exception as error:
    sys.exit(f'Failure in STA_1S ({error})')
```
|
62,552,693
|
I'm a newbie when it comes to Python. I have some Python code in an Azure SQL notebook which gets run by a scheduled job. The job notebook runs a couple of SQL notebooks. If the 1st notebook errors, I want an exception to be thrown so that the scheduled job shows as failed, and I don't want the subsequent SQL notebook to run. The Python code is as follows
```
%python
try:
    dbutils.notebook.run("/01. SMETS1Mig/" + dbutils.widgets.get("env_parent_directory") + "/02 Processing Curated Staging/02 Build - Parameterised/STA_1A - CS note Issued", 6000, {
        "env_ingest_db": dbutils.widgets.get("env_ingest_db")
        , "env_stg_db": dbutils.widgets.get("env_stg_db")
        , "env_tech_db": dbutils.widgets.get("env_tech_db")
    })
except:
    print ' Failure in STA_1S'

try:
    dbutils.notebook.run("/01. SMETS1Mig/" + dbutils.widgets.get("env_parent_directory") + "/02 Processing Curated Staging/02 Build - Parameterised/Output CS Notes", 6000, {
        "env_ingest_db": dbutils.widgets.get("env_ingest_db")
        , "env_stg_db": dbutils.widgets.get("env_stg_db")
        , "env_tech_db": dbutils.widgets.get("env_tech_db")
    })
except:
    print ' Error in Output CS Notes'
```
Am I heading in the right direction?
What's the best way to achieve this?
Many Thanks in advance
|
2020/06/24
|
[
"https://Stackoverflow.com/questions/62552693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11989075/"
] |
**Option 1**: Simply don't catch the exception. Just run the first command without the `try` ... `except`, and let the exception bubble up in the normal way. If it is raised (i.e. thrown), then the second command (or anything after it) will not be run. You would then only catch exceptions with the second command.
```
dbutils.notebook.run(...first invocation...)  # exceptions not caught - will bubble

try:
    dbutils.notebook.run(...second invocation...)
except Exception as exc:
    print(' Error in Output CS Notes: {}'.format(exc))
```
**Option 2**: If you want to do some kind of handling of your own, then do catch the exception, but re-`raise` it (or another exception) within the `except` block.
```
try:
    dbutils.notebook.run(...first invocation...)
except Exception as exc:
    print(' Failure in STA_1S: {}'.format(exc))
    raise exc
    # or for example: raise RuntimeError

# ... then do as before with second invocation...
```
(You can also use `raise` without any arguments instead of `raise exc`, as it will default to re-raising the original exception.)
|
Set a flag that shows whether the first notebook succeeded or not:
```
first_notebook_succeeded = False  # assume failure

try:
    dbutils.notebook.run("/01. SMETS1Mig/" + dbutils.widgets.get("env_parent_directory") + "/02 Processing Curated Staging/02 Build - Parameterised/STA_1A - CS note Issued", 6000, {
        "env_ingest_db": dbutils.widgets.get("env_ingest_db")
        , "env_stg_db": dbutils.widgets.get("env_stg_db")
        , "env_tech_db": dbutils.widgets.get("env_tech_db")
    })
except:
    print ' Failure in STA_1S'
else:
    first_notebook_succeeded = True

if first_notebook_succeeded:  # only run if first notebook ran OK
    try:
        dbutils.notebook.run("/01. SMETS1Mig/" + dbutils.widgets.get("env_parent_directory") + "/02 Processing Curated Staging/02 Build - Parameterised/Output CS Notes", 6000, {
            "env_ingest_db": dbutils.widgets.get("env_ingest_db")
            , "env_stg_db": dbutils.widgets.get("env_stg_db")
            , "env_tech_db": dbutils.widgets.get("env_tech_db")
        })
    except:
        print ' Error in Output CS Notes'
```
You can, of course, also set `first_notebook_succeeded` initially to `True` and then set it to `False` if the first notebook gets an exception.
|
11,026,205
|
I'm currently concatenating adjacent cells in excel to repeat common HTML elements and divs - it feels like I've gone down a strange excel path in developing my webpage, and I was wondering if an experienced web designer could let me know how I might accomplish my goals for the site with a more conventional method (aiming to use python and mysql).
1. I have about 40 images in my site. On this page I want to see them all lined up in a grid so I have three divs next to each other on each row, all floating left.
2. Instead of manually typing all the code needed for each row of images, I started concatenating repeating portions of the code with distinct portions of the code. I took the four div classes and separated the code that would need to change for each image (src="XXX" and "XXX").
Example:
```
         Column D            Column E        Column F
Row 1    <div><img src="     filename.jpg    "></div>
```
The formula to generate my HTML looks like this:
`= D1 & E1 & F1`
I'm sure it would be easier to create a MySQL database with the filepaths and attributes saved for each of my images, so I could look through the data with a scripting language. Can anyone offer their advice or a quick script for automating the html generation?
|
2012/06/14
|
[
"https://Stackoverflow.com/questions/11026205",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1152532/"
] |
Wow, that sounds really painful.
If all you have is 40 images that you want to generate HTML for, and the rest of your site is static, it may be simplest just to have a single text file with each line containing an image file path. Then, use Python to look at each line, generate the appropriate HTML, and concatenate it.
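A minimal sketch of that approach, assuming a plain-text manifest named `images.txt` with one image path per line (both file names here are placeholders):
```
# read one image path per line from a plain-text manifest
with open('images.txt') as f:
    paths = [line.strip() for line in f if line.strip()]

# generate one grid cell per image
html = '\n'.join('<div><img src="%s"></div>' % p for p in paths)

with open('grid.html', 'w') as out:
    out.write(html)
```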
If your site has more complex interaction requirements, then Django could be the way to go. Django is a great Python web app framework, and it supports MySQL, as well as a number of different db backends.
|
You could keep these and only these images in their own directory and then use simple shell scripts to generate that section of static html.
Assuming you already put the files in place maybe like this:
```
cp <all_teh_kitteh_images> images/grid
```
This command will generate html
```
for file in images/grid/*.jpg ; do echo "<div><img src=\"$file\"></div>" ; done
```
Oh, I'm sorry, I missed the python part of your question (IMO MySQL is overkill, you have no relations, don't use a relational database) Here is the same thing in python.
```
import glob
for file in glob.glob('images/grid/*.jpg'):
    print "<div><img src=\"%s\"></div>" % file
```
|
24,294,015
|
I tried to install psycopg2 for python2.6 but my system also has 2.7 installed.
I did
```
>sudo -E pip install psycopg2
Downloading/unpacking psycopg2
Downloading psycopg2-2.5.3.tar.gz (690kB): 690kB downloaded
Running setup.py (path:/private/var/folders/yw/qn50zv_151bfq2vxhc60l8s5vkymm3/T/pip_build_root/psycopg2/setup.py) egg_info for package psycopg2
Installing collected packages: psycopg2
Running setup.py install for psycopg2
building 'psycopg2._psycopg' extension
cc -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -Qunused-arguments -Qunused-arguments -arch x86_64 -arch i386 -pipe -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.3 (dt dec pq3 ext)" -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -DPG_VERSION_HEX=0x09010B -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -I. -I/usr/local/Cellar/postgresql91/9.1.11/include -I/usr/local/Cellar/postgresql91/9.1.11/include/server -c psycopg/psycopgmodule.c -o build/temp.macosx-10.9-intel-2.7/psycopg/psycopgmodule.o
...
```
You can see that pip is associating psycopg2 with python2.7 during the install process. This pip install completed with a reported success result, but when I then ran a 2.6 script that imports psycopg2, it couldn't find it:
```
import psycopg2
ImportError: No module named psycopg2
```
How do I install psycopg2 for python2.6 when there is also 2.7 installed?
|
2014/06/18
|
[
"https://Stackoverflow.com/questions/24294015",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1312080/"
] |
You can use [shoryuken](https://github.com/phstc/shoryuken/).
It will consume your messages continuously until your queue has messages.
```
shoryuken -r your_worker.rb -C shoryuken.yml \
-l log/shoryuken.log -p shoryuken.pid -d
```
|
As you've probably already discovered, there isn't one obvious **right way™** to handle this kind of thing. It depends a lot on what work you do for each job, the size of your app and infrastructure, and your personal preferences on APIs, message queuing philosophies, and architecture.
That said, I'd probably lean towards option 2 based on your description. Sidekiq and delayed\_job don't speak SQS, and while you could teach them with something like [sidekiq-sqs](https://github.com/jmoses/sidekiq-sqs), it sounds like you might outgrow them pretty quick. Unless you need your Rails environment available to your workers, you'd have better luck separating your queue consumers into distinct applications, which makes it easy to scale horizontally just by starting more processes. It also allows you to further decouple the workers from your Rails app, which can make things easier to deploy and administer.
Option 3 is a non-starter IMO. You'll want to have a daemon running to process jobs as they come in, and if rake has to load your environment on each job, things are going to get sloooow.
|
46,582,577
|
I need help. I'm trying to filter out and write to another csv file the data collected after 10769 s in the elapsed\_seconds column, together with the acceleration magnitude. However, I'm getting KeyError: 0...
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data = pd.read_csv(accelDataPath)
data.columns = ['t', 'x', 'y', 'z']
# calculate the magnitude of acceleration
data['m'] = np.sqrt(data['x']**2 + data['y']**2 + data['z']**2)
data['datetime'] = pd.DatetimeIndex(pd.to_datetime(data['t'], unit = 'ms').dt.tz_localize('UTC').dt.tz_convert('US/Eastern'))
data['elapsed_seconds'] = (data['datetime'] - data['datetime'].iloc[0]).dt.total_seconds()
i=0
csv = open("filteredData.csv", "w+")
csv.write("Event at, Magnitude \n")
while (i < len(data[data.elapsed_seconds > 10769])):
    csv.write(str(data[data.elapsed_seconds > 10769][i]) + ", " + str(data[data.m][i]) + "\n")
csv.close()
```
Error that I am getting is:
```
Traceback (most recent call last):
File "C:\Users\Desktop\AnalyzingData.py", line 37, in <module>
csv.write(str(data[data.elapsed_seconds > 10769][i]) + ", " + str(data[data.m][i]) + "\n")
File "C:\python\lib\site-packages\pandas\core\frame.py", line 1964, in __getitem__
return self._getitem_column(key)
File "C:\python\lib\site-packages\pandas\core\frame.py", line 1971, in _getitem_column
return self._get_item_cache(key)
File "C:\python\lib\site-packages\pandas\core\generic.py", line 1645, in _get_item_cache
values = self._data.get(item)
File "C:\python\lib\site-packages\pandas\core\internals.py", line 3590, in get
loc = self.items.get_loc(item)
File "C:\python\lib\site-packages\pandas\core\indexes\base.py", line 2444, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas\_libs\index.pyx", line 132, in pandas._libs.index.IndexEngine.get_loc (pandas\_libs\index.c:5280)
File "pandas\_libs\index.pyx", line 154, in pandas._libs.index.IndexEngine.get_loc (pandas\_libs\index.c:5126)
File "pandas\_libs\hashtable_class_helper.pxi", line 1210, in pandas._libs.hashtable.PyObjectHashTable.get_item (pandas\_libs\hashtable.c:20523)
File "pandas\_libs\hashtable_class_helper.pxi", line 1218, in pandas._libs.hashtable.PyObjectHashTable.get_item (pandas\_libs\hashtable.c:20477)
KeyError: 0
```
|
2017/10/05
|
[
"https://Stackoverflow.com/questions/46582577",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2995019/"
] |
change this line
```
csv.write(
str(data[data.elapsed_seconds > 10769][i]) + ", " + str(data[data.m][i]) + "\n"
)
```
To this:
```
csv.write(
    str(data[data.elapsed_seconds > 10769].iloc[i]) + ", " + str(data['m'].iloc[i]) + "\n"
)
```
Also, notice that you are not increasing `i`, like this `i += 1`, in the while loop.
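Putting both fixes together, a sketch of the corrected loop (assuming the "Event at" column should hold `elapsed_seconds`):
```
filtered = data[data.elapsed_seconds > 10769]
csv = open("filteredData.csv", "w+")
csv.write("Event at, Magnitude \n")
i = 0
while i < len(filtered):
    csv.write(str(filtered['elapsed_seconds'].iloc[i]) + ", " + str(filtered['m'].iloc[i]) + "\n")
    i += 1  # advance the index so the loop terminates
csv.close()
```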
---
Or, better, use `df.to_csv` as follows:
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data = pd.read_csv(accelDataPath)
data.columns = ['t', 'x', 'y', 'z']
# calculate the magnitude of acceleration
data['m'] = np.sqrt(data['x']**2 + data['y']**2 + data['z']**2)
data['datetime'] = pd.DatetimeIndex(pd.to_datetime(data['t'], unit = 'ms').dt.tz_localize('UTC').dt.tz_convert('US/Eastern'))
data['elapsed_seconds'] = (data['datetime'] - data['datetime'].iloc[0]).dt.total_seconds()
# write to csv using data.to_csv
data[data.elapsed_seconds > 10769][['elapsed_seconds', 'm']].to_csv("filteredData.csv",
sep=",",
index=False)
```
|
I had the same issue, which was resolved by following this [recommendation](https://github.com/influxdata/influxdb-python/issues/497). In short, instead of `df[0]`, do an explicit `df['columnname']`
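For example (the column name here is hypothetical):
```
value = df['elapsed_seconds'].iloc[0]  # name the column explicitly, then index the row positionally
```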
|
72,429,361
|
Hello, I am a beginner programmer in Python and I am having trouble with this code. It is a rock paper scissors game. I am not finished yet, but it is supposed to print "I win" if the user does not pick rock when the program picks scissors. However, when the program picks paper and the user picks rock, it does not print "I win". I would like some help, thanks. EDIT - the answer was to add an indent on the else before "I win", thank you everybody.
```
import random
ask0 = input("Rock, Paper, or Scissors? ")
list0 = ["Rock", "Paper", "Scissors"]
r0 = (random.choice(list0))
print("I pick " + r0)
if ask0 == r0:
    print("Tie")
elif ask0 == ("Rock"):
    if r0 == ("Scissors"):
        print("You Win")
else:
    print("I win")
```
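For reference, the fix mentioned in the edit is to indent the `else` so that it pairs with the inner `if`:
```
if ask0 == r0:
    print("Tie")
elif ask0 == ("Rock"):
    if r0 == ("Scissors"):
        print("You Win")
    else:  # now paired with the inner if, so it runs when the program picks Paper
        print("I win")
```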
|
2022/05/30
|
[
"https://Stackoverflow.com/questions/72429361",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19223648/"
] |
I had same issue with "testImplementation 'io.cucumber:cucumber-java8:7.3.3'" (gradle).
I changed to "testImplementation 'io.cucumber:cucumber-java8:7.0.0'" and when I changed it and ran again test then I got a correct error message about the real problem that was "Unrecognized field 'programId'" (for example) and then I can fix the problem (that was a missing field in my class).
|
I installed Jackson JDK8 repository in my pom.xml file and it fixed this problem:
<https://mvnrepository.com/artifact/com.fasterxml.jackson.datatype/jackson-datatype-jdk8>
|
71,979,765
|
If I have python list like
```py
pyList=[‘x@x.x’,’y@y.y’]
```
And I want it to convert it to json array and add `{}` around every object, it should be like that :
```py
arrayJson=[{“email”:”x@x.x”},{“ email”:”y@y.y”}]
```
any idea how to do that ?
|
2022/04/23
|
[
"https://Stackoverflow.com/questions/71979765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18918903/"
] |
You can achieve this by using built-in [json](https://docs.python.org/3/library/json.html#json.dumps) module
```
import json
arrayJson = json.dumps([{"email": item} for item in pyList])
```
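With the list above, `print(arrayJson)` then gives `[{"email": "x@x.x"}, {"email": "y@y.y"}]`.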
|
Try to Google this kind of stuff first. :)
```
import json
array = [1, 2, 3]
jsonArray = json.dumps(array)
```
By the way, the result you asked for cannot be achieved directly with the list you provided.
You need to use Python dictionaries to get JSON objects. The conversion is like below:
```
Python -> JSON
list -> array
dictionary -> object
```
And here is the link to the docs
<https://docs.python.org/3/library/json.html>
|
71,979,765
|
If I have python list like
```py
pyList=[‘x@x.x’,’y@y.y’]
```
And I want it to convert it to json array and add `{}` around every object, it should be like that :
```py
arrayJson=[{“email”:”x@x.x”},{“ email”:”y@y.y”}]
```
any idea how to do that ?
|
2022/04/23
|
[
"https://Stackoverflow.com/questions/71979765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18918903/"
] |
You can achieve this by using built-in [json](https://docs.python.org/3/library/json.html#json.dumps) module
```
import json
arrayJson = json.dumps([{"email": item} for item in pyList])
```
|
pip install jsonwhatever.
You should try it; you can put anything into it:
```
from jsonwhatever import jsonwhatever as jw
pyList=['x@x.x','y@y.y']
jsonwe = jw.JsonWhatEver()
mytr = jsonwe.jsonwhatever('my_custom_list', pyList)
print(mytr)
```
|
34,328,759
|
I'd like to put together a command that will print out a string of 32 hexadecimal digits. I've got a Python script that works:
```
python -c 'import random ; print "".join(map(lambda t: format(t, "02X"), [random.randrange(256) for x in range(16)]))'
```
This generates output like:
```
6EF6B30F9E557F948C402C89002C7C8A
```
Which is what I need.
On a Mac, I can even do this:
```
uuidgen | tr -d '-'
```
However, I don't have access to the more sophisticated scripting languages ruby and python, and I won't be on a Mac (so no uuidgen). I need to stick with more bash'ish tools like sed, awk, /dev/random because I'm on a limited platform. Is there a way to do this?
|
2015/12/17
|
[
"https://Stackoverflow.com/questions/34328759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1002430/"
] |
If you are looking for a single command and have openssl installed, see below. Generate random 16 bytes (32 hex symbols) and encode in hex (also -base64 is supported).
```
openssl rand -hex 16
```
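The output is lowercase hex; if you need uppercase digits like the Python example, piping through `tr 'a-f' 'A-F'` is one option.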
|
If you want to generate output of **arbitrary length, including even/odd number of characters**:
```sh
cat /dev/urandom | hexdump --no-squeezing -e '/1 "%x"' | head -c 31
```
Or to maximize efficiency over readability/composeability:
```
hexdump --no-squeezing -e '/1 "%x"' -n 15 /dev/urandom
```
|
34,328,759
|
I'd like to put together a command that will print out a string of 32 hexadecimal digits. I've got a Python script that works:
```
python -c 'import random ; print "".join(map(lambda t: format(t, "02X"), [random.randrange(256) for x in range(16)]))'
```
This generates output like:
```
6EF6B30F9E557F948C402C89002C7C8A
```
Which is what I need.
On a Mac, I can even do this:
```
uuidgen | tr -d '-'
```
However, I don't have access to the more sophisticated scripting languages ruby and python, and I won't be on a Mac (so no uuidgen). I need to stick with more bash'ish tools like sed, awk, /dev/random because I'm on a limited platform. Is there a way to do this?
|
2015/12/17
|
[
"https://Stackoverflow.com/questions/34328759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1002430/"
] |
If you have `hexdump` then:
```
hexdump -vn16 -e'4/4 "%08X" 1 "\n"' /dev/urandom
```
should do the job.
**Explanation:**
* `-v` to print all data (by default `hexdump` replaces repetition by `*`).
* `-n16` to consume 16 bytes of input (32 hex digits = 16 bytes).
* `4/4 "%08X"` to iterate four times, consume 4 bytes per iteration and print the corresponding 32 bits value as 8 hex digits, with leading zeros, if needed.
* `1 "\n"` to end with a single newline.
|
If you want to generate output of **arbitrary length, including even/odd number of characters**:
```sh
cat /dev/urandom | hexdump --no-squeezing -e '/1 "%x"' | head -c 31
```
Or to maximize efficiency over readability/composeability:
```
hexdump --no-squeezing -e '/1 "%x"' -n 15 /dev/urandom
```
|
34,328,759
|
I'd like to put together a command that will print out a string of 32 hexadecimal digits. I've got a Python script that works:
```
python -c 'import random ; print "".join(map(lambda t: format(t, "02X"), [random.randrange(256) for x in range(16)]))'
```
This generates output like:
```
6EF6B30F9E557F948C402C89002C7C8A
```
Which is what I need.
On a Mac, I can even do this:
```
uuidgen | tr -d '-'
```
However, I don't have access to the more sophisticated scripting languages ruby and python, and I won't be on a Mac (so no uuidgen). I need to stick with more bash'ish tools like sed, awk, /dev/random because I'm on a limited platform. Is there a way to do this?
|
2015/12/17
|
[
"https://Stackoverflow.com/questions/34328759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1002430/"
] |
If you have `hexdump` then:
```
hexdump -vn16 -e'4/4 "%08X" 1 "\n"' /dev/urandom
```
should do the job.
**Explanation:**
* `-v` to print all data (by default `hexdump` replaces repetition by `*`).
* `-n16` to consume 16 bytes of input (32 hex digits = 16 bytes).
* `4/4 "%08X"` to iterate four times, consume 4 bytes per iteration and print the corresponding 32 bits value as 8 hex digits, with leading zeros, if needed.
* `1 "\n"` to end with a single newline.
|
Here is a version not using `dev/random`:
```
awk -v len=32 'BEGIN {
srand('$RANDOM');
while(len--) {
n=int(rand()*16);
printf("%c", n+(n>9 ? 55 : 48));
};}'
```
|
34,328,759
|
I'd like to put together a command that will print out a string of 32 hexadecimal digits. I've got a Python script that works:
```
python -c 'import random ; print "".join(map(lambda t: format(t, "02X"), [random.randrange(256) for x in range(16)]))'
```
This generates output like:
```
6EF6B30F9E557F948C402C89002C7C8A
```
Which is what I need.
On a Mac, I can even do this:
```
uuidgen | tr -d '-'
```
However, I don't have access to the more sophisticated scripting languages ruby and python, and I won't be on a Mac (so no uuidgen). I need to stick with more bash'ish tools like sed, awk, /dev/random because I'm on a limited platform. Is there a way to do this?
|
2015/12/17
|
[
"https://Stackoverflow.com/questions/34328759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1002430/"
] |
Here are a few more options, all of which have the nice property of providing an obvious and easy way to directly select the length of the output string. In all the cases below, changing the '32' to your desired string length is all you need to do.
```
#works in bash and busybox, but not in ksh
tr -dc 'A-F0-9' < /dev/urandom | head -c32
#works in bash and ksh, but not in busybox
tr -dc 'A-F0-9' < /dev/urandom | dd status=none bs=1 count=32
#works in bash, ksh, AND busybox! w00t!
tr -dc 'A-F0-9' < /dev/urandom | dd bs=1 count=32 2>/dev/null
```
EDIT: Tested in different shells.
|
If you want to generate output of **arbitrary length, including even/odd number of characters**:
```sh
cat /dev/urandom | hexdump --no-squeezing -e '/1 "%x"' | head -c 31
```
Or to maximize efficiency over readability/composeability:
```
hexdump --no-squeezing -e '/1 "%x"' -n 15 /dev/urandom
```
|
34,328,759
|
I'd like to put together a command that will print out a string of 32 hexadecimal digits. I've got a Python script that works:
```
python -c 'import random ; print "".join(map(lambda t: format(t, "02X"), [random.randrange(256) for x in range(16)]))'
```
This generates output like:
```
6EF6B30F9E557F948C402C89002C7C8A
```
Which is what I need.
On a Mac, I can even do this:
```
uuidgen | tr -d '-'
```
However, I don't have access to the more sophisticated scripting languages ruby and python, and I won't be on a Mac (so no uuidgen). I need to stick with more bash'ish tools like sed, awk, /dev/random because I'm on a limited platform. Is there a way to do this?
|
2015/12/17
|
[
"https://Stackoverflow.com/questions/34328759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1002430/"
] |
If you have `hexdump` then:
```
hexdump -vn16 -e'4/4 "%08X" 1 "\n"' /dev/urandom
```
should do the job.
**Explanation:**
* `-v` to print all data (by default `hexdump` replaces repetition by `*`).
* `-n16` to consume 16 bytes of input (32 hex digits = 16 bytes).
* `4/4 "%08X"` to iterate four times, consume 4 bytes per iteration and print the corresponding 32 bits value as 8 hex digits, with leading zeros, if needed.
* `1 "\n"` to end with a single newline.
|
Here are a few more options, all of which have the nice property of providing an obvious and easy way to directly select the length of the output string. In all the cases below, changing the '32' to your desired string length is all you need to do.
```
#works in bash and busybox, but not in ksh
tr -dc 'A-F0-9' < /dev/urandom | head -c32
#works in bash and ksh, but not in busybox
tr -dc 'A-F0-9' < /dev/urandom | dd status=none bs=1 count=32
#works in bash, ksh, AND busybox! w00t!
tr -dc 'A-F0-9' < /dev/urandom | dd bs=1 count=32 2>/dev/null
```
EDIT: Tested in different shells.
|
34,328,759
|
I'd like to put together a command that will print out a string of 32 hexadecimal digits. I've got a Python script that works:
```
python -c 'import random ; print "".join(map(lambda t: format(t, "02X"), [random.randrange(256) for x in range(16)]))'
```
This generates output like:
```
6EF6B30F9E557F948C402C89002C7C8A
```
Which is what I need.
On a Mac, I can even do this:
```
uuidgen | tr -d '-'
```
However, I don't have access to the more sophisticated scripting languages ruby and python, and I won't be on a Mac (so no uuidgen). I need to stick with more bash'ish tools like sed, awk, /dev/random because I'm on a limited platform. Is there a way to do this?
|
2015/12/17
|
[
"https://Stackoverflow.com/questions/34328759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1002430/"
] |
If you are looking for a single command and have openssl installed, see below. Generate random 16 bytes (32 hex symbols) and encode in hex (also -base64 is supported).
```
openssl rand -hex 16
```
|
Try:
```
xxd -u -l 16 -p /dev/urandom
```
Example output:
```
C298212CD8B55F2E193FFA16165E95E3
```
And to convert it back to binary:
```
echo -n C298212CD8B55F2E193FFA16165E95E3 | xxd -r -p
```
|
34,328,759
|
I'd like to put together a command that will print out a string of 32 hexadecimal digits. I've got a Python script that works:
```
python -c 'import random ; print "".join(map(lambda t: format(t, "02X"), [random.randrange(256) for x in range(16)]))'
```
This generates output like:
```
6EF6B30F9E557F948C402C89002C7C8A
```
Which is what I need.
On a Mac, I can even do this:
```
uuidgen | tr -d '-'
```
However, I don't have access to the more sophisticated scripting languages ruby and python, and I won't be on a Mac (so no uuidgen). I need to stick with more bash'ish tools like sed, awk, /dev/random because I'm on a limited platform. Is there a way to do this?
|
2015/12/17
|
[
"https://Stackoverflow.com/questions/34328759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1002430/"
] |
If you are looking for a single command and have openssl installed, see below. Generate random 16 bytes (32 hex symbols) and encode in hex (also -base64 is supported).
```
openssl rand -hex 16
```
|
Here are a few more options, all of which have the nice property of providing an obvious and easy way to directly select the length of the output string. In all the cases below, changing the '32' to your desired string length is all you need to do.
```
#works in bash and busybox, but not in ksh
tr -dc 'A-F0-9' < /dev/urandom | head -c32
#works in bash and ksh, but not in busybox
tr -dc 'A-F0-9' < /dev/urandom | dd status=none bs=1 count=32
#works in bash, ksh, AND busybox! w00t!
tr -dc 'A-F0-9' < /dev/urandom | dd bs=1 count=32 2>/dev/null
```
EDIT: Tested in different shells.
|
34,328,759
|
I'd like to put together a command that will print out a string of 32 hexadecimal digits. I've got a Python script that works:
```
python -c 'import random ; print "".join(map(lambda t: format(t, "02X"), [random.randrange(256) for x in range(16)]))'
```
This generates output like:
```
6EF6B30F9E557F948C402C89002C7C8A
```
Which is what I need.
On a Mac, I can even do this:
```
uuidgen | tr -d '-'
```
However, I don't have access to the more sophisticated scripting languages ruby and python, and I won't be on a Mac (so no uuidgen). I need to stick with more bash'ish tools like sed, awk, /dev/random because I'm on a limited platform. Is there a way to do this?
|
2015/12/17
|
[
"https://Stackoverflow.com/questions/34328759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1002430/"
] |
If you have `hexdump` then:
```
hexdump -vn16 -e'4/4 "%08X" 1 "\n"' /dev/urandom
```
should do the job.
**Explanation:**
* `-v` to print all data (by default `hexdump` replaces repetition by `*`).
* `-n16` to consume 16 bytes of input (32 hex digits = 16 bytes).
* `4/4 "%08X"` to iterate four times, consume 4 bytes per iteration and print the corresponding 32 bits value as 8 hex digits, with leading zeros, if needed.
* `1 "\n"` to end with a single newline.
|
Try:
```
xxd -u -l 16 -p /dev/urandom
```
Example output:
```
C298212CD8B55F2E193FFA16165E95E3
```
And to convert it back to binary:
```
echo -n C298212CD8B55F2E193FFA16165E95E3 | xxd -r -p
```
|
34,328,759
|
I'd like to put together a command that will print out a string of 32 hexadecimal digits. I've got a Python script that works:
```
python -c 'import random ; print "".join(map(lambda t: format(t, "02X"), [random.randrange(256) for x in range(16)]))'
```
This generates output like:
```
6EF6B30F9E557F948C402C89002C7C8A
```
Which is what I need.
On a Mac, I can even do this:
```
uuidgen | tr -d '-'
```
However, I don't have access to the more sophisticated scripting languages ruby and python, and I won't be on a Mac (so no uuidgen). I need to stick with more bash'ish tools like sed, awk, /dev/random because I'm on a limited platform. Is there a way to do this?
|
2015/12/17
|
[
"https://Stackoverflow.com/questions/34328759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1002430/"
] |
If you are looking for a single command and have openssl installed, see below. Generate random 16 bytes (32 hex symbols) and encode in hex (also -base64 is supported).
```
openssl rand -hex 16
```
|
you can also use `od` command like this
```
od -N32 -x < /dev/urandom | head -n1 | cut -b9- | sed 's/ //gi'
```
good luck
|
34,328,759
|
I'd like to put together a command that will print out a string of 32 hexadecimal digits. I've got a Python script that works:
```
python -c 'import random ; print "".join(map(lambda t: format(t, "02X"), [random.randrange(256) for x in range(16)]))'
```
This generates output like:
```
6EF6B30F9E557F948C402C89002C7C8A
```
Which is what I need.
On a Mac, I can even do this:
```
uuidgen | tr -d '-'
```
However, I don't have access to the more sophisticated scripting languages ruby and python, and I won't be on a Mac (so no uuidgen). I need to stick with more bash'ish tools like sed, awk, /dev/random because I'm on a limited platform. Is there a way to do this?
|
2015/12/17
|
[
"https://Stackoverflow.com/questions/34328759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1002430/"
] |
Here are a few more options, all of which have the nice property of providing an obvious and easy way to directly select the length of the output string. In all the cases below, changing the '32' to your desired string length is all you need to do.
```
#works in bash and busybox, but not in ksh
tr -dc 'A-F0-9' < /dev/urandom | head -c32
#works in bash and ksh, but not in busybox
tr -dc 'A-F0-9' < /dev/urandom | dd status=none bs=1 count=32
#works in bash, ksh, AND busybox! w00t!
tr -dc 'A-F0-9' < /dev/urandom | dd bs=1 count=32 2>/dev/null
```
EDIT: Tested in different shells.
|
you can also use `od` command like this
```
od -N32 -x < /dev/urandom | head -n1 | cut -b9- | sed 's/ //gi'
```
good luck
|
62,175,053
|
I'm working on Windows 7 64bits with Anaconda 3. On my environment Nifti, I have installed Tensorflow 2.1.0, Keras 2.3.1 and Python 3.7.7.
On Visual Studio Code there is a problem with all of these imports:
```
from tensorflow.python.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Conv2D, Conv2DTranspose, UpSampling2D, MaxPooling2D, Flatten, ZeroPadding2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
```
I get these errors:
```
No name 'python' in module 'tensorflow'
Unable to import 'tensorflow.python.keras.models'
Unable to import 'tensorflow.keras.layers'
Unable to import 'tensorflow.keras.preprocessing.image'
Unable to import 'tensorflow.keras.optimizers'
```
Visual Studio Code is using the same anaconda environment: `D:\Users\VansFannel\Programs\anaconda3\envs\nifti`. I have checked it on "Python: Select Interpreter command" option in Visual Studio.
If I do this on a CMD shell with the nifti environment activated, `python -c 'from tensorflow.python.keras.models import Model'`, I don't get any error.
If I do with **iPython**:
```
from tensorflow.python.keras.models import Model
```
I don't get any error either.
I have checked `python.pythonpath` settings, and it points to: `D:\Users\VansFannel\Programs\anaconda3\envs\nifti`
And in the bottom left corner I can see:
[](https://i.stack.imgur.com/eruXC.png)
When I open a new Terminal on Visual Studio Code, I get these messages:
>
>
> ```
> Microsoft Windows [Versión 6.1.7601]
> Copyright (c) 2009 Microsoft Corporation. Reservados todos los derechos.
>
> D:\Sources\Repos\University\TFM\PruebasPython\Nifty>D:/Usuarios/VansFannel/Programs/anaconda3/Scripts/activate
>
> (base) D:\Sources\Repos\University\TFM\PruebasPython\Nifty>conda activate nifti
>
> (nifti) D:\Sources\Repos\University\TFM\PruebasPython\Nifty>
>
> ```
>
>
If I run the code in Visual Studio Code with `Ctrl. + F5`, **it runs** without any errors although it displays the errors on the `Problems` tab.
With pyCharm, I don't get any errors.
How can I fix this problem?
|
2020/06/03
|
[
"https://Stackoverflow.com/questions/62175053",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/68571/"
] |
I had tried your code, and my suggestion is to switch from pylint to another linter; keep away from it for this case. Maybe you can give flake8 a try:
```
"python.linting.pylintEnabled": false,
"python.linting.flake8Enabled": true,
```
This problem occurs because pylint can't search the path of the Anaconda environment. As 'tensorflow' can only be installed through conda, you can only choose an environment created by Anaconda, but pylint doesn't follow the environment change to update its search path, so it reports an import error. Stupid linting.
|
**If you're using Anaconda Virtual Environment**
1. Close all **Open Terminals** in VS Code
2. Open a new terminal
3. Write the path to your activate folder of Anaconda in Terminal
>
> Example: `E:/Softwares/AnacondaFolder/Scripts/activate`
>
>
>
This should now show **(base)** written at the start of your folder path
4. Now, conda activate
>
> Example: `conda activate Nifti`
>
>
>
This should now show **(Nifti)** written at the start of your folder path
Now, if you import something, VS Code will recognize it.
|
14,737,499
|
I followed the top solution to Flattening an irregular list of lists in python ([Flatten (an irregular) list of lists](https://stackoverflow.com/questions/2158395/flatten-an-irregular-list-of-lists-in-python)) using the following code:
```
def flatten(l):
    for el in l:
        if isinstance(el, collections.Iterable) and not isinstance(el, basestring):
            for sub in flatten(el):
                yield sub
        else:
            yield el
L = [[[1, 2, 3], [4, 5]], 6]
L=flatten(L)
print L
```
And got the following output:
"generator object flatten at 0x100494460"
I'm not sure which packages I need to import or syntax I need to change to get this to work for me.
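For what it's worth, a sketch of what was missing: `collections` must be imported, and the generator has to be consumed (e.g. with `list()`) before printing. This targets Python 2, which the `print L` syntax implies:
```
import collections

# flatten() defined as above
L = [[[1, 2, 3], [4, 5]], 6]
print list(flatten(L))  # [1, 2, 3, 4, 5, 6]
```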
|
2013/02/06
|
[
"https://Stackoverflow.com/questions/14737499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1009215/"
] |
Add a `data-select` attribute to those links:
```
<a href="#contact" data-select="mother@mail.com">Send a mail to mother!</a>
<a href="#contact" data-select="father@mail.com">Send a mail to father!</a>
<a href="#contact" data-select="sister@mail.com">Send a mail to sister!</a>
<a href="#contact" data-select="brother@mail.com">Send a mail to brother!</a>
```
Then use the value of the clicked link to set the value of the `select` element:
```js
var $select = $('#recipient');

$('a[href="#contact"]').click(function () {
    $select.val( $(this).data('select') );
});
```
Here's the fiddle: <http://jsfiddle.net/Dw6Yv/>
---
If you don't want to add those `data-select` attributes to your markup, you can use this:
```js
var $select = $('#recipient'),
    $links = $('a[href="#contact"]');

$links.click(function () {
    $select.prop('selectedIndex', $links.index(this) );
});
```
Here's the fiddle: <http://jsfiddle.net/Bxz24/>
Just keep in mind that this will require your links to be in the exact same order as the `select` options.
|
If the order is always the same you can do this
```
$("a").click(function(){
$("#recipient").prop("selectedIndex", $(this).index());
});
```
Otherwise do this by defining the index on the link:
```
<a href="#contact" data-index="0">Send a mail to mother!</a>
<a href="#contact" data-index="1">Send a mail to father!</a>
<a href="#contact" data-index="2">Send a mail to sister!</a>
<a href="#contact" data-index="3">Send a mail to brother!</a>
<form id="contact">
<select id="recipient">
<option value="mother@mail.com">Mother</option>
<option value="father@mail.com">Father</option>
<option value="sister@mail.com">Sister</option>
<option value="brother@mail.com">Brother</option>
</select>
</form>
$("a").click(function(){
$("#recipient").prop("selectedIndex", $(this).data("index"));
});
```
|
14,737,499
|
I followed the top solution to Flattening an irregular list of lists in python ([Flatten (an irregular) list of lists](https://stackoverflow.com/questions/2158395/flatten-an-irregular-list-of-lists-in-python)) using the following code:
```
def flatten(l):
    for el in l:
        if isinstance(el, collections.Iterable) and not isinstance(el, basestring):
            for sub in flatten(el):
                yield sub
        else:
            yield el
L = [[[1, 2, 3], [4, 5]], 6]
L=flatten(L)
print L
```
And got the following output:
"generator object flatten at 0x100494460"
I'm not sure which packages I need to import or syntax I need to change to get this to work for me.
|
2013/02/06
|
[
"https://Stackoverflow.com/questions/14737499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1009215/"
] |
If the order is always the same you can do this
```
$("a").click(function(){
$("#recipient").prop("selectedIndex", $(this).index());
});
```
Otherwise do this by defining the index on the link:
```
<a href="#contact" data-index="0">Send a mail to mother!</a>
<a href="#contact" data-index="1">Send a mail to father!</a>
<a href="#contact" data-index="2">Send a mail to sister!</a>
<a href="#contact" data-index="3">Send a mail to brother!</a>
<form id="contact">
<select id="recipient">
<option value="mother@mail.com">Mother</option>
<option value="father@mail.com">Father</option>
<option value="sister@mail.com">Sister</option>
<option value="brother@mail.com">Brother</option>
</select>
</form>
$("a").click(function(){
$("#recipient").prop("selectedIndex", $(this).data("index"));
});
```
|
Also [there is another way](http://fiddle.jshell.net/FWgan/) using `label` option, which is sure that works also without html5 `data`.
|
14,737,499
|
I followed the top solution to Flattening an irregular list of lists in python ([Flatten (an irregular) list of lists](https://stackoverflow.com/questions/2158395/flatten-an-irregular-list-of-lists-in-python)) using the following code:
```
def flatten(l):
    for el in l:
        if isinstance(el, collections.Iterable) and not isinstance(el, basestring):
            for sub in flatten(el):
                yield sub
        else:
            yield el
L = [[[1, 2, 3], [4, 5]], 6]
L=flatten(L)
print L
```
And got the following output:
"generator object flatten at 0x100494460"
I'm not sure which packages I need to import or syntax I need to change to get this to work for me.
|
2013/02/06
|
[
"https://Stackoverflow.com/questions/14737499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1009215/"
] |
Add a `data-select` attribute to those links:
```
<a href="#contact" data-select="mother@mail.com">Send a mail to mother!</a>
<a href="#contact" data-select="father@mail.com">Send a mail to father!</a>
<a href="#contact" data-select="sister@mail.com">Send a mail to sister!</a>
<a href="#contact" data-select="brother@mail.com">Send a mail to brother!</a>
```
Then use the value of the clicked link to set the value of the `select` element:
```js
var $select = $('#recipient');

$('a[href="#contact"]').click(function () {
    $select.val( $(this).data('select') );
});
```
Here's the fiddle: <http://jsfiddle.net/Dw6Yv/>
---
If you don't want to add those `data-select` attributes to your markup, you can use this:
```js
var $select = $('#recipient'),
    $links = $('a[href="#contact"]');

$links.click(function () {
    $select.prop('selectedIndex', $links.index(this) );
});
```
Here's the fiddle: <http://jsfiddle.net/Bxz24/>
Just keep in mind that this will require your links to be in the exact same order as the `select` options.
|
Also [there is another way](http://fiddle.jshell.net/FWgan/) using `label` option, which is sure that works also without html5 `data`.
|
50,925,488
|
So I have some problems with my dictionaries in Python. For example, I have dictionaries like below:
```
d1 = {123456:xyz, 892019:kjl, 102930491:{[plm,kop]}
d2= {xyz:987, kjl: 0902, plm: 019240, kop:09829}
```
And I would like to have a nested dictionary that looks something like this:
```
d={123456 :{xyz:987}, 892019:{kjl:0902}, 102930491:{plm:019240,kop:09829}}
```
Is this possible? I was searching for nested dictionaries but nothing works for me.
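For what it's worth, a sketch of one way to build the nested dictionary, treating the values as strings and the third entry as a list (the snippet above is not valid Python as written):
```
d1 = {123456: 'xyz', 892019: 'kjl', 102930491: ['plm', 'kop']}
d2 = {'xyz': 987, 'kjl': '0902', 'plm': '019240', 'kop': '09829'}

d = {}
for key, val in d1.items():
    names = val if isinstance(val, list) else [val]
    d[key] = {name: d2[name] for name in names}

print(d)
# {123456: {'xyz': 987}, 892019: {'kjl': '0902'}, 102930491: {'plm': '019240', 'kop': '09829'}}
```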
|
2018/06/19
|
[
"https://Stackoverflow.com/questions/50925488",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9961204/"
] |
If you have two DNS names that can be used as a active/active or active/passive for your API endpoints, you can add them to a Traffic Manager profile and set the routing method you want to use. As indicated in an earlier answer, use only the DNS name and not the protocol identifier (http/https) when you add an endpoint to a Traffic manager profile
|
Traffic manager only wants the DNS name (FQDN) for external endpoints not the protocol. So drop the http: or https: from your API management address and it will accept that as an external endpoint.
Or is your problem not with adding the endpoint, but with the health endpoint monitoring? That can happen as the endpoint for the API Management gateway will return a 404 by default as it does not have a publicly exposed default page.
|
45,199,083
|
I am new to python and I had no difficulty with one example of learning try and except blocks:
```
try:
    2 + "s"
except TypeError:
    print "There was a type error!"
```
Which outputs what one would expect:
```
There was a type error!
```
However, when trying to catch a syntax error like this:
```
try:
    print 'Hello
except SyntaxError:
    print "There was a syntax error!"
finally:
    print "Finally, this was printed"
```
I would ironically get the EOL syntax error. I was trying this a few times in the jupyter notebook environment and only when I moved over to a terminal in VIM did it make sense to me that the compiler was interpreting the except and finally code blocks as the rest of the incomplete string.
My question is how would one go about syntax error handling in this format? Or is there a more efficient (pythonic?) way of going about this?
It might not be something that one really comes across but it would be interesting to know if there was a clean workaround.
Thanks!
|
2017/07/19
|
[
"https://Stackoverflow.com/questions/45199083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5941556/"
] |
The reason you can not use a try/except block to capture SyntaxErrors is that these errors happen before your code executes.
High level steps of Python code execution
1. Python interpreter translates the Python code into executable instructions. (Syntax Error Raised)
2. Instructions are executed. (Try/Except block executed)
Since the error happens during step 1 you can not use a try/except to intercept them since it is only executed in step 2.
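A small illustration: if the parsing itself happens at runtime, e.g. via `compile()` (the same applies to `exec()` and `eval()`), the `SyntaxError` is raised during step 2 and can be caught:
```
code = "print 'Hello"  # the unterminated string from the question
try:
    compile(code, "<string>", "exec")  # parsing happens here, at runtime
except SyntaxError:
    print("There was a syntax error!")
```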
|
The answer is a piece of cake:
The `SyntaxError` nullifies the `except` and `finally` statements because, to the parser, they are part of the unterminated string.
|
44,503,913
|
This is literally day 1 of python for me. I've coded in VBA, Java, and Swift in the past, but I am having a particularly hard time following guides online for coding a pdf scraper. Since I have no idea what I am doing, I keep running into a wall every time I want to test out some of the code I've found online.
**Basic Info**
* Windows 7 64bit
* python 3.6.0
* Spyder3
* I have many of the pdf related code packages (PyPDF2, pdfminer, pdfquery, pdfwrw, etc)
**Goals**
To create something in python that allows me to convert PDFs from a folder into an excel file (ideallY) OR a text file (from which I will use VBA to convert).
**Issues**
Every time I try some sample code from guides I've found online, I always run into syntax errors on the lines where I am calling the pdf that I want to test the code on. Some guide links and error examples below. Should I be putting my test.pdf into the same folder as the .py file?
* [How to scrape tables in thousands of PDF files?](https://stackoverflow.com/questions/25125178/how-to-scrape-tables-in-thousands-of-pdf-files)
+ I got an invalid syntax error due to "for" on the last line
* PDFMiner guide ([Link](https://stackoverflow.com/questions/36902496/python-pdfminer-pdf-to-csv))
```html
runfile('C:/Users/U587208/Desktop/pdffolder/pdfminer.py', wdir='C:/Users/U587208/Desktop/pdffolder')
File "C:/Users/U587208/Desktop/pdffolder/pdfminer.py", line 79
print pdf_to_csv('test.pdf', separator, threshold)
^
SyntaxError: invalid syntax
```
|
2017/06/12
|
[
"https://Stackoverflow.com/questions/44503913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8149767/"
] |
It seems that the tutorials you are following make use of Python 2. There are usually few noticeable differences; the biggest is that in Python 3, print became a function, so
```
print()
```
I would recommend either changing your version of Python or finding a tutorial for Python 3. Hope this helps
|
Here
[Pdfminer python 3.5](https://stackoverflow.com/questions/39854841/pdfminer-python-3-5/40877143#40877143) is an example of how to extract information from a PDF.
But it does not solve the problem with the tables you want to export to Excel. Commercial products are probably better at doing that...
|
44,503,913
|
This is literally day 1 of python for me. I've coded in VBA, Java, and Swift in the past, but I am having a particularly hard time following guides online for coding a pdf scraper. Since I have no idea what I am doing, I keep running into a wall every time I want to test out some of the code I've found online.
**Basic Info**
* Windows 7 64bit
* python 3.6.0
* Spyder3
* I have many of the pdf related code packages (PyPDF2, pdfminer, pdfquery, pdfwrw, etc)
**Goals**
To create something in python that allows me to convert PDFs from a folder into an excel file (ideallY) OR a text file (from which I will use VBA to convert).
**Issues**
Every time I try some sample code from guides I've found online, I always run into syntax errors on the lines where I am calling the pdf that I want to test the code on. Some guide links and error examples below. Should I be putting my test.pdf into the same folder as the .py file?
* [How to scrape tables in thousands of PDF files?](https://stackoverflow.com/questions/25125178/how-to-scrape-tables-in-thousands-of-pdf-files)
+ I got an invalid syntax error due to "for" on the last line
* PDFMiner guide ([Link](https://stackoverflow.com/questions/36902496/python-pdfminer-pdf-to-csv))
```html
runfile('C:/Users/U587208/Desktop/pdffolder/pdfminer.py', wdir='C:/Users/U587208/Desktop/pdffolder')
File "C:/Users/U587208/Desktop/pdffolder/pdfminer.py", line 79
print pdf_to_csv('test.pdf', separator, threshold)
^
SyntaxError: invalid syntax
```
|
2017/06/12
|
[
"https://Stackoverflow.com/questions/44503913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8149767/"
] |
It seems that the tutorials you are following make use of Python 2. There are usually few noticeable differences; the biggest is that in Python 3, print became a function, so
```
print()
```
I would recommend either changing your version of Python or finding a tutorial for Python 3. Hope this helps.
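To illustrate the difference with a generic sketch:

```
# Python 2 syntax -- a SyntaxError in Python 3:
# print 'hello'

# Python 3 syntax -- print is a function:
print('hello')
```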
|
I am trying to do this exact same thing! I have been able to convert my PDF to text, however the formatting is extremely random and messy, and I need the tables to stay intact to be able to write them into Excel data sheets. I am now attempting to convert to XML to see if it will be easier to extract from. If I get anywhere on this I will let you know :)
btw, use python 2 if you're going to use pdfminer. Here's some help with pdfminer <https://media.readthedocs.org/pdf/pdfminer-docs/latest/pdfminer-docs.pdf>
|
61,365,111
|
Friends,
Yesterday I used the below piece of Python code to successfully retrieve some comments on YouTube videos:
```
!pip install --upgrade google-api-python-client
import os
import googleapiclient.discovery
DEVELOPER_KEY = "my_key"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
youtube = googleapiclient.discovery.build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, developerKey=DEVELOPER_KEY)
youtube
```
It seems that the build function is suddenly not working. I have even refreshed the API, but in Google Colab I keep receiving the following error message:
```
UnknownApiNameOrVersion Traceback (most recent call last)
<ipython-input-21-064a9ae417b9> in <module>()
13
14
---> 15 youtube = googleapiclient.discovery.build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, developerKey=DEVELOPER_KEY)
16 youtube
17
1 frames
/usr/local/lib/python3.6/dist-packages/googleapiclient/discovery.py in build(serviceName, version, http, discoveryServiceUrl, developerKey, model, requestBuilder, credentials, cache_discovery, cache, client_options)
241 raise e
242
--> 243 raise UnknownApiNameOrVersion("name: %s version: %s" % (serviceName, version))
244
245
UnknownApiNameOrVersion: name: youtube version: V3
```
If anyone could help. I'm using this type of authentication because I don't know how to put the credentials file in Google Drive and open it in Colab. But it worked yesterday:
[Results for yesterday´s run](https://i.stack.imgur.com/DKZOg.png)
Thank you very much in advance. And sorry for anything, I'm new in the community.
Regards
|
2020/04/22
|
[
"https://Stackoverflow.com/questions/61365111",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13380927/"
] |
The problem is on the server side as discussed [here](https://github.com/googleapis/google-api-python-client/issues/882). Until the server problem is fixed, this solution may help (as suggested by [@busunkim96](https://github.com/googleapis/google-api-python-client/issues/882#issuecomment-618514530)):
First, download this json file: <https://www.googleapis.com/discovery/v1/apis/youtube/v3/rest>
Then:
```py
import json
from googleapiclient import discovery
# Path to the json file you downloaded:
path_json = '/path/to/file/rest'
with open(path_json) as f:
service = json.load(f)
# Replace with your actual API key:
api_key = 'your API key'
yt = discovery.build_from_document(service,
developerKey=api_key)
# Make a request to see whether this works:
request = yt.search().list(part='snippet',
channelId='UCYO_jab_esuFRV4b17AJtAw',
publishedAfter='2020-02-01T00:00:00.000Z',
publishedBefore='2020-04-23T00:00:00.000Z',
order='date',
type='video',
maxResults=50)
response = request.execute()
```
|
I was able to resolve this issue by passing `static_discovery=False` to the build command.
Examples:
**Previous Code**
`self.youtube = googleapiclient.discovery.build(API_SERVICE_NAME, API_VERSION, credentials=creds`
**New Code**
`self.youtube = googleapiclient.discovery.build(API_SERVICE_NAME, API_VERSION, credentials=creds, static_discovery=False)`
For some reason this issue only arose when I compiled my program using GitHub Actions.
|
47,043,606
|
I created and saved a simple NN in TensorFlow:
```
import tensorflow as tf
import numpy as np
x = tf.placeholder(tf.float32, [1, 1],name='input_placeholder')
y = tf.placeholder(tf.float32, [1, 1],name='input_placeholder')
W = tf.get_variable('W', [1, 1])
layer = tf.matmul(x, W, name='layer')
loss = tf.subtract(y,layer)
train_step = tf.train.AdagradOptimizer(0.1).minimize(loss, name='train_step')
all_saver = tf.train.Saver()
sess = tf.Session()
sess.run(tf.global_variables_initializer())
x_test = np.zeros((1, 1))
y_test = np.zeros((1, 1))
some_output = sess.run([train_step],feed_dict = {x:x_test,y:y_test})
save_path = r'C:\Temp\tf_exp\save_folder\test'
all_saver.save(sess,save_path)
```
Then I took all files in `C:\Temp\tf_exp\save_folder\` and moved them (exactly moved not copied) to `C:\Temp\tf_exp\restore_folder`. The files that I moved are:
```
checkpoint
test.data-00000-of-00001
test.index
test.meta
```
Then I tried to restore nn from new location:
```
meta_path = r'C:\Temp\tf_exp\restore_folder\test.meta'
checkpoint_path = r'C:\Temp\tf_exp\restore_folder\\'
print(checkpoint_path)
new_all_saver = tf.train.import_meta_graph(meta_path)
sess=tf.Session()
new_all_saver.restore(sess, tf.train.latest_checkpoint(checkpoint_path))
graph = tf.get_default_graph()
layer= graph.get_tensor_by_name('layer:0')
x=graph.get_tensor_by_name('input_placeholder:0')
```
Here is error that restore code generated:
```
C:\Temp\tf_exp\restore_folder\\
ERROR:tensorflow:Couldn't match files for checkpoint C:\Temp\tf_exp\save_folder\test
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-9af4e683fc4b> in <module>()
5 new_all_saver = tf.train.import_meta_graph(meta_path)
6 sess=tf.Session()
----> 7 new_all_saver.restore(sess, tf.train.latest_checkpoint(checkpoint_path))
8 graph = tf.get_default_graph()
9 layer= graph.get_tensor_by_name('layer:0')
~\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\training\saver.py in restore(self, sess, save_path)
1555 return
1556 if save_path is None:
-> 1557 raise ValueError("Can't load save_path when it is None.")
1558 logging.info("Restoring parameters from %s", save_path)
1559 sess.run(self.saver_def.restore_op_name,
ValueError: Can't load save_path when it is None.
```
How can I avoid it? What is the proper way of moving files around?
**Update:**
As I am searching for the answer, it looks like using relative paths is the way to go. But I am not sure how to use a relative path. Should I change Python's current working directory to where I save the model data?
|
2017/10/31
|
[
"https://Stackoverflow.com/questions/47043606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1700890/"
] |
Just add `save_relative_paths=True` when creating `tf.train.Saver()`:
```
# original code: all_saver = tf.train.Saver()
all_saver = tf.train.Saver(save_relative_paths=True)
```
Please refer to [official doc](https://www.tensorflow.org/api_docs/python/tf/train/Saver) for more details.
|
You could try to restore by:
```
with tf.Session() as sess:
saver = tf.train.import_meta_graph(/path/to/test.meta)
saver.restore(sess, "path/to/checkpoints/test")
```
In this case, because you named the checkpoint "test", you got 3 files:
```
test.data-00000-of-00001
test.index
test.meta
```
Therefore, when you restore, you need to pass the path to the checkpoint folder + "**/test**". The system will automatically load the corresponding data and index files.
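For the paths in the question, the restore call would presumably look like this (a sketch using the question's own variable names, not tested against the asker's setup):

```
new_all_saver.restore(sess, r'C:\Temp\tf_exp\restore_folder\test')
```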
|
47,043,606
|
I created and saved a simple NN in TensorFlow:
```
import tensorflow as tf
import numpy as np
x = tf.placeholder(tf.float32, [1, 1],name='input_placeholder')
y = tf.placeholder(tf.float32, [1, 1],name='input_placeholder')
W = tf.get_variable('W', [1, 1])
layer = tf.matmul(x, W, name='layer')
loss = tf.subtract(y,layer)
train_step = tf.train.AdagradOptimizer(0.1).minimize(loss, name='train_step')
all_saver = tf.train.Saver()
sess = tf.Session()
sess.run(tf.global_variables_initializer())
x_test = np.zeros((1, 1))
y_test = np.zeros((1, 1))
some_output = sess.run([train_step],feed_dict = {x:x_test,y:y_test})
save_path = r'C:\Temp\tf_exp\save_folder\test'
all_saver.save(sess,save_path)
```
Then I took all files in `C:\Temp\tf_exp\save_folder\` and moved them (exactly moved not copied) to `C:\Temp\tf_exp\restore_folder`. The files that I moved are:
```
checkpoint
test.data-00000-of-00001
test.index
test.meta
```
Then I tried to restore nn from new location:
```
meta_path = r'C:\Temp\tf_exp\restore_folder\test.meta'
checkpoint_path = r'C:\Temp\tf_exp\restore_folder\\'
print(checkpoint_path)
new_all_saver = tf.train.import_meta_graph(meta_path)
sess=tf.Session()
new_all_saver.restore(sess, tf.train.latest_checkpoint(checkpoint_path))
graph = tf.get_default_graph()
layer= graph.get_tensor_by_name('layer:0')
x=graph.get_tensor_by_name('input_placeholder:0')
```
Here is error that restore code generated:
```
C:\Temp\tf_exp\restore_folder\\
ERROR:tensorflow:Couldn't match files for checkpoint C:\Temp\tf_exp\save_folder\test
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-9af4e683fc4b> in <module>()
5 new_all_saver = tf.train.import_meta_graph(meta_path)
6 sess=tf.Session()
----> 7 new_all_saver.restore(sess, tf.train.latest_checkpoint(checkpoint_path))
8 graph = tf.get_default_graph()
9 layer= graph.get_tensor_by_name('layer:0')
~\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\training\saver.py in restore(self, sess, save_path)
1555 return
1556 if save_path is None:
-> 1557 raise ValueError("Can't load save_path when it is None.")
1558 logging.info("Restoring parameters from %s", save_path)
1559 sess.run(self.saver_def.restore_op_name,
ValueError: Can't load save_path when it is None.
```
How can I avoid it? What is the proper way of moving files around?
**Update:**
As I am searching for the answer, it looks like using relative paths is the way to go. But I am not sure how to use a relative path. Should I change Python's current working directory to where I save the model data?
|
2017/10/31
|
[
"https://Stackoverflow.com/questions/47043606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1700890/"
] |
Just add `save_relative_paths=True` when creating `tf.train.Saver()`:
```
# original code: all_saver = tf.train.Saver()
all_saver = tf.train.Saver(save_relative_paths=True)
```
Please refer to [official doc](https://www.tensorflow.org/api_docs/python/tf/train/Saver) for more details.
|
You could try to open the checkpoint file in Notepad and edit it:
```
model_checkpoint_path: "Name-of-saver"
all_model_checkpoint_paths: "Name-of-saver"
```
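Since the checkpoint in the question is named `test`, the edited file would presumably read:

```
model_checkpoint_path: "test"
all_model_checkpoint_paths: "test"
```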
|
45,385,832
|
I downloaded the nitroshare.tar.gz file as I want to install it on my Kali Linux.
I followed the instructions on their GitHub page to install it and downloaded all the packages that are required for the installation.
When I use the cmake command it shows me this error:
```
Unknown cmake command qt5_wrap_ui
```
Here is the `CMakeLists.txt` file:
```
cmake_minimum_required(VERSION 3.7)
configure_file(config.h.in "${CMAKE_CURRENT_BINARY_DIR}/config.h")
set(SRC
application/aboutdialog.cpp
application/application.cpp
application/splashdialog.cpp
bundle/bundle.cpp
device/device.cpp
device/devicedialog.cpp
device/devicelistener.cpp
device/devicemodel.cpp
icon/icon.cpp
icon/trayicon.cpp
settings/settings.cpp
settings/settingsdialog.cpp
transfer/transfer.cpp
transfer/transfermodel.cpp
transfer/transferreceiver.cpp
transfer/transfersender.cpp
transfer/transferserver.cpp
transfer/transferwindow.cpp
util/json.cpp
util/platform.cpp
main.cpp
)
if(WIN32)
set(SRC ${SRC} data/resource.rc)
endif()
if(APPLE)
set(SRC ${SRC}
data/icon/nitroshare.icns
transfer/transferwindow.mm
)
set_source_files_properties("data/icon/nitroshare.icns" PROPERTIES
MACOSX_PACKAGE_LOCATION Resources
)
endif()
if(QHttpEngine_FOUND)
set(SRC ${SRC}
api/apihandler.cpp
api/apiserver.cpp
)
endif()
if(APPINDICATOR_FOUND)
set(SRC ${SRC}
icon/indicatoricon.cpp
)
endif()
qt5_wrap_ui(UI
application/aboutdialog.ui
application/splashdialog.ui
device/devicedialog.ui
settings/settingsdialog.ui
transfer/transferwindow.ui
)
# Update the TS files from the source directory
file(GLOB TS data/ts/*.ts)
add_custom_target(ts
COMMAND "${Qt5_LUPDATE_EXECUTABLE}"
-silent
-locations relative
"${CMAKE_CURRENT_SOURCE_DIR}"
-ts ${TS}
)
# Create a directory for the compiled QM files
file(MAKE_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/data/qm")
# Create commands for each of the translation files
set(QM_XML)
set(QM)
foreach(_ts_path ${TS})
get_filename_component(_ts_name "${_ts_path}" NAME)
get_filename_component(_ts_name_we "${_ts_path}" NAME_WE)
set(_qm_name "${_ts_name_we}.qm")
set(_qm_path "${CMAKE_CURRENT_BINARY_DIR}/data/qm/${_qm_name}")
set(QM_XML "${QM_XML}<file>qm/${_qm_name}</file>")
list(APPEND QM "${_qm_path}")
add_custom_command(OUTPUT "${_qm_path}"
COMMAND "${Qt5_LRELEASE_EXECUTABLE}"
-compress
-removeidentical
-silent
"${_ts_path}"
-qm "${_qm_path}"
COMMENT "Generating ${_ts_name}"
)
endforeach()
# Create a target for compiling all of the translation files
add_custom_target(qm DEPENDS ${QM})
# Configure the i18n resource file
set(QRC_QM "${CMAKE_CURRENT_BINARY_DIR}/data/resource_qm.qrc")
configure_file(data/resource_qm.qrc.in "${QRC_QM}")
qt5_add_resources(QRC
data/resource.qrc
"${QRC_QM}"
)
add_executable(nitroshare WIN32 MACOSX_BUNDLE ${SRC} ${UI} ${QRC})
target_compile_features(nitroshare PRIVATE
cxx_lambdas
cxx_nullptr
cxx_strong_enums
cxx_uniform_initialization
)
# In order to support Retina, a special tag is required in Info.plist.
if(APPLE)
set_target_properties(nitroshare PROPERTIES
MACOSX_BUNDLE_INFO_PLIST "${CMAKE_CURRENT_SOURCE_DIR}/Info.plist.in"
MACOSX_BUNDLE_ICON_FILE "nitroshare.icns"
MACOSX_BUNDLE_GUI_IDENTIFIER "com.NathanOsman.NitroShare"
MACOSX_BUNDLE_BUNDLE_NAME "NitroShare"
)
target_link_libraries(nitroshare "-framework ApplicationServices")
endif()
target_link_libraries(nitroshare Qt5::Widgets Qt5::Network Qt5::Svg)
if(Qt5WinExtras_FOUND)
target_link_libraries(nitroshare Qt5::WinExtras)
endif()
if(Qt5MacExtras_FOUND)
target_link_libraries(nitroshare Qt5::MacExtras)
endif()
if(QHttpEngine_FOUND)
target_link_libraries(nitroshare QHttpEngine)
endif()
if(APPINDICATOR_FOUND)
target_include_directories(nitroshare PRIVATE ${APPINDICATOR_INCLUDE_DIRS})
target_link_libraries(nitroshare ${APPINDICATOR_LIBRARIES})
endif()
include(GNUInstallDirs)
install(TARGETS nitroshare
RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR}
BUNDLE DESTINATION .
)
if(WIN32)
# If windeployqt is available, use it immediately after the build completes to
# ensure that the dependencies are available.
include(DeployQt)
if(WINDEPLOYQT_EXECUTABLE)
windeployqt(nitroshare)
endif()
# If QHttpEngine was used, include it in the installer
if(QHttpEngine_FOUND)
add_custom_command(TARGET nitroshare POST_BUILD
COMMAND "${CMAKE_COMMAND}" -E
copy_if_different "$<TARGET_FILE:QHttpEngine>" "$<TARGET_FILE_DIR:nitroshare>"
COMMENT "Copying QHttpEngine..."
)
endif()
# If Inno Setup is available, provide a target for building an EXE installer.
find_package(InnoSetup)
if(INNOSETUP_EXECUTABLE)
configure_file(dist/setup.iss.in "${CMAKE_CURRENT_BINARY_DIR}/setup.iss")
add_custom_target(exe
COMMAND "${INNOSETUP_EXECUTABLE}"
/Q
/DTARGET_FILE_NAME="$<TARGET_FILE_NAME:nitroshare>"
"${CMAKE_CURRENT_BINARY_DIR}/setup.iss"
DEPENDS nitroshare
COMMENT "Building installer..."
)
endif()
endif()
if(APPLE)
# If QHttpEngine was used, copy the dynlib over and correct its RPATH
if(QHttpEngine_FOUND)
add_custom_command(TARGET nitroshare POST_BUILD
COMMAND "${CMAKE_COMMAND}" -E
make_directory "$<TARGET_FILE_DIR:nitroshare>/../Frameworks"
COMMAND "${CMAKE_COMMAND}" -E
copy_if_different
"$<TARGET_FILE:QHttpEngine>"
"$<TARGET_FILE_DIR:nitroshare>/../Frameworks/$<TARGET_SONAME_FILE_NAME:QHttpEngine>"
COMMAND install_name_tool
-change
"$<TARGET_SONAME_FILE_NAME:QHttpEngine>"
"@rpath/$<TARGET_SONAME_FILE_NAME:QHttpEngine>"
"$<TARGET_FILE:nitroshare>"
COMMENT "Copying QHttpEngine and correcting RPATH..."
)
endif()
# If macdeployqt is available, use it immediately after the build completes to
# copy the Qt frameworks into the application bundle.
include(DeployQt)
if(MACDEPLOYQT_EXECUTABLE)
macdeployqt(nitroshare)
endif()
# Create a target for building a DMG that contains the application bundle and a
# symlink to the /Applications folder.
set(sym "${CMAKE_BINARY_DIR}/out/Applications")
set(dmg "${CMAKE_BINARY_DIR}/nitroshare-${PROJECT_VERSION}-osx.dmg")
add_custom_target(dmg
COMMAND rm -f "${sym}" "${dmg}"
COMMAND ln -s /Applications "${sym}"
COMMAND hdiutil create
-srcfolder "${CMAKE_BINARY_DIR}/out"
-volname "${PROJECT_NAME}"
-fs HFS+
-size 30m
"${dmg}"
DEPENDS nitroshare
COMMENT "Building disk image..."
)
endif()
# On Linux, include the icons, manpage, .desktop file, and file manager extensions when installing.
if(${CMAKE_SYSTEM_NAME} MATCHES "Linux")
install(DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/dist/icons"
DESTINATION "${CMAKE_INSTALL_DATAROOTDIR}"
)
install(FILES "${CMAKE_CURRENT_SOURCE_DIR}/dist/nitroshare.1"
DESTINATION "${CMAKE_INSTALL_MANDIR}/man1"
)
install(FILES "${CMAKE_CURRENT_SOURCE_DIR}/dist/nitroshare.desktop"
DESTINATION "${CMAKE_INSTALL_DATAROOTDIR}/applications"
)
# Configure and install Python extensions for Nautilus-derived file managers
foreach(_file_manager Nautilus Nemo Caja)
string(TOLOWER ${_file_manager} _file_manager_lower)
set(_py_filename "${CMAKE_CURRENT_BINARY_DIR}/dist/nitroshare_${_file_manager_lower}.py")
configure_file("${CMAKE_CURRENT_SOURCE_DIR}/dist/nitroshare.py.in" "${_py_filename}")
install(FILES "${_py_filename}"
DESTINATION "${CMAKE_INSTALL_DATAROOTDIR}/${_file_manager_lower}-python/extensions"
RENAME nitroshare.py
)
endforeach()
endif()
```
|
2017/07/29
|
[
"https://Stackoverflow.com/questions/45385832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8282220/"
] |
CMake doesn't know the function `qt5_wrap_ui` because you did not import Qt5 or any of the functions it defines.
Before calling `qt5_wrap_ui`, add this:
```
find_package(Qt5 COMPONENTS Widgets REQUIRED)
```
|
I used `find_package(Qt5Widgets)` in my CMakeLists.txt file.
I found this on the Qt5 documentation pages.
|
16,300,464
|
I have two classes in a python source file.
```
class A:
def func(self):
pass
class B:
def func(self):
pass
def run(self):
self.func()
```
When my cursor is on class B's `'self.func()'` line, if I press `CTRL`+`]`, it goes to class A's func method. But instead I would like it to go to B's func method.
|
2013/04/30
|
[
"https://Stackoverflow.com/questions/16300464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/323000/"
] |
The `<C-]>` command jumps to the *first* tag match, but it also takes a `[count]` to jump to another one.
Alternatively, you can use the `g<C-]>` command, which (like the `:tjump` Ex command) will list all matches and query you for where you want to jump to (when there are multiple matches).
|
Have a look at [Jedi-Vim](https://github.com/davidhalter/jedi-vim). It defines a new “go to definition” command, that will properly handle those situations.
|
3,040,716
|
I have a list of regexes in Python, and a string. Is there an elegant way to check if at least one regex in the list matches the string? By elegant, I mean something better than simply looping through all of the regexes, checking each against the string, and stopping if a match is found.
Basically, I had this code:
```
list = ['something','another','thing','hello']
string = 'hi'
if string in list:
pass # do something
else:
pass # do something else
```
Now I would like to have some regular expressions in the list, rather than just strings, and I am wondering if there is an elegant solution to check for a match to replace `if string in list:`.
Thanks in advance.
|
2010/06/14
|
[
"https://Stackoverflow.com/questions/3040716",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/363078/"
] |
```
import re
regexes = [
"foo.*",
"bar.*",
"qu*x"
]
# Make a regex that matches if any of our regexes match.
combined = "(" + ")|(".join(regexes) + ")"
if re.match(combined, mystring):
print "Some regex matched!"
```
|
```
import re
regexes = [
# your regexes here
re.compile('hi'),
# re.compile(...),
# re.compile(...),
# re.compile(...),
]
mystring = 'hi'
if any(regex.match(mystring) for regex in regexes):
print 'Some regex matched!'
```
|
3,040,716
|
I have a list of regexes in Python, and a string. Is there an elegant way to check if at least one regex in the list matches the string? By elegant, I mean something better than simply looping through all of the regexes, checking each against the string, and stopping if a match is found.
Basically, I had this code:
```
list = ['something','another','thing','hello']
string = 'hi'
if string in list:
pass # do something
else:
pass # do something else
```
Now I would like to have some regular expressions in the list, rather than just strings, and I am wondering if there is an elegant solution to check for a match to replace `if string in list:`.
Thanks in advance.
|
2010/06/14
|
[
"https://Stackoverflow.com/questions/3040716",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/363078/"
] |
```
import re
regexes = [
# your regexes here
re.compile('hi'),
# re.compile(...),
# re.compile(...),
# re.compile(...),
]
mystring = 'hi'
if any(regex.match(mystring) for regex in regexes):
print 'Some regex matched!'
```
|
A mix of both Ned's and Nosklo's answers. Guaranteed to work for any length of list... hope you enjoy:
```
import re
raw_lst = ["foo.*",
"bar.*",
"(Spam.{0,3}){1,3}"]
reg_lst = []
for raw_regex in raw_lst:
reg_lst.append(re.compile(raw_regex))
mystring = "Spam, Spam, Spam!"
if any(compiled_reg.match(mystring) for compiled_reg in reg_lst):
print("something matched")
```
|
3,040,716
|
I have a list of regexes in Python, and a string. Is there an elegant way to check if at least one regex in the list matches the string? By elegant, I mean something better than simply looping through all of the regexes, checking each against the string, and stopping if a match is found.
Basically, I had this code:
```
list = ['something','another','thing','hello']
string = 'hi'
if string in list:
pass # do something
else:
pass # do something else
```
Now I would like to have some regular expressions in the list, rather than just strings, and I am wondering if there is an elegant solution to check for a match to replace `if string in list:`.
Thanks in advance.
|
2010/06/14
|
[
"https://Stackoverflow.com/questions/3040716",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/363078/"
] |
```
import re
regexes = [
# your regexes here
re.compile('hi'),
# re.compile(...),
# re.compile(...),
# re.compile(...),
]
mystring = 'hi'
if any(regex.match(mystring) for regex in regexes):
print 'Some regex matched!'
```
|
If you loop over the regexes one by one, the time complexity is O(n) in the number of patterns. A better approach is to combine those regexes into a regex trie.
|
3,040,716
|
I have a list of regexes in Python, and a string. Is there an elegant way to check if at least one regex in the list matches the string? By elegant, I mean something better than simply looping through all of the regexes, checking each against the string, and stopping if a match is found.
Basically, I had this code:
```
list = ['something','another','thing','hello']
string = 'hi'
if string in list:
pass # do something
else:
pass # do something else
```
Now I would like to have some regular expressions in the list, rather than just strings, and I am wondering if there is an elegant solution to check for a match to replace `if string in list:`.
Thanks in advance.
|
2010/06/14
|
[
"https://Stackoverflow.com/questions/3040716",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/363078/"
] |
```
import re
regexes = [
# your regexes here
re.compile('hi'),
# re.compile(...),
# re.compile(...),
# re.compile(...),
]
mystring = 'hi'
if any(regex.match(mystring) for regex in regexes):
print 'Some regex matched!'
```
|
Here's what I went for based on the other answers:
```
raw_list = ["some_regex","some_regex","some_regex","some_regex"]
reg_list = map(re.compile, raw_list)
mystring = "some_string"
if any(regex.match(mystring) for regex in reg_list):
print("matched")
```
|
3,040,716
|
I have a list of regexes in Python, and a string. Is there an elegant way to check if at least one regex in the list matches the string? By elegant, I mean something better than simply looping through all of the regexes, checking each against the string, and stopping if a match is found.
Basically, I had this code:
```
list = ['something','another','thing','hello']
string = 'hi'
if string in list:
pass # do something
else:
pass # do something else
```
Now I would like to have some regular expressions in the list, rather than just strings, and I am wondering if there is an elegant solution to check for a match to replace `if string in list:`.
Thanks in advance.
|
2010/06/14
|
[
"https://Stackoverflow.com/questions/3040716",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/363078/"
] |
```
import re
regexes = [
"foo.*",
"bar.*",
"qu*x"
]
# Make a regex that matches if any of our regexes match.
combined = "(" + ")|(".join(regexes) + ")"
if re.match(combined, mystring):
print "Some regex matched!"
```
|
A mix of both Ned's and Nosklo's answers. Guaranteed to work for any length of list... hope you enjoy:
```
import re
raw_lst = ["foo.*",
"bar.*",
"(Spam.{0,3}){1,3}"]
reg_lst = []
for raw_regex in raw_lst:
reg_lst.append(re.compile(raw_regex))
mystring = "Spam, Spam, Spam!"
if any(compiled_reg.match(mystring) for compiled_reg in reg_lst):
print("something matched")
```
|
3,040,716
|
I have a list of regexes in Python, and a string. Is there an elegant way to check if at least one regex in the list matches the string? By elegant, I mean something better than simply looping through all of the regexes, checking each against the string, and stopping if a match is found.
Basically, I had this code:
```
list = ['something','another','thing','hello']
string = 'hi'
if string in list:
pass # do something
else:
pass # do something else
```
Now I would like to have some regular expressions in the list, rather than just strings, and I am wondering if there is an elegant solution to check for a match to replace `if string in list:`.
Thanks in advance.
|
2010/06/14
|
[
"https://Stackoverflow.com/questions/3040716",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/363078/"
] |
```
import re
regexes = [
"foo.*",
"bar.*",
"qu*x"
]
# Make a regex that matches if any of our regexes match.
combined = "(" + ")|(".join(regexes) + ")"
if re.match(combined, mystring):
print "Some regex matched!"
```
|
If you loop over the regexes one by one, the time complexity is O(n) in the number of patterns. A better approach is to combine those regexes into a regex trie.
|
3,040,716
|
I have a list of regexes in Python, and a string. Is there an elegant way to check if at least one regex in the list matches the string? By elegant, I mean something better than simply looping through all of the regexes, checking each against the string, and stopping if a match is found.
Basically, I had this code:
```
list = ['something','another','thing','hello']
string = 'hi'
if string in list:
pass # do something
else:
pass # do something else
```
Now I would like to have some regular expressions in the list, rather than just strings, and I am wondering if there is an elegant solution to check for a match to replace `if string in list:`.
Thanks in advance.
|
2010/06/14
|
[
"https://Stackoverflow.com/questions/3040716",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/363078/"
] |
```
import re
regexes = [
"foo.*",
"bar.*",
"qu*x"
]
# Make a regex that matches if any of our regexes match.
combined = "(" + ")|(".join(regexes) + ")"
if re.match(combined, mystring):
print "Some regex matched!"
```
|
Here's what I went for based on the other answers:
```
raw_list = ["some_regex","some_regex","some_regex","some_regex"]
reg_list = map(re.compile, raw_list)
mystring = "some_string"
if any(regex.match(mystring) for regex in reg_list):
print("matched")
```
|
3,040,716
|
I have a list of regexes in Python, and a string. Is there an elegant way to check if at least one regex in the list matches the string? By elegant, I mean something better than simply looping through all of the regexes, checking each against the string, and stopping if a match is found.
Basically, I had this code:
```
list = ['something','another','thing','hello']
string = 'hi'
if string in list:
pass # do something
else:
pass # do something else
```
Now I would like to have some regular expressions in the list, rather than just strings, and I am wondering if there is an elegant solution to check for a match to replace `if string in list:`.
Thanks in advance.
|
2010/06/14
|
[
"https://Stackoverflow.com/questions/3040716",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/363078/"
] |
A mix of both Ned's and Nosklo's answers. Guaranteed to work for any length of list... hope you enjoy:
```
import re
raw_lst = ["foo.*",
"bar.*",
"(Spam.{0,3}){1,3}"]
reg_lst = []
for raw_regex in raw_lst:
reg_lst.append(re.compile(raw_regex))
mystring = "Spam, Spam, Spam!"
if any(compiled_reg.match(mystring) for compiled_reg in reg_lst):
print("something matched")
```
|
If you loop over the regexes one by one, the time complexity is O(n) in the number of patterns. A better approach is to combine those regexes into a regex trie.
|
3,040,716
|
I have a list of regexes in Python, and a string. Is there an elegant way to check if at least one regex in the list matches the string? By elegant, I mean something better than simply looping through all of the regexes, checking each against the string, and stopping if a match is found.
Basically, I had this code:
```
list = ['something','another','thing','hello']
string = 'hi'
if string in list:
pass # do something
else:
pass # do something else
```
Now I would like to have some regular expressions in the list, rather than just strings, and I am wondering if there is an elegant solution to check for a match to replace `if string in list:`.
Thanks in advance.
|
2010/06/14
|
[
"https://Stackoverflow.com/questions/3040716",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/363078/"
] |
Here's what I went for based on the other answers:
```
raw_list = ["some_regex","some_regex","some_regex","some_regex"]
reg_list = map(re.compile, raw_list)
mystring = "some_string"
if any(regex.match(mystring) for regex in reg_list):
print("matched")
```
|
A mix of both Ned's and Nosklo's answers. Guaranteed to work for any length of list... hope you enjoy:
```
import re
raw_lst = ["foo.*",
"bar.*",
"(Spam.{0,3}){1,3}"]
reg_lst = []
for raw_regex in raw_lst:
reg_lst.append(re.compile(raw_regex))
mystring = "Spam, Spam, Spam!"
if any(compiled_reg.match(mystring) for compiled_reg in reg_lst):
print("something matched")
```
|
3,040,716
|
I have a list of regexes in Python, and a string. Is there an elegant way to check if at least one regex in the list matches the string? By elegant, I mean something better than simply looping through all of the regexes, checking each against the string, and stopping if a match is found.
Basically, I had this code:
```
list = ['something','another','thing','hello']
string = 'hi'
if string in list:
pass # do something
else:
pass # do something else
```
Now I would like to have some regular expressions in the list, rather than just strings, and I am wondering if there is an elegant solution to check for a match to replace `if string in list:`.
Thanks in advance.
|
2010/06/14
|
[
"https://Stackoverflow.com/questions/3040716",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/363078/"
] |
Here's what I went for based on the other answers:
```
raw_list = ["some_regex","some_regex","some_regex","some_regex"]
reg_list = map(re.compile, raw_list)
mystring = "some_string"
if any(regex.match(mystring) for regex in reg_list):
print("matched")
```
|
If you loop over the regexes one by one, the time complexity is O(n) in the number of patterns. A better approach is to combine those regexes into a regex trie.
|
71,649,019
|
I've gone through multiple examples of data classification, however either I didn't get it or those are not applicable in my case.
I have a list of values:
```
values = [-130,-110,-90,-80,-60,-40]
```
All I need to do is to classify `input` integer value to appropriate "bin".
E.g., `input=-97` should get `index 1` (between -110 and -90), while `input=-140` should get 0.
I don't want to do it `if...elif...else` style since the values could be dynamic.
What would be the most Pythonic way to do it? I guess no pandas/numpy is necessary.
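For reference, a minimal sketch of this kind of binning with the standard-library `bisect` module (illustrative only, assuming the boundary list stays sorted ascending):

```
import bisect

values = [-130, -110, -90, -80, -60, -40]

def classify(x):
    # bisect_right gives the insertion point: the index of the first
    # boundary greater than x; inputs below the first boundary map to 0.
    return max(bisect.bisect_right(values, x) - 1, 0)

print(classify(-97))   # 1  (between -110 and -90)
print(classify(-140))  # 0  (below the first boundary)
```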
Regards
|
2022/03/28
|
[
"https://Stackoverflow.com/questions/71649019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11191256/"
] |
One of your `classes`-props is a `boolean`. You cannot push a `boolean` (true/false) to `className`.
You could `console.log(classes)`, then you will see, which prop causes the warning.
|
It means at least one of the className values is a boolean instead of a string. We cannot say anything more with this piece of code.
|
71,649,019
|
I've gone through multiple examples of data classification, however either I didn't get it or those are not applicable in my case.
I have a list of values:
```
values = [-130,-110,-90,-80,-60,-40]
```
All I need to do is to classify `input` integer value to appropriate "bin".
E.g., `input=-97` should get `index 1` (between -110 and -90), while `input=-140` should get 0.
I don't want to do it `if...elif...else` style since the values could be dynamic.
What would be the most Pythonic way to do it? I guess no pandas/numpy is necessary.
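To make the requirement concrete, a minimal sketch using the standard-library `bisect` module (an illustrative example, assuming the boundaries remain sorted ascending):

```
import bisect

values = [-130, -110, -90, -80, -60, -40]

def classify(x):
    # Index of the first boundary greater than x, shifted down by one;
    # anything below the first boundary is clamped to bin 0.
    return max(bisect.bisect_right(values, x) - 1, 0)

print(classify(-97))   # 1
print(classify(-140))  # 0
```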
Regards
|
2022/03/28
|
[
"https://Stackoverflow.com/questions/71649019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11191256/"
] |
One of your `classes`-props is a `boolean`. You cannot push a `boolean` (true/false) to `className`.
You could `console.log(classes)`, then you will see, which prop causes the warning.
|
I got the same error when I didn't give a value to the className attribute, like below; probably one of your variables is null or boolean, etc.
```
<img className src={...} .../>
```
|
71,649,019
|
I've gone through multiple examples of data classification, however either I didn't get it or those are not applicable in my case.
I have a list of values:
```
values = [-130,-110,-90,-80,-60,-40]
```
All I need to do is to classify `input` integer value to appropriate "bin".
E.g., `input=-97` should get `index 1` (between -110 and -90), while `input=-140` should get 0.
I don't want to do it `if...elif...else` style since the values could be dynamic.
What would be the most Pythonic way to do it? I guess no pandas/numpy is necessary.
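As an illustration of the desired behaviour, a small sketch built on the standard-library `bisect` module (assuming the boundary list is kept sorted ascending):

```
import bisect

values = [-130, -110, -90, -80, -60, -40]

def classify(x):
    # bisect_right locates the first boundary greater than x;
    # subtracting one yields the bin index, clamped at 0.
    return max(bisect.bisect_right(values, x) - 1, 0)

print(classify(-97))   # 1
print(classify(-140))  # 0
```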
Regards
|
2022/03/28
|
[
"https://Stackoverflow.com/questions/71649019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11191256/"
] |
It means at least one of the className values is a boolean instead of a string. We cannot say anything more with this piece of code.
|
I got the same error when I didn't give a value to the className attribute, like below; probably one of your variables is null or boolean, etc.
```
<img className src={...} .../>
```
|
47,508,790
|
I have the following JSON saved in a text file called test.xlsx.txt. The JSON is as follows:
```
{"RECONCILIATION": {0: "Successful"}, "ACCOUNT": {0: u"21599000"}, "DESCRIPTION": {0: u"USD to be accrued. "}, "PRODUCT": {0: "7500.0"}, "VALUE": {0: "7500.0"}, "AMOUNT": {0: "7500.0"}, "FORMULA": {0: "3 * 2500 "}}
```
The following is my python code:
```
f = open(path_to_analysis_results,'r')
message = f.read()
datastore = json.loads(str(message))
print datastore
f.close()
```
With the json.loads, I get the error "*ValueError: Expecting property name: line 1 column 21 (char 20)*". I have tried with json.load, json.dump and json.dumps, with all of them giving various errors. All I want to do is to be able to extract the key and the corresponding value and write to an Excel file. I have figured out how to write data to an Excel file, but am stuck with parsing this json.
```
RECONCILIATION : Successful
ACCOUNT : 21599000
DESCRIPTION : USD to be accrued.
PRODUCT : 7500.0
VALUE : 7500.0
AMOUNT : 7500.0
FORMULA : 3 * 2500
```
I would like the data to be in the above format to be able to write them to an Excel sheet.
|
2017/11/27
|
[
"https://Stackoverflow.com/questions/47508790",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3163920/"
] |
Your txt file does not contain valid JSON.
For starters, keys must be strings, not numbers.
The `u"..."` notation is not valid either.
You should fix your JSON first (maybe run it through a linter such as <https://jsonlint.com/> to make sure it's valid).
|
As Mike mentioned, your text file is not valid JSON. It should look like:
```
{"RECONCILIATION": {"0": "Successful"}, "ACCOUNT": {"0": "21599000"}, "DESCRIPTION": {"0": "USD to be accrued. "}, "PRODUCT": {"0": "7500.0"}, "VALUE": {"0": "7500.0"}, "AMOUNT": {"0": "7500.0"}, "FORMULA": {"0": "3 * 2500 "}}
```
Note: keys are within double quotes, as JSON requires double quotes. And your code should be (without **str()**):
```
import json
f = open(path_to_analysis_results,'r')
message = f.read()
print(message) # print message before, just to check it.
datastore = json.loads(message) # note: str() is not required. Message is already a string
print (datastore)
f.close()
```
|
31,303,728
|
Using pandas 0.16.2 on python 2.7, OSX.
I read a data-frame from a csv file like this:
```
import pandas as pd
data = pd.read_csv("my_csv_file.csv",sep='\t', skiprows=(0), header=(0))
```
The output of `data.dtypes` is:
```
name object
weight float64
ethnicity object
dtype: object
```
I was expecting string types for name, and ethnicity. But I found reasons here on SO on why they're "object" in newer pandas versions.
Now, I want to select rows based on ethnicity, for example:
```
data[data['ethnicity']=='Asian']
Out[3]:
Empty DataFrame
Columns: [name, weight, ethnicity]
Index: []
```
I get the same result with `data[data.ethnicity=='Asian']` or `data[data['ethnicity']=="Asian"]`.
But when I try the following:
```
data[data['ethnicity'].str.contains('Asian')].head(3)
```
I get the results I want.
However, I do not want to use "contains"- I would like to check for direct equality.
Please note that `data[data['ethnicity'].str=='Asian']` raises an error.
Am I doing something wrong? How to do this correctly?
|
2015/07/08
|
[
"https://Stackoverflow.com/questions/31303728",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/228177/"
] |
There is probably whitespace in your strings, for example,
```
data = pd.DataFrame({'ethnicity':[' Asian', ' Asian']})
data.loc[data['ethnicity'].str.contains('Asian'), 'ethnicity'].tolist()
# [' Asian', ' Asian']
print(data[data['ethnicity'].str.contains('Asian')])
```
yields
```
ethnicity
0 Asian
1 Asian
```
To strip the leading or trailing whitespace off the strings, you could use
```
data['ethnicity'] = data['ethnicity'].str.strip()
```
after which,
```
data.loc[data['ethnicity'] == 'Asian']
```
yields
```
ethnicity
0 Asian
1 Asian
```
|
You might try this:
```
data[data['ethnicity'].str.strip()=='Asian']
```
|
56,191,697
|
What is the `__weakrefoffset__` attribute for? What does the integer value signify?
```
>>> int.__weakrefoffset__
0
>>> class A:
... pass
...
>>> A.__weakrefoffset__
24
>>> type.__weakrefoffset__
368
>>> class B:
... __slots__ = ()
...
>>> B.__weakrefoffset__
0
```
All types seem to have this attribute. But there's no mention about that in [docs](https://docs.python.org/3/search.html?q=__weakrefoffset__) nor in [PEP 205](https://www.python.org/dev/peps/pep-0205/).
|
2019/05/17
|
[
"https://Stackoverflow.com/questions/56191697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/674039/"
] |
In CPython, the layout of a simple type like `float` is three fields: its type pointer, its reference count, and its value (here, a `double`). For `list`, the value is three variables (the pointer to the separately-allocated array, its capacity, and the used size). These types do not support attributes or weak references to save space.
If a Python class inherits from one of these, it’s critical that it be able to support attributes, and yet there is no fixed offset at which to place the `__dict__` pointer (without wasting memory by putting it at a large unused offset). So the dictionary is stored wherever there is room, and its offset in bytes is [recorded in the type](https://stackoverflow.com/a/46591145/8586227). For base classes where the size is variable (like `tuple`, which includes all its pointers directly), there is special support for storing the `__dict__` pointer past the end of the variable-size section (*e.g.*, `type("",(tuple,),{}).__dictoffset__` is -8).
The situation with weak references is [exactly analogous](https://docs.python.org/3/extending/newtypes.html#weakref-support), except that there is no support for (subclasses of) variable-sized types. A `__weakrefoffset__` of 0 (which is conveniently the default for C static variables) indicates no support, since of course the object’s type is known to always be at the beginning of its layout.
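A small sketch of how the value tracks weak-reference support (the exact nonzero offsets vary by build and layout):

```
import weakref

class WithRefs:          # plain class: gets a __weakref__ slot
    pass

class NoRefs:            # __slots__ without '__weakref__': no slot
    __slots__ = ()

print(WithRefs.__weakrefoffset__ > 0)  # True
print(NoRefs.__weakrefoffset__)        # 0
r = weakref.ref(WithRefs())            # fine
# weakref.ref(NoRefs())                # TypeError: cannot create weak reference
```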
|
It is referenced in [PEP 205](https://www.python.org/dev/peps/pep-0205/) here:
>
> Many built-in types will participate in the weak-reference management, and any extension type can elect to do so. **The type structure will contain an additional field which provides an offset into the instance structure which contains a list of weak reference structures.** If the value of the field is <= 0, the object does not participate. In this case, `weakref.ref()`, `<weakdict>.__setitem__()` and `.setdefault()`, and item assignment will raise `TypeError`. If the value of the field is > 0, a new weak reference can be generated and added to the list.
>
>
>
And you can see exactly how that works in the source code [here](https://github.com/python/cpython/blob/master/Objects/weakrefobject.c).
Check out the other answer for more why it exists.
|
47,013,083
|
I am trying to establish a WebSocket connection to a server and enter receive mode. Once the client starts receiving the data, it immediately closes the connection with the below exception:
```
webSoc_Received = await websocket.recv()
File "/root/envname/lib/python3.6/site-packages/websockets/protocol.py", line 319, in recv
raise ConnectionClosed(self.close_code, self.close_reason)
websockets.exceptions.ConnectionClosed: WebSocket connection is closed: code = 1007, no reason.
```
Client-side Code Snippet :
```
import asyncio
import websockets
async def connect_ws():
print("websockets.client module defines a simple WebSocket client API::::::")
async with websockets.client.connect(full_url,extra_headers=headers_conn1) as websocket:
print ("starting")
webSoc_Received = await websocket.recv()
print ("Ending")
Decode_data = zlib.decompress(webSoc_Received)
print(Decode_data)
asyncio.get_event_loop().run_until_complete(connect_ws())
```
Any thoughts on this?
|
2017/10/30
|
[
"https://Stackoverflow.com/questions/47013083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3913710/"
] |
You can use hidden property or `*ngIf` directive :
```html
<app-form></app-form>
<app-results *ngIf="isOn"></app-results>
<app-contact-primary *ngIf="!isOn"></app-contact-primary>
<app-contact-second *ngIf="!isOn"></app-contact-second>
<button pButton label="Contact" (click)="isOn= false">Contact</button>
<button pButton label="Results" (click)="isOn= true">Results</button>
```
|
Just set the variable to true or false:
```html
<button pButton label="Contact" (click)="isOn = true">Contact</button>
<button pButton label="Results" (click)="isOn = false">Results</button>
```
|
47,013,083
|
I am trying to establish a WebSocket connection to a server and enter receive mode. Once the client starts receiving the data, it immediately closes the connection with the below exception:
```
webSoc_Received = await websocket.recv()
File "/root/envname/lib/python3.6/site-packages/websockets/protocol.py", line 319, in recv
raise ConnectionClosed(self.close_code, self.close_reason)
websockets.exceptions.ConnectionClosed: WebSocket connection is closed: code = 1007, no reason.
```
Client-side Code Snippet :
```
import asyncio
import websockets
async def connect_ws():
print("websockets.client module defines a simple WebSocket client API::::::")
async with websockets.client.connect(full_url,extra_headers=headers_conn1) as websocket:
print ("starting")
webSoc_Received = await websocket.recv()
print ("Ending")
Decode_data = zlib.decompress(webSoc_Received)
print(Decode_data)
asyncio.get_event_loop().run_until_complete(connect_ws())
```
Any thoughts on this?
|
2017/10/30
|
[
"https://Stackoverflow.com/questions/47013083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3913710/"
] |
You can use hidden property or `*ngIf` directive :
```html
<app-form></app-form>
<app-results *ngIf="isOn"></app-results>
<app-contact-primary *ngIf="!isOn"></app-contact-primary>
<app-contact-second *ngIf="!isOn"></app-contact-second>
<button pButton label="Contact" (click)="isOn= false">Contact</button>
<button pButton label="Results" (click)="isOn= true">Results</button>
```
|
Just toggle a boolean flag.
TypeScript:
```typescript
public show: boolean = false;
public buttonName: any = true;
toggle() {
this.show = !this.show;
if(this.show)
this.buttonName = false;
else
this.buttonName = true;
}
```
HTML:
```html
<div *ngIf="show">
<textarea #todoitem class=""></textarea>
</div>
<button type="button" (click)="addItem('status')">Add</button>
<button type="button" (click)="toggle()">Close</button>
<div *ngIf="buttonName">
<a (click)="toggle()"><i class="fa fa-plus text-white"></i></a>
</div>
```
|
73,003,060
|
I made a simple example of taking a screenshot.
I debugged this, but there was an error.
Code:
```
import pyautogui
import PIL
pyautogui.screenshot('screenshot.png')
```
Error:
```
The Pillow package is required to use this function.
```
I've already installed Pillow and the version is 9.2.0 (latest version).
I'm using Python 3.9.12.
My Python version is compatible with the Pillow version.
I tried `pip install pillow`, `pip install pillow --upgrade`, and so on,
but it hasn't been fixed.
```
Requirement already satisfied: Pillow in d:\anaconda3\lib\site-packages (9.2.0)
```
Is there a way to fix it?
|
2022/07/16
|
[
"https://Stackoverflow.com/questions/73003060",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19560998/"
] |
I've just installed everything new and it's fixed!
|
Try this; it works perfectly on my side.
```
import pyautogui
myScreenshot = pyautogui.screenshot()
myScreenshot.save(r'D:\\image.png')
```
[](https://i.stack.imgur.com/7aGpM.png)
|
57,813,557
|
I would like to multiply every element in a numpy array by a constant raised to the power of that element's index, without a for loop. I am using Python 2.7.
I am new to this; I could use a for loop, but I am trying to avoid that for no real reason.
This for loop would solve the problem:
```py
x = 3
for i in range(test_array.size):
test_array[i] = test_array[i] * x**i
```
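For what it's worth, a minimal vectorized sketch of the same computation (illustrative, assuming `test_array` is a 1-D numpy array):

```
import numpy as np

test_array = np.array([1.0, 2.0, 3.0, 4.0])
x = 3
# multiply element i by x**i in one vectorized step
test_array = test_array * x ** np.arange(test_array.size)
print(test_array)  # [  1.   6.  27. 108.]
```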
|
2019/09/05
|
[
"https://Stackoverflow.com/questions/57813557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12027672/"
] |
What you want is something like this:
```
data = [
[{'Status': 'active', 'id': '0f1fb86da9c7ee380'}],
[{'Status': 'active', 'id': '0d6b330e4960c3382'}, {'Status': 'active', 'id': '033cfb634e595ccfa'}],
[{'Status': 'active', 'id': '0457f623cbb9f7c95'}],
[{'Status': 'active', 'id': '01b69eb6a3048f749'}, {'Status': 'active', 'id': '0f7ce44a9a5fc82f5'}, {'Status': 'active', 'id': '05417e161acf3ec5d'}],
[{'Status': 'active', 'id': '033cfb634e595ccfa'}, {'Status': 'active', 'id': '01eab32f9808acf19'}],
]
new_data = []
for l in data:
current_ids = []
for d in l:
current_ids.append(d["id"])
new_data.append(current_ids)
new_data
```
output:
```
[['0f1fb86da9c7ee380'],
['0d6b330e4960c3382', '033cfb634e595ccfa'],
['0457f623cbb9f7c95'],
['01b69eb6a3048f749', '0f7ce44a9a5fc82f5', '05417e161acf3ec5d'],
['033cfb634e595ccfa', '01eab32f9808acf19']]
```
|
You can also use a list comprehension to achieve this.
```
output = [[item["id"] for item in items] for items in data]
```
|
57,813,557
|
I would like to multiply every element in a numpy array by a constant raised to the power of that element's index, without a for loop. I am using Python 2.7.
I am new to this; I could use a for loop, but I am trying to avoid that for no real reason.
This for loop would solve the problem:
```py
x = 3
for i in range(test_array.size):
test_array[i] = test_array[i] * x**i
```
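As a point of reference, a small vectorized sketch of what the loop computes (illustrative, assuming `test_array` is a 1-D numpy array):

```
import numpy as np

test_array = np.array([1.0, 2.0, 3.0, 4.0])
x = 3
# x**i for each index i, applied element-wise in one step
test_array = test_array * x ** np.arange(test_array.size)
print(test_array)  # [  1.   6.  27. 108.]
```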
|
2019/09/05
|
[
"https://Stackoverflow.com/questions/57813557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12027672/"
] |
What you want is something like this:
```
data = [
[{'Status': 'active', 'id': '0f1fb86da9c7ee380'}],
[{'Status': 'active', 'id': '0d6b330e4960c3382'}, {'Status': 'active', 'id': '033cfb634e595ccfa'}],
[{'Status': 'active', 'id': '0457f623cbb9f7c95'}],
[{'Status': 'active', 'id': '01b69eb6a3048f749'}, {'Status': 'active', 'id': '0f7ce44a9a5fc82f5'}, {'Status': 'active', 'id': '05417e161acf3ec5d'}],
[{'Status': 'active', 'id': '033cfb634e595ccfa'}, {'Status': 'active', 'id': '01eab32f9808acf19'}],
]
new_data = []
for l in data:
current_ids = []
for d in l:
current_ids.append(d["id"])
new_data.append(current_ids)
new_data
```
output:
```
[['0f1fb86da9c7ee380'],
['0d6b330e4960c3382', '033cfb634e595ccfa'],
['0457f623cbb9f7c95'],
['01b69eb6a3048f749', '0f7ce44a9a5fc82f5', '05417e161acf3ec5d'],
['033cfb634e595ccfa', '01eab32f9808acf19']]
```
|
```
data =json.loads(v_string)
for id in data['DBI']:
datalist = id['Groups']
i=0
data_list = []  # <-- This was outside the main for loop above.
while i < len(datalist):
for dic in datalist[i]:
if 'active' not in datalist[i][dic]:
data_list.append(datalist[i][dic])
print(data_list)
i+=1
```
|
57,813,557
|
I would like to multiply every element in a numpy array by a constant raised to the power of that element's index, without a for loop. I am using Python 2.7.
I am new to this; I could use a for loop, but I am trying to avoid that for no real reason.
This for loop would solve the problem:
```py
x = 3
for i in range(test_array.size):
test_array[i] = test_array[i] * x**i
```
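For illustration, the loop above can be written as one vectorized expression (a sketch, assuming `test_array` is a 1-D numpy array):

```
import numpy as np

test_array = np.array([1.0, 2.0, 3.0, 4.0])
x = 3
# raise x to each index and multiply element-wise
test_array = test_array * x ** np.arange(test_array.size)
print(test_array)  # [  1.   6.  27. 108.]
```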
|
2019/09/05
|
[
"https://Stackoverflow.com/questions/57813557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12027672/"
] |
You can also use a list comprehension to achieve this.
```
output = [[item["id"] for item in items] for items in data]
```
|
```
data =json.loads(v_string)
for id in data['DBI']:
datalist = id['Groups']
i=0
data_list = []  # <-- This was outside the main for loop above.
while i < len(datalist):
for dic in datalist[i]:
if 'active' not in datalist[i][dic]:
data_list.append(datalist[i][dic])
print(data_list)
i+=1
```
|
1,653,460
|
What would be the best way to handle lightweight crash recovery for my program?
I have a Python program that runs a number of test cases and the results are stored in a dictionary which serves as a cache. If I could save (and then restore) each item that is added to the dictionary, I could simply run the program again and the caching would provide suitable crash recovery.
1. You may assume that the keys and values in the dictionary are easily convertible to strings ie. using either str or the [pickle module](http://docs.python.org/library/pickle.html).
2. I want this to be completely cross platform - well at least as cross platform as Python is
3. I don't want to simply write out each value to a file and load it in; my program might crash while I am writing the file
4. **UPDATE**: This is intended to be a lightweight module so a DBMS is out of the question.
5. **UPDATE**: Alex is correct in that I don't actually *need* to protect against crashes while writing out, but there are circumstances where I would like to be able to manually terminate it in a recoverable state.
6. **UPDATE** Added a highly limited solution using standard input below
|
2009/10/31
|
[
"https://Stackoverflow.com/questions/1653460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/165495/"
] |
There's no good way to guard against "your program crashing **while** writing a checkpoint to a file", but why should you worry so much about **that**?! What ELSE is your program doing at that time BESIDES "saving checkpoint to a file", that could easily cause it to crash?!
It's hard to beat `pickle` (or `cPickle`) for portability of serialization in Python, but, that's just about "turning your keys and values to strings". For saving key-value pairs (once stringified), few approaches are safer than just appending to a file (**don't** pickle to files if your crashes are far, far more frequent than normal, as you suggest they are).
If your environment is incredibly crash-prone for whatever reason (*very* cheap HW?-), just make sure you close the file (and fflush if the OS is also crash-prone;-), then reopen it for append. This way, worst that can happen is that the very latest append will be incomplete (due to a crash in the middle of things) -- then you just catch the exception raised by unpickling that incomplete record and redo only the things that weren't saved (because they weren't completed due to a crash, OR because they were completed but not fully saved due to a crash, comes to much the same thing in the end).
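A minimal sketch of the append-and-replay scheme described above (the file name and record format are illustrative assumptions):

```
import pickle

CHECKPOINT = 'cache.pkl'  # illustrative file name

def save_entry(key, value):
    # Append one pickled (key, value) record per completed test case.
    with open(CHECKPOINT, 'ab') as f:
        pickle.dump((key, value), f)

def load_entries():
    cache = {}
    try:
        with open(CHECKPOINT, 'rb') as f:
            while True:
                key, value = pickle.load(f)
                cache[key] = value
    except (IOError, EOFError, pickle.UnpicklingError):
        # Missing file, clean end-of-file, or a record truncated by a
        # crash mid-write: keep everything replayed so far.
        pass
    return cache
```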
If you have the option of checkpointing to a database engine (instead of just doing so to files), consider it seriously! The DB engine will keep transaction logs and ensure ACID properties, making your application-side programming much easier IF you can count on that!-)
|
The pickle module supports serializing objects to a file (and loading from file):
<http://docs.python.org/library/pickle.html>
|
1,653,460
|
What would be the best way to handle lightweight crash recovery for my program?
I have a Python program that runs a number of test cases and the results are stored in a dictionary which serves as a cache. If I could save (and then restore) each item that is added to the dictionary, I could simply run the program again and the caching would provide suitable crash recovery.
1. You may assume that the keys and values in the dictionary are easily convertible to strings ie. using either str or the [pickle module](http://docs.python.org/library/pickle.html).
2. I want this to be completely cross platform - well at least as cross platform as Python is
3. I don't want to simply write out each value to a file and load it in; my program might crash while I am writing the file
4. **UPDATE**: This is intended to be a lightweight module so a DBMS is out of the question.
5. **UPDATE**: Alex is correct in that I don't actually *need* to protect against crashes while writing out, but there are circumstances where I would like to be able to manually terminate it in a recoverable state.
6. **UPDATE** Added a highly limited solution using standard input below
|
2009/10/31
|
[
"https://Stackoverflow.com/questions/1653460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/165495/"
] |
The pickle module supports serializing objects to a file (and loading from file):
<http://docs.python.org/library/pickle.html>
|
**Solution with severe restrictions**
If I don't worry about it crashing while writing out and I only want to allow manual termination, I can use standard output to control this. Unfortunately, this can only terminate the program when a control point is reached. This could be solved by creating a new thread to read standard input. This thread could use a global lock to check if the main thread is inside a critical section (writing to a file) and terminate the program if this is not the case.
Downsides:
* This is reasonably complex
* It adds an extra thread
* It stops me using standard input for anything else
|
1,653,460
|
What would be the best way to handle lightweight crash recovery for my program?
I have a Python program that runs a number of test cases and the results are stored in a dictionary which serves as a cache. If I could save (and then restore) each item that is added to the dictionary, I could simply run the program again and the caching would provide suitable crash recovery.
1. You may assume that the keys and values in the dictionary are easily convertible to strings ie. using either str or the [pickle module](http://docs.python.org/library/pickle.html).
2. I want this to be completely cross platform - well at least as cross platform as Python is
3. I don't want to simply write out each value to a file and load it in; my program might crash while I am writing the file
4. **UPDATE**: This is intended to be a lightweight module so a DBMS is out of the question.
5. **UPDATE**: Alex is correct in that I don't actually *need* to protect against crashes while writing out, but there are circumstances where I would like to be able to manually terminate it in a recoverable state.
6. **UPDATE** Added a highly limited solution using standard input below
|
2009/10/31
|
[
"https://Stackoverflow.com/questions/1653460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/165495/"
] |
There's no good way to guard against "your program crashing **while** writing a checkpoint to a file", but why should you worry so much about **that**?! What ELSE is your program doing at that time BESIDES "saving checkpoint to a file", that could easily cause it to crash?!
It's hard to beat `pickle` (or `cPickle`) for portability of serialization in Python, but that's just about "turning your keys and values to strings". For saving key-value pairs (once stringified), few approaches are safer than just appending to a file (**don't** pickle to files if your crashes are far, far more frequent than normal, as you suggest they are).
If your environment is incredibly crash-prone for whatever reason (*very* cheap HW?-), just make sure you close the file (and fflush if the OS is also crash-prone;-), then reopen it for append. This way, the worst that can happen is that the very latest append will be incomplete (due to a crash in the middle of things); then you just catch the exception raised by unpickling that incomplete record and redo only the things that weren't saved (because they weren't completed due to a crash, OR because they were completed but not fully saved due to a crash, which comes to much the same thing in the end).
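A sketch of that append-and-recover pattern (function and file names are illustrative):

```
import pickle

def append_record(path, key, value):
    # Open, append one pickled record, and close again, so that at
    # most the final record can be left incomplete by a crash.
    with open(path, "ab") as f:
        pickle.dump((key, value), f)

def recover(path):
    cache = {}
    try:
        with open(path, "rb") as f:
            while True:
                key, value = pickle.load(f)
                cache[key] = value
    except (IOError, EOFError, pickle.UnpicklingError):
        # No file yet, clean end of file, or a half-written final
        # record: everything read so far is still usable.
        pass
    return cache
```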
If you have the option of checkpointing to a database engine (instead of just doing so to files), consider it seriously! The DB engine will keep transaction logs and ensure ACID properties, making your application-side programming much easier IF you can count on that!-)
|
One possibility would be to create a number of smaller files ... each representing a subset of the state that you're trying to preserve and each with a checksum or tag indicating that it's complete as the last line/datum of the file (just before the file is closed).
If the checksum/tag is good then the rest of the data can be considered valid ... though the program would then have to find all of these files, open and read all of them, and use the metadata you've provided (in their headers or their names?) to determine which ones constitute the most recent cohesive state representation (or checkpoint) from which you can continue processing.
Without knowing more about the nature of the data that you're working with it's impossible to be more specific.
You can use files, of course, or you could use a DBMS system just about as easily. Any decent DBMS (PostgreSQL, MySQL if you're using the proper storage back-ends) can give you ACID guarantees and transactional support. So the data you read back should always be consistent with the constraints that you put in your schema and/or with the transactions (BEGIN, COMMIT, ROLLBACK) that you processed.
A possible advantage of posting your serialized data to a DBMS is that you can host the DBMS on a separate system (which is unlikely to suffer the same instabilities as your test host at the same times).
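A sketch of the checksum-tagged file idea (the tag format here is invented for illustration):

```
import hashlib

def write_checkpoint(path, payload):
    # Write the payload, then a checksum tag as the very last line;
    # a file without a matching tag is treated as incomplete.
    digest = hashlib.md5(payload.encode("utf-8")).hexdigest()
    with open(path, "w") as f:
        f.write(payload)
        f.write("\n#checksum:" + digest + "\n")

def read_checkpoint(path):
    with open(path) as f:
        text = f.read()
    body, sep, tag = text.rstrip("\n").rpartition("\n#checksum:")
    if sep and hashlib.md5(body.encode("utf-8")).hexdigest() == tag:
        return body
    return None  # missing tag or checksum mismatch: discard
```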
|
1,653,460
|
What would be the best way to handle lightweight crash recovery for my program?
I have a Python program that runs a number of test cases and the results are stored in a dictionary which serves as a cache. If I could save (and then restore) each item that is added to the dictionary, I could simply run the program again and the caching would provide suitable crash recovery.
1. You may assume that the keys and values in the dictionary are easily convertible to strings ie. using either str or the [pickle module](http://docs.python.org/library/pickle.html).
2. I want this to be completely cross platform - well at least as cross platform as Python is
3. I don't want to simply write out each value to a file and load it back in, as my program might crash while I am writing the file
4. **UPDATE**: This is intended to be a lightweight module so a DBMS is out of the question.
5. **UPDATE**: Alex is correct in that I don't actually *need* to protect against crashes while writing out, but there are circumstances where I would like to be able to manually terminate it in a recoverable state.
6. **UPDATE** Added a highly limited solution using standard input below
|
2009/10/31
|
[
"https://Stackoverflow.com/questions/1653460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/165495/"
] |
There's no good way to guard against "your program crashing **while** writing a checkpoint to a file", but why should you worry so much about **that**?! What ELSE is your program doing at that time BESIDES "saving checkpoint to a file", that could easily cause it to crash?!
It's hard to beat `pickle` (or `cPickle`) for portability of serialization in Python, but that's just about "turning your keys and values to strings". For saving key-value pairs (once stringified), few approaches are safer than just appending to a file (**don't** pickle to files if your crashes are far, far more frequent than normal, as you suggest they are).
If your environment is incredibly crash-prone for whatever reason (*very* cheap HW?-), just make sure you close the file (and fflush if the OS is also crash-prone;-), then reopen it for append. This way, the worst that can happen is that the very latest append will be incomplete (due to a crash in the middle of things); then you just catch the exception raised by unpickling that incomplete record and redo only the things that weren't saved (because they weren't completed due to a crash, OR because they were completed but not fully saved due to a crash, which comes to much the same thing in the end).
If you have the option of checkpointing to a database engine (instead of just doing so to files), consider it seriously! The DB engine will keep transaction logs and ensure ACID properties, making your application-side programming much easier IF you can count on that!-)
|
Pickle/cPickle have problems.
I use the JSON module to serialize objects out. I like it because not only does it work on any OS, but it will work fine in other programming languages, too; many other languages and platforms have readily-accessible JSON deserialization support, which makes it easy to use the same objects in different programs.
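A sketch of how that could look for the cache in this question, with one JSON document per line so records can be appended safely (names are illustrative):

```
import json

def append_record(path, key, value):
    # One JSON document per line ("JSON lines"): easy to append and
    # readable from virtually any language.
    with open(path, "a") as f:
        f.write(json.dumps({"key": key, "value": value}) + "\n")

def load_records(path):
    cache = {}
    with open(path) as f:
        for line in f:
            try:
                record = json.loads(line)
            except ValueError:
                continue  # skip a half-written final line
            cache[record["key"]] = record["value"]
    return cache
```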
|
1,653,460
|
What would be the best way to handle lightweight crash recovery for my program?
I have a Python program that runs a number of test cases and the results are stored in a dictionary which serves as a cache. If I could save (and then restore) each item that is added to the dictionary, I could simply run the program again and the caching would provide suitable crash recovery.
1. You may assume that the keys and values in the dictionary are easily convertible to strings ie. using either str or the [pickle module](http://docs.python.org/library/pickle.html).
2. I want this to be completely cross platform - well at least as cross platform as Python is
3. I don't want to simply write out each value to a file and load it back in, as my program might crash while I am writing the file
4. **UPDATE**: This is intended to be a lightweight module so a DBMS is out of the question.
5. **UPDATE**: Alex is correct in that I don't actually *need* to protect against crashes while writing out, but there are circumstances where I would like to be able to manually terminate it in a recoverable state.
6. **UPDATE** Added a highly limited solution using standard input below
|
2009/10/31
|
[
"https://Stackoverflow.com/questions/1653460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/165495/"
] |
There's no good way to guard against "your program crashing **while** writing a checkpoint to a file", but why should you worry so much about **that**?! What ELSE is your program doing at that time BESIDES "saving checkpoint to a file", that could easily cause it to crash?!
It's hard to beat `pickle` (or `cPickle`) for portability of serialization in Python, but that's just about "turning your keys and values to strings". For saving key-value pairs (once stringified), few approaches are safer than just appending to a file (**don't** pickle to files if your crashes are far, far more frequent than normal, as you suggest they are).
If your environment is incredibly crash-prone for whatever reason (*very* cheap HW?-), just make sure you close the file (and fflush if the OS is also crash-prone;-), then reopen it for append. This way, the worst that can happen is that the very latest append will be incomplete (due to a crash in the middle of things); then you just catch the exception raised by unpickling that incomplete record and redo only the things that weren't saved (because they weren't completed due to a crash, OR because they were completed but not fully saved due to a crash, which comes to much the same thing in the end).
If you have the option of checkpointing to a database engine (instead of just doing so to files), consider it seriously! The DB engine will keep transaction logs and ensure ACID properties, making your application-side programming much easier IF you can count on that!-)
|
**Solution with severe restrictions**
If I don't worry about it crashing while writing out and I only want to allow manual termination, I can use standard input to control this. Unfortunately, this can only terminate the program when a control point is reached. This could be solved by creating a new thread to read standard input. This thread could use a global lock to check whether the main thread is inside a critical section (writing to a file) and terminate the program if it is not.
Downsides:
* This is reasonably complex
* It adds an extra thread
* It stops me using standard input for anything else
|
1,653,460
|
What would be the best way to handle lightweight crash recovery for my program?
I have a Python program that runs a number of test cases and the results are stored in a dictionary which serves as a cache. If I could save (and then restore) each item that is added to the dictionary, I could simply run the program again and the caching would provide suitable crash recovery.
1. You may assume that the keys and values in the dictionary are easily convertible to strings ie. using either str or the [pickle module](http://docs.python.org/library/pickle.html).
2. I want this to be completely cross platform - well at least as cross platform as Python is
3. I don't want to simply write out each value to a file and load it back in, as my program might crash while I am writing the file
4. **UPDATE**: This is intended to be a lightweight module so a DBMS is out of the question.
5. **UPDATE**: Alex is correct in that I don't actually *need* to protect against crashes while writing out, but there are circumstances where I would like to be able to manually terminate it in a recoverable state.
6. **UPDATE** Added a highly limited solution using standard input below
|
2009/10/31
|
[
"https://Stackoverflow.com/questions/1653460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/165495/"
] |
One possibility would be to create a number of smaller files ... each representing a subset of the state that you're trying to preserve and each with a checksum or tag indicating that it's complete as the last line/datum of the file (just before the file is closed).
If the checksum/tag is good then the rest of the data can be considered valid ... though the program would then have to find all of these files, open and read all of them, and use the metadata you've provided (in their headers or their names?) to determine which ones constitute the most recent cohesive state representation (or checkpoint) from which you can continue processing.
Without knowing more about the nature of the data that you're working with it's impossible to be more specific.
You can use files, of course, or you could use a DBMS system just about as easily. Any decent DBMS (PostgreSQL, MySQL if you're using the proper storage back-ends) can give you ACID guarantees and transactional support. So the data you read back should always be consistent with the constraints that you put in your schema and/or with the transactions (BEGIN, COMMIT, ROLLBACK) that you processed.
A possible advantage of posting your serialized data to a DBMS is that you can host the DBMS on a separate system (which is unlikely to suffer the same instabilities as your test host at the same times).
|
**Solution with severe restrictions**
If I don't worry about it crashing while writing out and I only want to allow manual termination, I can use standard input to control this. Unfortunately, this can only terminate the program when a control point is reached. This could be solved by creating a new thread to read standard input. This thread could use a global lock to check whether the main thread is inside a critical section (writing to a file) and terminate the program if it is not.
Downsides:
* This is reasonably complex
* It adds an extra thread
* It stops me using standard input for anything else
|
1,653,460
|
What would be the best way to handle lightweight crash recovery for my program?
I have a Python program that runs a number of test cases and the results are stored in a dictionary which serves as a cache. If I could save (and then restore) each item that is added to the dictionary, I could simply run the program again and the caching would provide suitable crash recovery.
1. You may assume that the keys and values in the dictionary are easily convertible to strings ie. using either str or the [pickle module](http://docs.python.org/library/pickle.html).
2. I want this to be completely cross platform - well at least as cross platform as Python is
3. I don't want to simply write out each value to a file and load it back in, as my program might crash while I am writing the file
4. **UPDATE**: This is intended to be a lightweight module so a DBMS is out of the question.
5. **UPDATE**: Alex is correct in that I don't actually *need* to protect against crashes while writing out, but there are circumstances where I would like to be able to manually terminate it in a recoverable state.
6. **UPDATE** Added a highly limited solution using standard input below
|
2009/10/31
|
[
"https://Stackoverflow.com/questions/1653460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/165495/"
] |
Pickle/cPickle have problems.
I use the JSON module to serialize objects out. I like it because not only does it work on any OS, but it will work fine in other programming languages, too; many other languages and platforms have readily-accessible JSON deserialization support, which makes it easy to use the same objects in different programs.
|
**Solution with severe restrictions**
If I don't worry about it crashing while writing out and I only want to allow manual termination, I can use standard input to control this. Unfortunately, this can only terminate the program when a control point is reached. This could be solved by creating a new thread to read standard input. This thread could use a global lock to check whether the main thread is inside a critical section (writing to a file) and terminate the program if it is not.
Downsides:
* This is reasonably complex
* It adds an extra thread
* It stops me using standard input for anything else
|
34,460,369
|
I had a bunch of bash scripts in a directory that I "backed up" by running `$ tail -n +1 -- *.sh`
The output of that tail is something like:
```
==> do_stuff.sh <==
#! /bin/bash
cd ~/my_dir
source ~/my_dir/bin/activate
python scripts/do_stuff.py
==> do_more_stuff.sh <==
#! /bin/bash
cd ~/my_dir
python scripts/do_more_stuff.py
```
These are all fairly simple scripts with 2-10 lines.
Given the output of that `tail`, I want to recreate all of the above files with the same content.
That is, I'm looking for a command that can ingest the above text and create `do_stuff.sh` and `do_more_stuff.sh` with the appropriate content.
This is more of a one-off task so I don't really need anything robust, and I believe there are no big edge cases given the files are simple (e.g. none of the files actually contain `==>` in them).
I started by trying to come up with a matching regex, which will probably look something like `(==>.*\.sh <==)(.*)(==>.*\.sh <==)`, but I'm stuck on actually getting it to capture the filename and content and output them to files.
Any ideas?
|
2015/12/25
|
[
"https://Stackoverflow.com/questions/34460369",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/845169/"
] |
Presume your backup file is named backup.txt
```
perl -ne "if (/==> (\S+) <==/){open OUT,'>',$1;next}print OUT $_" backup.txt
```
The version above is for Windows.
A fixed version for \*nix:
```
perl -ne 'if (/==> (\S+) <==/){open OUT,">",$1;next}print OUT $_' backup.txt
```
|
```
#!/bin/bash
while read -r line; do
if [[ $line =~ ^==\>[[:space:]](.*)[[:space:]]\<==$ ]]; then
out="${BASH_REMATCH[1]}"
continue
fi
printf "%s\n" "$line" >> "$out"
done < backup.txt
```
Drawback: extra blank line at the end of every created file except the last one.
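For comparison, a small Python sketch of the same splitting logic (assuming, as above, the captured text is in `backup.txt`); like the bash version, it also copies the separating blank lines into the files:

```
import re

out = None
with open("backup.txt") as f:
    for line in f:
        m = re.match(r"==> (.+\.sh) <==$", line)
        if m:
            if out:
                out.close()
            out = open(m.group(1), "w")  # start a new script file
        elif out:
            out.write(line)
if out:
    out.close()
```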
|
14,075,337
|
I have a csv file with date, time, price, mag, signal.
62035 rows; there are 42 times of day associated with each unique date in the file.
For each date, when there is an 'S' in the signal column, append the corresponding price at the time the 'S' occurred. Below is the attempt.
```
from pandas import *
from numpy import *
from io import *
from os import *
from sys import *
DF1 = read_csv('___.csv')
idf=DF1.set_index(['date','time','price'],inplace=True)
sStore=[]
for i in idf.index[i][0]:
sStore.append([idf.index[j][2] for j in idf[j][1] if idf['signal']=='S'])
sStore.head()
```
```
Traceback (most recent call last)
<ipython-input-7-8769220929e4> in <module>()
1 sStore=[]
2
----> 3 for time in idf.index[i][0]:
4
5 sStore.append([idf.index[j][2] for j in idf[j][1] if idf['signal']=='S'])
NameError: name 'i' is not defined
```
I do not understand why the i index is not permitted here. Thanks.
I also think it's strange that idf.index.levels[0] will show the dates "not parsed", as they are in the file, but out of order, despite parse\_date=True being passed as an argument to set\_index.
I bring this up since I was thinking of sidestepping the problem with something like:
```
for i in idf.index.levels[0]:
sStore.append([idf.index[j][2] for j in idf.index.levels[1] if idf['signal']=='S'])
sStore.head()
```
**My edit 12/30/2012 based on DSM's comment below:**
I would like to use your idea to get the P&L, as I commented below. Where if S!=B, for any given date, we difference using the closing time, 1620.
```
v=[df["signal"]=="S"]
t=[df["time"]=="1620"]
u=[df["signal"]!="S"]
df["price"][[v and (u and t)]]
```
That is, "give me the price at 1620; (even when it doesn't give a "sell signal", S) so that I can diff. with the "extra B's"--for the special case where B>S. This ignores the symmetric concern (where S>B) but for now I want to understand this logical issue.
On traceback, this expression gives:
```
ValueError: boolean index array should have 1 dimension
```
Note that in order to invoke df["time"] **I do not set\_index here**. Trying the union operator | gives:
```
TypeError: unsupported operand type(s) for |: 'list' and 'list'
```
**Looking at Max Fellows's approach**,
@Max Fellows
The point is to close out the positions at the end of the day; so we need to capture the price at the close to "unload" all those B, S which were accumulated but didn't net each other out.
If I say:
```
filterFunc1 = lambda row: row["signal"] == "S" and ([row["signal"] != "S"][row["price"]=="1620"])
filterFunc2 =lambda row: ([row["price"]=="1620"][row["signal"] != "S"])
filterFunc=filterFunc1 and filterFunc2
filteredData = itertools.ifilter(filterFunc, reader)
```
On traceback:
```
IndexError: list index out of range
```
|
2012/12/28
|
[
"https://Stackoverflow.com/questions/14075337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1374969/"
] |
Try something like this:
```
for i in range(len(idf.index)):
value = idf.index[i][0]
```
Same thing for the iteration with the `j` index variable. As has been pointed out, you can't reference the iteration index in the expression being iterated over; besides, you need to perform a very specific iteration (traversing a column in a matrix), and Python's default iterators won't handle this "out of the box", so custom index handling is needed here.
|
It's because `i` is not yet defined, just like the error message says.
In this line:
```
for i in idf.index[i][0]:
```
You are telling the Python interpreter to iterate over all the values yielded by the list returned by the expression `idf.index[i][0]`, but you have not yet defined what `i` is (although you are also attempting to assign each item in the list to the variable `i`).
The way the Python `for ... in ...` loop works is that it takes the rightmost component and asks its iterator for the `next` item. It then assigns the value yielded by that call to the variable name provided on the left-hand side.
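To make the distinction concrete, the loop variable has to be bound by the loop itself (or defined beforehand); for example (illustrative data):

```
index = [("12/28/2012", "1:30", 10.0), ("12/28/2012", "3:00", 12.0)]

# Wrong: `i` is used before anything has ever been assigned to it.
# for i in index[i][0]: ...

# Right: let the loop bind the position and the element itself.
for i, entry in enumerate(index):
    print(i, entry[0])  # prints the row number and its date
```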
|
14,075,337
|
I have a csv file with date, time, price, mag, signal.
62035 rows; there are 42 times of day associated with each unique date in the file.
For each date, when there is an 'S' in the signal column, append the corresponding price at the time the 'S' occurred. Below is the attempt.
```
from pandas import *
from numpy import *
from io import *
from os import *
from sys import *
DF1 = read_csv('___.csv')
idf=DF1.set_index(['date','time','price'],inplace=True)
sStore=[]
for i in idf.index[i][0]:
sStore.append([idf.index[j][2] for j in idf[j][1] if idf['signal']=='S'])
sStore.head()
```
```
Traceback (most recent call last)
<ipython-input-7-8769220929e4> in <module>()
1 sStore=[]
2
----> 3 for time in idf.index[i][0]:
4
5 sStore.append([idf.index[j][2] for j in idf[j][1] if idf['signal']=='S'])
NameError: name 'i' is not defined
```
I do not understand why the i index is not permitted here. Thanks.
I also think it's strange that idf.index.levels[0] will show the dates "not parsed", as they are in the file, but out of order, despite parse\_date=True being passed as an argument to set\_index.
I bring this up since I was thinking of sidestepping the problem with something like:
```
for i in idf.index.levels[0]:
sStore.append([idf.index[j][2] for j in idf.index.levels[1] if idf['signal']=='S'])
sStore.head()
```
**My edit 12/30/2012 based on DSM's comment below:**
I would like to use your idea to get the P&L, as I commented below. Where if S!=B, for any given date, we difference using the closing time, 1620.
```
v=[df["signal"]=="S"]
t=[df["time"]=="1620"]
u=[df["signal"]!="S"]
df["price"][[v and (u and t)]]
```
That is, "give me the price at 1620; (even when it doesn't give a "sell signal", S) so that I can diff. with the "extra B's"--for the special case where B>S. This ignores the symmetric concern (where S>B) but for now I want to understand this logical issue.
On traceback, this expression gives:
```
ValueError: boolean index array should have 1 dimension
```
Note that in order to invoke df["time"] **I do not set\_index here**. Trying the union operator | gives:
```
TypeError: unsupported operand type(s) for |: 'list' and 'list'
```
**Looking at Max Fellows's approach**,
@Max Fellows
The point is to close out the positions at the end of the day; so we need to capture the price at the close to "unload" all those B, S which were accumulated but didn't net each other out.
If I say:
```
filterFunc1 = lambda row: row["signal"] == "S" and ([row["signal"] != "S"][row["price"]=="1620"])
filterFunc2 =lambda row: ([row["price"]=="1620"][row["signal"] != "S"])
filterFunc=filterFunc1 and filterFunc2
filteredData = itertools.ifilter(filterFunc, reader)
```
On traceback:
```
IndexError: list index out of range
```
|
2012/12/28
|
[
"https://Stackoverflow.com/questions/14075337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1374969/"
] |
This is what I think you're trying to accomplish based on your edit: for every date in your CSV file, group the date along with a list of prices for each item with a signal of "S".
You didn't include any sample data in your question, so I made a test one that I hope matches the format you described:
```
12/28/2012,1:30,10.00,"foo","S"
12/28/2012,2:15,11.00,"bar","N"
12/28/2012,3:00,12.00,"baz","S"
12/28/2012,4:45,13.00,"fibble","N"
12/28/2012,5:30,14.00,"whatsit","S"
12/28/2012,6:15,15.00,"bobs","N"
12/28/2012,7:00,16.00,"widgets","S"
12/28/2012,7:45,17.00,"weevils","N"
12/28/2012,8:30,18.00,"badger","S"
12/28/2012,9:15,19.00,"moose","S"
11/29/2012,1:30,10.00,"foo","N"
11/29/2012,2:15,11.00,"bar","N"
11/29/2012,3:00,12.00,"baz","S"
11/29/2012,4:45,13.00,"fibble","N"
11/29/2012,5:30,14.00,"whatsit","N"
11/29/2012,6:15,15.00,"bobs","N"
11/29/2012,7:00,16.00,"widgets","S"
11/29/2012,7:45,17.00,"weevils","N"
11/29/2012,8:30,18.00,"badger","N"
11/29/2012,9:15,19.00,"moose","N"
12/29/2012,1:30,10.00,"foo","N"
12/29/2012,2:15,11.00,"bar","N"
12/29/2012,3:00,12.00,"baz","S"
12/29/2012,4:45,13.00,"fibble","N"
12/29/2012,5:30,14.00,"whatsit","N"
12/29/2012,6:15,15.00,"bobs","N"
12/29/2012,7:00,16.00,"widgets","S"
12/29/2012,7:45,17.00,"weevils","N"
12/29/2012,8:30,18.00,"badger","N"
12/29/2012,9:15,19.00,"moose","N"
8/9/2008,1:30,10.00,"foo","N"
8/9/2008,2:15,11.00,"bar","N"
8/9/2008,3:00,12.00,"baz","S"
8/9/2008,4:45,13.00,"fibble","N"
8/9/2008,5:30,14.00,"whatsit","N"
8/9/2008,6:15,15.00,"bobs","N"
8/9/2008,7:00,16.00,"widgets","S"
8/9/2008,7:45,17.00,"weevils","N"
8/9/2008,8:30,18.00,"badger","N"
8/9/2008,9:15,19.00,"moose","N"
```
And here's a method using Python 2.7 and built-in libraries to group it in the way it sounds like you want:
```
import csv
import itertools
import time
from collections import OrderedDict
with open("sample.csv", "r") as file:
reader = csv.DictReader(file,
fieldnames=["date", "time", "price", "mag", "signal"])
# Reduce the size of the data set by filtering out the non-"S" rows.
filterFunc = lambda row: row["signal"] == "S"
filteredData = itertools.ifilter(filterFunc, reader)
# Sort by date so we can use the groupby function.
dateKeyFunc = lambda row: time.strptime(row["date"], r"%m/%d/%Y")
sortedData = sorted(filteredData, key=dateKeyFunc)
# Group by date: create a new dictionary of date to a list of prices.
datePrices = OrderedDict((date, [row["price"] for row in rows])
for date, rows
in itertools.groupby(sortedData, dateKeyFunc))
for date, prices in datePrices.iteritems():
print "{0}: {1}".format(time.strftime(r"%m/%d/%Y", date),
", ".join(str(price) for price in prices))
>>> 08/09/2008: 12.00, 16.00
>>> 11/29/2012: 12.00, 16.00
>>> 12/28/2012: 10.00, 12.00, 14.00, 16.00, 18.00, 19.00
>>> 12/29/2012: 12.00, 16.00
```
The type conversions are up to you, since you may be using other libraries to do your CSV reading, but that should hopefully get you started -- and take careful note of @DSM's comment about import \*.
|
It's because `i` is not yet defined, just like the error message says.
In this line:
```
for i in idf.index[i][0]:
```
You are telling the Python interpreter to iterate over all the values yielded by the list returned by the expression `idf.index[i][0]`, but you have not yet defined what `i` is (although you are also attempting to assign each item in the list to the variable `i`).
The way the Python `for ... in ...` loop works is that it takes the rightmost component and asks its iterator for the `next` item. It then assigns the value yielded by that call to the variable name provided on the left-hand side.
|
14,075,337
|
I have a csv file with date, time, price, mag, signal.
62035 rows; there are 42 times of day associated with each unique date in the file.
For each date, when there is an 'S' in the signal column, append the corresponding price at the time the 'S' occurred. Below is the attempt.
```
from pandas import *
from numpy import *
from io import *
from os import *
from sys import *
DF1 = read_csv('___.csv')
idf=DF1.set_index(['date','time','price'],inplace=True)
sStore=[]
for i in idf.index[i][0]:
sStore.append([idf.index[j][2] for j in idf[j][1] if idf['signal']=='S'])
sStore.head()
```
```
Traceback (most recent call last)
<ipython-input-7-8769220929e4> in <module>()
1 sStore=[]
2
----> 3 for time in idf.index[i][0]:
4
5 sStore.append([idf.index[j][2] for j in idf[j][1] if idf['signal']=='S'])
NameError: name 'i' is not defined
```
I do not understand why the i index is not permitted here. Thanks.
I also think it's strange that idf.index.levels[0] will show the dates "not parsed", as they are in the file, but out of order, despite parse\_date=True being passed as an argument to set\_index.
I bring this up since I was thinking of sidestepping the problem with something like:
```
for i in idf.index.levels[0]:
sStore.append([idf.index[j][2] for j in idf.index.levels[1] if idf['signal']=='S'])
sStore.head()
```
**My edit 12/30/2012 based on DSM's comment below:**
I would like to use your idea to get the P&L, as I commented below. Where if S!=B, for any given date, we difference using the closing time, 1620.
```
v=[df["signal"]=="S"]
t=[df["time"]=="1620"]
u=[df["signal"]!="S"]
df["price"][[v and (u and t)]]
```
That is, "give me the price at 1620; (even when it doesn't give a "sell signal", S) so that I can diff. with the "extra B's"--for the special case where B>S. This ignores the symmetric concern (where S>B) but for now I want to understand this logical issue.
On traceback, this expression gives:
```
ValueError: boolean index array should have 1 dimension
```
Note that in order to invoke df["time"] **I do not set\_index here**. Trying the union operator | gives:
```
TypeError: unsupported operand type(s) for |: 'list' and 'list'
```
**Looking at Max Fellows's approach**,
@Max Fellows
The point is to close out the positions at the end of the day; so we need to capture the price at the close to "unload" all those B, S which were accumulated but didn't net each other out.
If I say:
```
filterFunc1 = lambda row: row["signal"] == "S" and ([row["signal"] != "S"][row["price"]=="1620"])
filterFunc2 =lambda row: ([row["price"]=="1620"][row["signal"] != "S"])
filterFunc=filterFunc1 and filterFunc2
filteredData = itertools.ifilter(filterFunc, reader)
```
On traceback:
```
IndexError: list index out of range
```
|
2012/12/28
|
[
"https://Stackoverflow.com/questions/14075337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1374969/"
] |
Using @Max Fellows' handy example data, we can have a look at it in `pandas`. [BTW, you should always try to provide a short, self-contained, correct example (see [here](http://sscce.org/) for more details), so that the people trying to help you don't have to spend time coming up with one.]
First, `import pandas as pd`. Then:
```
In [23]: df = pd.read_csv("sample.csv", names="date time price mag signal".split())
In [24]: df.set_index(["date", "time"], inplace=True)
```
which gives me
```
In [25]: df
Out[25]:
price mag signal
date time
12/28/2012 1:30 10 foo S
2:15 11 bar N
3:00 12 baz S
4:45 13 fibble N
5:30 14 whatsit S
6:15 15 bobs N
7:00 16 widgets S
7:45 17 weevils N
8:30 18 badger S
9:15 19 moose S
11/29/2012 1:30 10 foo N
2:15 11 bar N
3:00 12 baz S
4:45 13 fibble N
5:30 14 whatsit N
6:15 15 bobs N
7:00 16 widgets S
7:45 17 weevils N
8:30 18 badger N
9:15 19 moose N
[etc.]
```
We can see which rows have a signal of `S` easily:
```
In [26]: df["signal"] == "S"
Out[26]:
date time
12/28/2012 1:30 True
2:15 False
3:00 True
4:45 False
5:30 True
6:15 False
[etc..]
```
and we can select using this too:
```
In [27]: df["price"][df["signal"] == "S"]
Out[27]:
date time
12/28/2012 1:30 10
3:00 12
5:30 14
7:00 16
8:30 18
9:15 19
11/29/2012 3:00 12
7:00 16
12/29/2012 3:00 12
7:00 16
8/9/2008 3:00 12
7:00 16
Name: price
```
This is a `DataFrame` with every date, time, and price where there's an `S`. And if you simply want a list:
```
In [28]: list(df["price"][df["signal"] == "S"])
Out[28]: [10.0, 12.0, 14.0, 16.0, 18.0, 19.0, 12.0, 16.0, 12.0, 16.0, 12.0, 16.0]
```
**Update**:
`v=[df["signal"]=="S"]` makes `v` a Python `list` containing a `Series`. That's not what you want. `df["price"][[v and (u and t)]]` doesn't make much sense to me either --: `v` and `u` are mutually exclusive, so if you and them together, you'll get nothing. For these logical vector ops you can use `&` and `|` instead of `and` and `or`. Using the reference data again:
```
In [85]: import pandas as pd
In [86]: df = pd.read_csv("sample.csv", names="date time price mag signal".split())
In [87]: v=df["signal"]=="S"
In [88]: t=df["time"]=="4:45"
In [89]: u=df["signal"]!="S"
In [90]: df[t]
Out[90]:
date time price mag signal
3 12/28/2012 4:45 13 fibble N
13 11/29/2012 4:45 13 fibble N
23 12/29/2012 4:45 13 fibble N
33 8/9/2008 4:45 13 fibble N
In [91]: df["price"][t]
Out[91]:
3 13
13 13
23 13
33 13
Name: price
In [92]: df["price"][v | (u & t)]
Out[92]:
0 10
2 12
3 13
4 14
6 16
8 18
9 19
12 12
13 13
16 16
22 12
23 13
26 16
32 12
33 13
36 16
Name: price
```
[Note: this question has now become too long and meandering. I suggest spending some time working through the examples in the `pandas` documentation at the console to get a feel for it.]
|
It's because `i` is not yet defined, just like the error message says.
In this line:
```
for i in idf.index[i][0]:
```
You are telling the Python interpreter to iterate over all the values yielded by the list returned by the expression `idf.index[i][0]`, but you have not yet defined what `i` is (although you are also attempting to assign each item in the list to the variable `i`).
The way the Python `for ... in ...` loop works is that it takes the rightmost component and asks its iterator for the `next` item. It then assigns the value yielded by that call to the variable name provided on the left-hand side.
|
14,075,337
|
I have a csv file with date, time, price, mag, signal.
62035 rows; there are 42 times of day associated with each unique date in the file.
For each date, when there is an 'S' in the signal column, append the corresponding price at the time the 'S' occurred. Below is the attempt.
```
from pandas import *
from numpy import *
from io import *
from os import *
from sys import *
DF1 = read_csv('___.csv')
idf=DF1.set_index(['date','time','price'],inplace=True)
sStore=[]
for i in idf.index[i][0]:
sStore.append([idf.index[j][2] for j in idf[j][1] if idf['signal']=='S'])
sStore.head()
```
```
Traceback (most recent call last)
<ipython-input-7-8769220929e4> in <module>()
1 sStore=[]
2
----> 3 for time in idf.index[i][0]:
4
5 sStore.append([idf.index[j][2] for j in idf[j][1] if idf['signal']=='S'])
NameError: name 'i' is not defined
```
I do not understand why the i index is not permitted here. Thanks.
I also think it's strange that idf.index.levels[0] will show the dates "not parsed", as they are in the file, but out of order, despite parse\_date=True being passed as an argument to set\_index.
I bring this up since I was thinking of sidestepping the problem with something like:
```
for i in idf.index.levels[0]:
sStore.append([idf.index[j][2] for j in idf.index.levels[1] if idf['signal']=='S'])
sStore.head()
```
**My edit 12/30/2012 based on DSM's comment below:**
I would like to use your idea to get the P&L, as I commented below. Where if S!=B, for any given date, we difference using the closing time, 1620.
```
v=[df["signal"]=="S"]
t=[df["time"]=="1620"]
u=[df["signal"]!="S"]
df["price"][[v and (u and t)]]
```
That is, "give me the price at 1620; (even when it doesn't give a "sell signal", S) so that I can diff. with the "extra B's"--for the special case where B>S. This ignores the symmetric concern (where S>B) but for now I want to understand this logical issue.
On traceback, this expression gives:
```
ValueError: boolean index array should have 1 dimension
```
Note that in order to invoke df["time"] **I do not set\_index here**. Trying the union operator | gives:
```
TypeError: unsupported operand type(s) for |: 'list' and 'list'
```
**Looking at Max Fellows's approach**,
@Max Fellows
The point is to close out the positions at the end of the day; so we need to capture the price at the close to "unload" all those B, S which were accumulated but didn't net each other out.
If I say:
```
filterFunc1 = lambda row: row["signal"] == "S" and ([row["signal"] != "S"][row["price"]=="1620"])
filterFunc2 =lambda row: ([row["price"]=="1620"][row["signal"] != "S"])
filterFunc=filterFunc1 and filterFunc2
filteredData = itertools.ifilter(filterFunc, reader)
```
On traceback:
```
IndexError: list index out of range
```
|
2012/12/28
|
[
"https://Stackoverflow.com/questions/14075337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1374969/"
] |
Using @Max Fellows' handy example data, we can have a look at it in `pandas`. [BTW, you should always try to provide a short, self-contained, correct example (see [here](http://sscce.org/) for more details), so that the people trying to help you don't have to spend time coming up with one.]
First, `import pandas as pd`. Then:
```
In [23]: df = pd.read_csv("sample.csv", names="date time price mag signal".split())
In [24]: df.set_index(["date", "time"], inplace=True)
```
which gives me
```
In [25]: df
Out[25]:
price mag signal
date time
12/28/2012 1:30 10 foo S
2:15 11 bar N
3:00 12 baz S
4:45 13 fibble N
5:30 14 whatsit S
6:15 15 bobs N
7:00 16 widgets S
7:45 17 weevils N
8:30 18 badger S
9:15 19 moose S
11/29/2012 1:30 10 foo N
2:15 11 bar N
3:00 12 baz S
4:45 13 fibble N
5:30 14 whatsit N
6:15 15 bobs N
7:00 16 widgets S
7:45 17 weevils N
8:30 18 badger N
9:15 19 moose N
[etc.]
```
We can see which rows have a signal of `S` easily:
```
In [26]: df["signal"] == "S"
Out[26]:
date time
12/28/2012 1:30 True
2:15 False
3:00 True
4:45 False
5:30 True
6:15 False
[etc..]
```
and we can select using this too:
```
In [27]: df["price"][df["signal"] == "S"]
Out[27]:
date time
12/28/2012 1:30 10
3:00 12
5:30 14
7:00 16
8:30 18
9:15 19
11/29/2012 3:00 12
7:00 16
12/29/2012 3:00 12
7:00 16
8/9/2008 3:00 12
7:00 16
Name: price
```
This is a `DataFrame` with every date, time, and price where there's an `S`. And if you simply want a list:
```
In [28]: list(df["price"][df["signal"] == "S"])
Out[28]: [10.0, 12.0, 14.0, 16.0, 18.0, 19.0, 12.0, 16.0, 12.0, 16.0, 12.0, 16.0]
```
**Update**:
`v=[df["signal"]=="S"]` makes `v` a Python `list` containing a `Series`. That's not what you want. `df["price"][[v and (u and t)]]` doesn't make much sense to me either --: `v` and `u` are mutually exclusive, so if you and them together, you'll get nothing. For these logical vector ops you can use `&` and `|` instead of `and` and `or`. Using the reference data again:
```
In [85]: import pandas as pd
In [86]: df = pd.read_csv("sample.csv", names="date time price mag signal".split())
In [87]: v=df["signal"]=="S"
In [88]: t=df["time"]=="4:45"
In [89]: u=df["signal"]!="S"
In [90]: df[t]
Out[90]:
date time price mag signal
3 12/28/2012 4:45 13 fibble N
13 11/29/2012 4:45 13 fibble N
23 12/29/2012 4:45 13 fibble N
33 8/9/2008 4:45 13 fibble N
In [91]: df["price"][t]
Out[91]:
3 13
13 13
23 13
33 13
Name: price
In [92]: df["price"][v | (u & t)]
Out[92]:
0 10
2 12
3 13
4 14
6 16
8 18
9 19
12 12
13 13
16 16
22 12
23 13
26 16
32 12
33 13
36 16
Name: price
```
[Note: this question has now become too long and meandering. I suggest spending some time working through the examples in the `pandas` documentation at the console to get a feel for it.]
|
Try something like this:
```
for i in range(len(idf.index)):
value = idf.index[i][0]
```
Same thing for the iteration with the `j` index variable. As has been pointed out, you can't reference the iteration index in the expression being iterated over; besides, you need to perform a very specific iteration (traversing a column in a matrix), and Python's default iterators won't handle this "out of the box", so custom index handling is needed here.
|