| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | stringlengths | 54 | 37.8k |
| date | stringlengths | 10 | 10 |
| metadata | listlengths | 3 | 3 |
| response\_j | stringlengths | 29 | 22k |
| response\_k | stringlengths | 26 | 13.4k |
| \_\_index\_level\_0\_\_ | int64 | 0 | 17.8k |
2,575,219
The Python language has a well-known feature named [interactive mode](http://docs.python.org/tutorial/interpreter.html#interactive-mode), where the interpreter reads commands directly from a tty. I typically use this mode to test whether a given module is on the module search path, or to play around and test some snippets. Do you know any other programming languages that have an interactive mode? If you can, give the name of the language and, where possible, a web reference. If a language is already mentioned, you can just vote for it.
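For instance, a session like the following is enough to check whether a module is importable at all (any standard-library module works as the guinea pig here):

```python
$ python
>>> import json   # raises ImportError if the module is not on the search path
>>> json.dumps([1, 2])
'[1, 2]'
```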
2010/04/04
[ "https://Stackoverflow.com/questions/2575219", "https://Stackoverflow.com", "https://Stackoverflow.com/users/130929/" ]
[Perl](http://www.perl.org/) - interesting that there are so many answers before this
You can do almost-interactive C# and VB.NET using [LINQPad](http://www.linqpad.net/)
4,927
59,023,371
I am trying to have a form submit to a Python script using Flask. The form is in my index.html:

```
<form action="{{ url_for('/predict') }}" method="POST">
  <p>Enter Mileage</p>
  <input type="text" name="mileage">
  <p>Enter Year</p>
  <input type="text" name="year">
  <input type="submit" value="Predict">
</form>
```

Here is my Flask page (my\_flask.py):

```
@app.route('/')
def index():
    return render_template('index.html')

@app.route('/predict', methods=("POST", "GET"))
def predict():
    df = pd.read_csv("carprices.csv")
    X = df[['Mileage','Year']]
    y = df['Sell Price($)']
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    clf = LinearRegression()
    clf.fit(X_train, y_train)
    if request.method == 'POST':
        mileage = request.form['mileage']
        year = request.form['year']
        data = [[mileage, year]]
        price = clf.predict(data)
    return render_template('prediction.html', prediction=price)
```

But when I go to my index page I get an internal server error because of the `{{ url_for('/predict') }}`. Why would this be happening?
2019/11/24
[ "https://Stackoverflow.com/questions/59023371", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4671619/" ]
Instead of `url_for('/predict')`, drop the leading slash and use `url_for('predict')`. `url_for(...)` takes the endpoint name (by default, the name of the view function), not the URL path.
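With that change, the asker's form tag becomes:

```
<form action="{{ url_for('predict') }}" method="POST">
```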
I was not importing `url_for`:

```
from flask import Flask, request, render_template, url_for
```
4,937
62,075,847
I tried to create a polygon shapefile in QGIS and read it in Python with Shapely. An example looks like this:

```
import fiona
from shapely.geometry import shape
multipolys = fiona.open(somepath)
multi = multipolys[0]
coord = shape(multi['geometry'])
```

This fails with the error `GEOSGeom_createLinearRing_r returned a NULL pointer`. I checked whether the polygon is valid in QGIS, and no error was reported. Actually, it does not work even for a simple triangle generated in QGIS. Does anyone know how to solve it? Thank you.
2020/05/28
[ "https://Stackoverflow.com/questions/62075847", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13242482/" ]
I had a similar problem, but with `shapely.geometry.LineString`. The error I got was:

```
ValueError: GEOSGeom_createLineString_r returned a NULL pointer
```

I don't know the reason behind this message, but there are two ways to avoid it:

1. Disable the speedups:

```
...
from shapely import speedups
...
speedups.disable()
```

Import the `speedups` module and disable the speedups. This needs to be done because they are enabled by default. From Shapely's speedups init method:

```
"""
The shapely.speedups module contains performance enhancements written in C.
They are automaticaly installed when Python has access to a compiler and
GEOS development headers during installation, and are enabled by default.
"""
```

If you disable them, you won't get the NULL pointer exception, because you then use the usual Python implementation rather than the C implementation.

2. If you call Python in a command shell, type:

```
from shapely.geometry import shape
```

This loads the shape you need. Then load your program:

```
import yourscript
```

Then run your script:

```
yourscript.main()
```

This should also work. I think in this variant the C modules get properly loaded, so you don't get the NULL pointer exception. But this only works if you open a Python terminal by hand and import the needed shape by hand. If you import the shape from within your program, you will run into the same error again.
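A consolidated sketch of option 1 (the shapefile path here is hypothetical; the point is to disable the speedups before any geometry is constructed):

```python
from shapely import speedups
speedups.disable()  # fall back to the pure-Python implementation

import fiona
from shapely.geometry import shape

with fiona.open("polygons.shp") as src:  # hypothetical shapefile path
    first = next(iter(src))
    geom = shape(first["geometry"])
print(geom.is_valid)
```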
I faced the same issue, and this worked for me:

```
import shapely
shapely.speedups.disable()
```
4,938
62,479,608
What's the difference? The [docs](https://docs.python.org/3.7/library/types.html#types.FunctionType) show nothing on this, and their `help()` output is identical. Is there an object for which `isinstance` will fail with one but not the other?
2020/06/19
[ "https://Stackoverflow.com/questions/62479608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10133797/" ]
Back in 1994 I wasn't sure that we would always be using the same implementation type for lambda and def. That's all there is to it. It would be a pain to remove it, so we're just leaving it (it's only one line). If you want to add a note to the docs, feel free to submit a PR.
See [`cpython/Lib/types.py`](https://github.com/python/cpython/blob/a041e116db5f1e78222cbf2c22aae96457372680/Lib/types.py#L11-L13):

```
def _f(): pass
FunctionType = type(_f)
LambdaType = type(lambda: None)  # Same as FunctionType
```
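Since the two names are bound to the same type object, `isinstance` can never distinguish them:

```python
import types

def f():
    pass

print(types.FunctionType is types.LambdaType)         # True
print(isinstance(f, types.LambdaType))                # True
print(isinstance(lambda: None, types.FunctionType))   # True
```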
4,940
34,247,930
I have installed Python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install `pip`. After the installation, I wanted to check whether pip was working, so I typed `pip` on the command line and hit Enter, but it did not respond. The cursor blinks but nothing is displayed. Please help. Regards.
2015/12/13
[ "https://Stackoverflow.com/questions/34247930", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4542278/" ]
1. Go to the Windows cmd prompt.
2. Go to the Python directory.
3. Then type `python -m pip install package-name`.
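For example (the directory and package name here are illustrative; adjust them to your installation):

```
C:\> cd C:\Python35
C:\Python35> python -m pip install requests
```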
I had the same problem with version 3.5.2. Have you tried `py.exe -m pip install package-name`? This worked for me.
4,941
66,873,774
I'm really new to Python and pandas, so would you please help me answer this seemingly simple question? I already have an Excel file containing my data, and now I want to create an array containing those data in Python. For example, I have data in Excel that look like this: [![enter image description here](https://i.stack.imgur.com/xgcx9.jpg)](https://i.stack.imgur.com/xgcx9.jpg) From those data I want to create a matrix of the form shown in the Python code below: [![enter image description here](https://i.stack.imgur.com/zOSpc.jpg)](https://i.stack.imgur.com/zOSpc.jpg) Actually, my data is much longer, so is there any way I can take advantage of pandas to put the data from my Excel file into a matrix in Python, similar to the simple example above? Thank you!
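A minimal sketch of the usual pandas route for this (the file name is hypothetical, and `.to_numpy()` assumes a reasonably recent pandas; older versions use `.values` instead):

```python
import pandas as pd

df = pd.read_excel("data.xlsx")  # hypothetical file name
matrix = df.to_numpy()           # 2-D NumPy array holding the sheet's values
print(matrix)
```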
2021/03/30
[ "https://Stackoverflow.com/questions/66873774", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14533186/" ]
This is quite simple; if you take a look at your code you should be able to follow this sequence of operations.

1. The widget is created. **No action.** *At this point userIconData is null.*
2. `initState` is called. **async http call is initiated.** *userIconData == null*
3. `build` is called. **build occurs, throws error.** *userIconData == null*
4. http call returns. **userIconData is set.** *userIconData == your image*

Because `setState` is never called, your build function won't run again. If you did call it, this would happen (but you'd still have had the exception earlier):

5. `build` is called. **userIconData is set.** *userIconData == your image*

The key here is understanding that asynchronous calls (anything that returns a future and optionally uses `async` and `await`) do not return immediately, but rather at some later point, and that you can't rely on them having set what you need in the meantime. If you had previously tried doing this with an image loaded from disk and it worked, that's only because Flutter does some tricks that are only possible because loading from disk is synchronous.

Here are two options for how you can write your code instead.

```dart
class _FuncState extends State<Func> {
  Uint8List? userIconData;

  // if you're using any data from the `func` widget, use this instead
  // of initState in case the widget changes.
  // You could also check the old vs new and if there has been no change
  // that would need a reload, not do the reload.
  @override
  void didUpdateWidget(Func oldWidget) {
    super.didUpdateWidget(oldWidget);
    updateUI();
  }

  void updateUI() async {
    await getUserIconData(widget.role, widget.id, widget.session).then((value) {
      // this ensures that a rebuild happens
      setState(() => userIconData = value);
    });
  }

  @override
  Widget build(BuildContext context) {
    return SafeArea(
      child: Scaffold(
        body: Container(
          // this only uses your circle avatar if the image is loaded, otherwise
          // show a loading indicator.
          child: userIconData != null
              ? CircleAvatar(
                  backgroundImage: Image.memory(userIconData!).image,
                  maxRadius: 20,
                )
              : CircularProgressIndicator(),
        ),
      ),
    );
  }
}
```

Another way to do the same thing is to use a FutureBuilder.

```dart
class _FuncState extends State<Func> {
  // using late isn't entirely safe, but we can trust
  // flutter to always call didUpdateWidget before
  // build so this will work.
  late Future<Uint8List> userIconDataFuture;

  @override
  void didUpdateWidget(Func oldWidget) {
    super.didUpdateWidget(oldWidget);
    userIconDataFuture = getUserIconData(widget.role, widget.id, widget.session);
  }

  @override
  Widget build(BuildContext context) {
    return SafeArea(
      child: Scaffold(
        body: Container(
          child: FutureBuilder(
            future: userIconDataFuture,
            builder: (BuildContext context, AsyncSnapshot<Uint8List> snapshot) {
              if (snapshot.hasData) {
                return CircleAvatar(
                    backgroundImage: Image.memory(snapshot.data!).image,
                    maxRadius: 20);
              } else {
                return CircularProgressIndicator();
              }
            },
          ),
        ),
      ),
    );
  }
}
```

Note that the loading indicator is just one option; I'd actually recommend having a hard-coded default for your avatar (i.e. a grey 'user' image) that gets switched out when the image is loaded. Note that I've used null-safe code here as that will make this answer have better longevity, but to switch back to non-null-safe code you can just remove the extraneous `?`, `!` and `late` in the code.
The error message is pretty clear to me. `userIconData` is `null` when you pass it to the `Image.memory` constructor. Either use a [FutureBuilder](https://api.flutter.dev/flutter/widgets/FutureBuilder-class.html), or a condition that checks whether `userIconData` is null before rendering the image, manually showing a loading indicator if it is, or something along those lines. Also, you'd need to actually set the state to trigger a re-render. I'd go with the former, though.
4,951
54,392,016
I have a Python script where I was experimenting with a minimax AI, so I tried to make a tic-tac-toe game. I had a recursive (self-calling) function to calculate the values; it used a variable called `alist` (not the one below), which was given to it by the previous call. It would then save it as a new list and modify it. This worked, but when it came to backtracking and viewing all the other possibilities, the original `alist` variable had been changed by the call that followed it, e.g.:

```
import sys
sys.setrecursionlimit(1000000)

def somefunction(alist):
    newlist = alist
    newlist[0][0] = newlist[0][0] + 1
    if newlist[0][0] < 10:
        somefunction(newlist)
    print(newlist)

thelist = [[0, 0], [0, 0]]
somefunction(thelist)
```

It may be that this is difficult to solve, but if someone could help me it would be greatly appreciated.
2019/01/27
[ "https://Stackoverflow.com/questions/54392016", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10976004/" ]
`newlist = alist` does not make a copy of the list. You just have two variable names for the same list. There are several ways to actually copy a list. I usually do this:

```
newlist = alist[:]
```

Note, however, that this is a shallow copy: it makes a new outer list whose elements are the *same* inner objects. To make a deep copy of the list:

```
import copy
newlist = copy.deepcopy(alist)
```
You probably want to `deepcopy` your list, as it contains other lists:

```
from copy import deepcopy
```

And then change:

```
newlist = alist
```

to:

```
newlist = deepcopy(alist)
```
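A quick demonstration of why the shallow copy is not enough for the nested list in the question:

```python
import copy

alist = [[0, 0], [0, 0]]
shallow = alist[:]             # new outer list, same inner lists
deep = copy.deepcopy(alist)    # fully independent copy

shallow[0][0] = 99
print(alist[0][0])  # 99 -- the inner list is shared with the shallow copy

deep[1][1] = 42
print(alist[1][1])  # 0 -- the deep copy does not touch the original
```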
4,952
70,163,997
I have a folder of Python scripts. I want to call each of them and pass in a DB object. This is easily doable, but I would like to do it dynamically, that is, without knowing the name of each script beforehand. Is this possible? Let's say all scripts are in the "scripts" subfolder. My caller file:

```
#!/usr/bin/python
import scripts.Script1 as MyScript1
import scripts.Script2 as MyScript2
import pandas as pd

_scripts = {
    'MyScript1': MyScript1,
    'MyScript2': MyScript2,
}

def Invoke(DB, script, parameters):
    if script in _scripts:
        curScript = _scripts[script]
        tables = GetTable(DB, curScript)
        result = curScript.Invoke(tables)
        return result

def GetTable(DB, script):
    tables = script.TableToLoad()
    dataframes = {}
    if not isinstance(tables, list):
        tables = [tables]
    for table in tables:
        dataframes[table] = DB.LoadDataframe(table)
    return dataframes
```

The script file:

```
def TableToLoad():
    return ['MyDBTable1']

def Invoke(tables):
    df = tables['MyDBTable1']
    # do useful work here
```

Is something like this dynamically loadable? That is, can the `_scripts` variable be populated dynamically?
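A sketch of one way to build `_scripts` at runtime with the standard library (this assumes `scripts` is an importable package, i.e. it has an `__init__.py`):

```python
import importlib
import pkgutil

import scripts  # the package holding the script modules

_scripts = {
    name: importlib.import_module("scripts." + name)
    for _, name, _ in pkgutil.iter_modules(scripts.__path__)
}
```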
2021/11/30
[ "https://Stackoverflow.com/questions/70163997", "https://Stackoverflow.com", "https://Stackoverflow.com/users/468384/" ]
If we work backwards, you'll need your DataFrame to have the addenda information in a single row before using the `.to_dict` operation:

| id\_number | name | amount | addenda |
| --- | --- | --- | --- |
| 1234 | ABCD | $100 | [{payment\_related\_info: Car-wash-$30, payment\_related\_info: Maintenance-$70}] |

To get here, you can perform a `groupby` on `id_number, name, amount`, then apply a function that collapses the strings in the `addenda` rows for that group into a list of dictionaries where each key is the string `'payment_related_info'`. This works as expected if you add more rows to your original `df` as well:

| id\_number | name | amount | addenda |
| --- | --- | --- | --- |
| 1234 | ABCD | $100 | Car-wash-$30 |
| 1234 | ABCD | $100 | Maintenance-$70 |
| 2345 | BCDE | $200 | Car-wash-$100 |
| 2345 | BCDE | $200 | Maintenance-$100 |

```
def collapse_row(x):
    addenda_list = x["addenda"].to_list()
    last_row = x.iloc[-1]
    last_row["addenda"] = [{'payment_related_info': v} for v in addenda_list]
    return last_row

grouped = df.groupby(["id_number", "name", "amount"]).apply(collapse_row).reset_index(drop=True)
grouped.to_dict(orient='records')
```

Result:

```
[
  {
    "id_number": 1234,
    "name": "ABCD",
    "amount": "$100",
    "addenda": [
      {"payment_related_info": "Car-wash-$30"},
      {"payment_related_info": "Maintenance-$70"}
    ]
  },
  {
    "id_number": 2345,
    "name": "BCDE",
    "amount": "$200",
    "addenda": [
      {"payment_related_info": "Car-wash-$100"},
      {"payment_related_info": "Maintenance-$100"}
    ]
  }
]
```
Just apply a `groupby` and aggregate by creating a DataFrame inside, like this:

```py
data = {
    "id_number": [1234, 1234],
    "name": ["ABCD", "ABCD"],
    "amount": ["$100", "$100"],
    "addenda": ["Car-wash-$30", "Maintenance-$70"]
}
df = pd.DataFrame(data=data)

df.groupby(by=["id_number", "name", "amount"]) \
    .agg(lambda col: pd.DataFrame(data=col) \
    .rename(columns={"addenda": "payment_related_info"})) \
    .reset_index() \
    .to_json(orient="records")
```

This returns exactly the result you want!
4,953
25,598,838
I'm really new to Python, so this is probably a really simple problem, but I honestly have no idea what I'm doing and I have spent hours trying to get this to work. I need the user to input a date (in string form) and then use this date to return some data. (The function `get_data_for_date` has already been created and works fine; currently I just call it manually in the console and enter the date.) The data then needs to be split when it is returned. Any help would be appreciated, even if you could just point me in the right direction.

```
dateStr = raw_input('Date? ')

def load_data(dateStr):
    def get_data_for_date(dateStr):
        text = data
        return data.split('\n')
```
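For what it's worth, a minimal sketch of what the asker seems to be after (this assumes `get_data_for_date` exists and returns a newline-separated string; it is Python 2, hence `raw_input`):

```python
def load_data(date_str):
    data = get_data_for_date(date_str)  # assumed to return one big string
    return data.split('\n')             # list with one entry per line

date_str = raw_input('Date? ')
lines = load_data(date_str)
```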
2014/09/01
[ "https://Stackoverflow.com/questions/25598838", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3995938/" ]
Try this sequence:

```
MYApplication.getInstance().clearApplicationData();
android.os.Process.killProcess(android.os.Process.myPid());

Intent intent1 = new Intent(Intent.ACTION_MAIN);
intent1.addCategory(Intent.CATEGORY_HOME);
intent1.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
startActivity(intent1);
finish();
```
Avoid `killProcess`. Try this code:

```
Intent startMain = new Intent(Intent.ACTION_MAIN);
startMain.addCategory(Intent.CATEGORY_HOME);
startMain.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
activity.startActivity(startMain);
System.exit(-1);
```
4,954
63,570,453
I plan on uninstalling and reinstalling Python to fix pip. I, however, have a lot of python files which I worked hard on and I really don't want to lose them. Would my Python files be okay if I uninstalled Python?
2020/08/25
[ "https://Stackoverflow.com/questions/63570453", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13931651/" ]
If you are using Linux and a distribution like Ubuntu, you will definitely break the OS. Don't do it. Moreover, there is no evidence that your installation is broken because of Python, and reinstalling will probably not solve your problem.
There's no harm I can see in overwriting a pip installation. So, just follow the [instructions](https://pip.pypa.io/en/stable/installing/) and let us know if you have further problems:

1. Download [get-pip.py](https://bootstrap.pypa.io/get-pip.py).
2. Run `python get-pip.py` and get on with the rest of your stuff.
4,956
14,068,042
Recently I installed OpenCV on my machine. It works well in Python (I checked with some example programs). But due to the lack of Python tutorials I decided to move to C. I just ran a hello-world program from <http://www.cs.iit.edu/~agam/cs512/lect-notes/opencv-intro/>. My program is:

```
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <cv.h>
#include <highgui.h>

int main(int argc, char *argv[])
{
  IplImage* img = 0;
  int height,width,step,channels;
  uchar *data;
  int i,j,k;

  if(argc<2){
    printf("Usage: main <image-file-name>\n\7");
    exit(0);
  }

  // load an image
  img=cvLoadImage(argv[1]);
  if(!img){
    printf("Could not load image file: %s\n",argv[1]);
    exit(0);
  }

  // get the image data
  height = img->height;
  width = img->width;
  step = img->widthStep;
  channels = img->nChannels;
  data = (uchar *)img->imageData;
  printf("Processing a %dx%d image with %d channels\n",height,width,channels);

  // create a window
  cvNamedWindow("mainWin", CV_WINDOW_AUTOSIZE);
  cvMoveWindow("mainWin", 100, 100);

  // invert the image
  for(i=0;i<height;i++)
    for(j=0;j<width;j++)
      for(k=0;k<channels;k++)
        data[i*step+j*channels+k]=255-data[i*step+j*channels+k];

  // show the image
  cvShowImage("mainWin", img );

  // wait for a key
  cvWaitKey(0);

  // release the image
  cvReleaseImage(&img );
  return 0;
}
```

While compiling, I first got the following error:

```
hello-world.c:4:16: fatal error: cv.h: No such file or directory
compilation terminated.
```

I tried to fix this error by compiling like this:

```
gcc -I/usr/lib/perl/5.12.4/CORE -o hello-world hello-world.c
```

But now the error is:

```
In file included from hello-world.c:4:0:
/usr/lib/perl/5.12.4/CORE/cv.h:14:5: error: expected specifier-qualifier-list before ‘_XPV_HEAD’
hello-world.c:5:21: fatal error: highgui.h: No such file or directory
compilation terminated.
```

Question: is this header not installed on my system? When I use the command `find /usr -name "highgui.h"` I don't find anything. If this header is not on my system, how do I install it? Please help me; I'm new to OpenCV.
2012/12/28
[ "https://Stackoverflow.com/questions/14068042", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1894272/" ]
First check if highgui.h exists on your machine:

```
sudo find /usr/include -name "highgui.h"
```

If you find it at a path such as "/usr/include/opencv/highgui.h", then use this in your C file:

```
#include <opencv/highgui.h>
```

Or, while compiling, you could add the include directory to the gcc line:

```
gcc -I/usr/include/opencv ...
```

but then your include line in the C file should become:

```
#include "highgui.h"
```

If your first command fails, that is, you don't "find" highgui.h on your machine, then clearly you are missing some package. To figure out that package's name, use the `apt-file` command:

```
sudo apt-file search highgui.h
```

On my machine, it gave me this:

```
libhighgui-dev: /usr/include/opencv/highgui.h
libhighgui-dev: /usr/include/opencv/highgui.hpp
```

If you don't have `apt-file`, install it first:

```
sudo apt-get install apt-file
```

So, now you know the package name; then issue:

```
sudo apt-get install libhighgui-dev
```

Once this is done, use the find command to see where exactly the headers have been installed, and then change your include path accordingly.
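If the OpenCV development packages ship a pkg-config file (they typically do on Ubuntu of that era), compiling with pkg-config sidesteps the include-path guessing entirely:

```
gcc hello-world.c -o hello-world $(pkg-config --cflags --libs opencv)
```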
I have the following headers in my project:

```
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/features2d/features2d.hpp>
```

The OpenCV version is 2.4.2.
4,964
64,983,755
Until now, my understanding has been that Python imports modules by path relative to the current directory, regardless of where the source file lives. For example:

```
bar
|-foo.py
|-foo1.py
```

So if we want to access `foo1.py` through `foo.py` from `bar`, I would think I need to do `from bar import foo1`. But that does not seem to work, which leads to the following issue: if we now have another dir, `examples`:

```
bar
|-foo.py
|-foo1.py
examples
|- getfoo.py
```

How can we access `foo` and `foo1` under `bar` from `getfoo.py` under `examples`? (My intuition would be to import `foo` through the scope of `bar.<foo>`, but it does not work.)
2020/11/24
[ "https://Stackoverflow.com/questions/64983755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9817556/" ]
Before explaining why, and how to make things work, let me show some correct code. Here is the directory tree (which follows yours):

```
.
├── bar
│   ├── foo1.py
│   └── foo.py
└── examples
    └── getfoo.py
```

and there is a variable named `var` in foo1.py and foo.py.

Question I:

> so if we want to access foo1.py through foo.py from bar, I would think I need to do from bar import foo1. But the former does not seem to work.

Answer I:

> In `foo.py`, you should change `from bar import foo1` to `from foo1 import xxx` (`xxx` being the things you want from `foo1.py`).

Question II:

> How can we access foo and foo1 under bar from getfoo.py under examples? (My common intuition would be, import foo from the scope of bar., but it does not work)

Answer II:

> You can't import modules like this without first adding a `sys path` entry, when you're running a file like `python getfoo.py`.
> If you want to do things like `from bar import foo`, you may need to `sys.path.append('path/to/bar')`, after which you can do the imports.

Explanation: this is about the Python interpreter's module search path, which you can read about under [python sys.path](https://docs.python.org/3/library/sys.html?highlight=sys%20path#sys.path). When we do `python file.py`, the interpreter creates a list containing all the directories it will use to search for modules when importing. You can see the list with `import sys; print(sys.path)`. The problem is that this list only contains the directory where file.py is located, plus some system paths belonging to Python itself. So, in the `bar` directory, you can access `foo1.py` through foo.py using `from foo1 import xxx`, or `from foo import xxx` in `foo1.py`. But you cannot import bar in `getfoo.py`, because the interpreter doesn't know where to find it. If you want to import such things, tell the interpreter where to look with `sys.path.append('path/to/add')`.
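A concrete sketch of that `sys.path` fix inside `examples/getfoo.py` (assuming `bar` is importable as a package, e.g. it has an `__init__.py` on Python 2, or you are on Python 3):

```python
# examples/getfoo.py
import os
import sys

# put the project root (the parent of both bar/ and examples/) on the path
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir))

from bar import foo, foo1
```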
Try importing foo1.py in getfoo.py using its path:

```
import ../bar/foo1.py
```

Or copy-paste foo1.py into examples and then call:

```
import foo1
```

Please check the syntax for "import ../bar/foo1.py".
4,965
67,045,619
I have a Python script that I run on Kubernetes. After the script's process ends, Kubernetes restarts the pod, and I don't want that. I tried to add a line of code to the Python script like this:

```
text = input("please press a key to exit")
```

but I get an EOF error, presumably because the container has no stdin attached in my Kubernetes setup. After that I tried to use `restartPolicy: Never`, but setting it fails with an error like this:

```
error validating data: ValidationError(Deployment.spec.template): unknown field \"restartPolicy\" in io.k8s.api.core.v1.PodTemplateSpec;
```

How can I achieve this? I just want this pod not to restart. The fix can be in the Python script or in the Kubernetes YAML file.
2021/04/11
[ "https://Stackoverflow.com/questions/67045619", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13338897/" ]
You get `unknown field \"restartPolicy\" in io.k8s.api.core.v1.PodTemplateSpec;` because you most probably messed up some indentation. Here is an example deployment with **incorrect indentation** of the `restartPolicy` field (it sits at the template level, hence the `PodTemplateSpec` error):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
    restartPolicy: Never # <-------
```

---

Here is a deployment with **correct indentation** (inside the pod spec):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      restartPolicy: Never # <-------
```

But this will result in an error:

```
kubectl apply -f deploy.yaml
The Deployment "nginx-deployment" is invalid: spec.template.spec.restartPolicy: Unsupported value: "Never": supported values: "Always"
```

Here is an explanation why: [restartpolicy-unsupported-value-never-supported-values-always](https://stackoverflow.com/questions/55169075/restartpolicy-unsupported-value-never-supported-values-always)

---

If you want to run a one-time pod, use a [k8s job](https://kubernetes.io/docs/concepts/workloads/controllers/job/) or use a pod directly:

```
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ngx
  name: ngx
spec:
  containers:
  - image: nginx
    name: ngx
  restartPolicy: Never
```
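For completeness, a minimal Job manifest along the lines the answer suggests (the image name is a placeholder):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-shot-script
spec:
  backoffLimit: 0          # do not retry the pod on failure
  template:
    spec:
      containers:
      - name: script
        image: my-python-image   # hypothetical image running the script
      restartPolicy: Never       # valid here: Jobs allow Never/OnFailure
```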
A PodSpec has a `restartPolicy` field with possible values Always, OnFailure, and Never; the default value is Always. Could you please try OnFailure? Since you have only one container, it should work.
4,966
67,025,052
As I am teaching myself Bash programming, I came across an interesting use case, where **I want to take a list of variables that exist in the environment and put them into an array. Then, I want to output a list of the variable names and their values, and store that output in an array, one entry per variable.**

I'm only about 2 weeks into Bash shell scripting in any "real" way, and I am educating myself on arrays. A common function in other programming languages is the ability to "zip" two arrays, e.g. [as is done in Python](https://realpython.com/python-zip-function/). Another common feature in any programming language is indirection, e.g. via [pointer indirection](https://en.wikipedia.org/wiki/Pointer_(computer_programming)), etc. This is largely academic, to teach myself through a somewhat challenging example, but I think this has widespread use, if for no other reason than debugging, keeping track of overall system state, etc.

**What I want is for the following input... :**

```
VAR_ONE="LIGHT RED"
VAR_TWO="DARK GREEN"
VAR_THREE="BLUE"
VARIABLE_ARRAY=(VAR_ONE VAR_TWO VAR_THREE)
```

**... to be converted into the following output (as an array, one element per line):**

```
VAR_ONE: LIGHT RED
VAR_TWO: DARK GREEN
VAR_THREE: BLUE
```

**Constraints:**

* Assume that I do not have control of all of the variables, so I cannot just sidestep the problem, e.g. by using an associative array from the get-go (i.e. please do not recommend avoiding the need for indirect reference lookups altogether by never having a discrete variable named "VAR\_ONE"). But a solution that stores the result in an associative array is fine.
* Assume that variable names will never contain spaces, but their values might.
* The final output should *not* contain separate elements just because the input variables had values containing spaces.

**What I've read about so far:**

* I've read some StackOverflow posts like this one, that deal with using indirect references to arrays *themselves* (e.g. if you have three arrays and want to choose which one to pull from based on an "array choice" variable): [How to iterate over an array using indirect reference?](https://stackoverflow.com/q/11180714/12854372)
* I've also found one single post that deals with "zipping" arrays in Bash in the manner I'm talking about, where you pair up e.g. the 1st elements from `array1` and `array2`, then the 2nd elements, etc.: [Iterate over two arrays simultaneously in bash](https://stackoverflow.com/questions/17403498/iterate-over-two-arrays-simultaneously-in-bash)
* ...but I haven't found anything that quite discusses this unique use case...

**QUESTION:**

**How should I make an array containing a list of variable names and their values (colon-separated), given an array containing a list of variable names only? I'm not "failing to come up with any way to do it", but I want to find the "preferred" way to do this in Bash, considering performance, security, and being concise/understandable.**

EDIT: I'll post what I've come up with thus far as an answer to this post... but not mark it as answered, since I want to also hear some unbiased recommendations...
2021/04/09
[ "https://Stackoverflow.com/questions/67025052", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12854372/" ]
OP starts with:

```
VAR_ONE="LIGHT RED"
VAR_TWO="DARK GREEN"
VAR_THREE="BLUE"
VARIABLE_ARRAY=(VAR_ONE VAR_TWO VAR_THREE)
```

OP has provided an answer with 4 sets of code:

```
# first 3 sets of code generate:
$ typeset -p outputValues
declare -a outputValues=([0]="VAR_ONE: LIGHT RED" [1]="VAR_TWO: DARK GREEN" [2]="VAR_THREE: BLUE")

# the 4th set of code generates the following where the data values are truncated at the first space:
$ typeset -p outputValues
declare -a outputValues=([0]="VAR_ONE: LIGHT" [1]="VAR_TWO: DARK" [2]="VAR_THREE: BLUE")
```

**NOTES**:

* I'm assuming the output from the 4th set of code is wrong, so I will be ignoring that one.
* OP's code samples touch on a couple of ideas I'm going to make use of (below): namerefs (`declare -n <variable_name>`) and indirect variable references (`${!<variable_name>}`).

---

For readability (and maintainability by others) I'd probably avoid the various `eval` and expansion ideas and instead opt for using `bash` namerefs (`declare -n`); a quick example:

```
$ x=5
$ echo "${x}"
5
$ y=x
$ echo "${y}"
x
$ declare -n y="x" # nameref => y=(value of x)
$ echo "${y}"
5
```

Pulling this into the original issue we get:

```
unset outputValues
declare -a outputValues # optional; declare 'normal' array

for var_name in "${VARIABLE_ARRAY[@]}"
do
    declare -n data_value="${var_name}"
    outputValues+=("${var_name}: ${data_value}")
done
```

Which gives us:

```
$ typeset -p outputValues
declare -a outputValues=([0]="VAR_ONE: LIGHT RED" [1]="VAR_TWO: DARK GREEN" [2]="VAR_THREE: BLUE")
```

While this generates the same results (as OP's first 3 sets of code), there is (for me) the nagging question of how this new array is going to be used. If the sole objective is to print this pre-formatted data to stdout ... ok, though why bother with a new array when the same can be done with the current array and namerefs? If the objective is to access this array as sets of variable name/value pairs for processing purposes, then the current structure is going to be harder to work with; e.g., each array 'value' will need to be parsed/split on the delimiter `:<space>` in order to access the actual variable names and values. In this scenario I'd opt for using an associative array, e.g.:

```
unset outputValues
declare -A outputValues # required; declare associative array

for var_name in "${VARIABLE_ARRAY[@]}"
do
    declare -n data_value="${var_name}"
    outputValues[${var_name}]="${data_value}"
done
```

Which gives us:

```
$ typeset -p outputValues
declare -A outputValues=([VAR_ONE]="LIGHT RED" [VAR_THREE]="BLUE" [VAR_TWO]="DARK GREEN" )
```

**NOTES**:

* Again, why bother with a new array when the same can be done with the current array and namerefs?
* If the variable `$data_value` is to be re-used in follow-on code as a 'normal' variable, it will be necessary to remove the nameref attribute (`unset -n data_value`).

With an associative array (index=variable name / array element=variable value) it becomes easier to reference the variable name/value pairs, e.g.:

```
$ myvar=VAR_ONE
$ echo "${myvar}: ${outputValues[${myvar}]}"
VAR_ONE: LIGHT RED

$ for var_name in "${!outputValues[@]}"; do echo "${var_name}: ${outputValues[${var_name}]}"; done
VAR_ONE: LIGHT RED
VAR_THREE: BLUE
VAR_TWO: DARK GREEN
```

---

In older versions of `bash` (before namerefs were available), and still available in newer versions of `bash`, there's the option of using indirect variable references:

```
$ x=5
$ echo "${x}"
5
$ unset -n y # make sure 'y' has not been previously defined as a nameref
$ y=x
$ echo "${y}"
x
$ echo "${!y}"
5
```

Pulling this into the associative array approach:

```
unset -n var_name # make sure var_name not previously defined as a nameref
unset outputValues
declare -A outputValues # required; declare associative array

for var_name in "${VARIABLE_ARRAY[@]}"
do
    outputValues[${var_name}]="${!var_name}"
done
```

Which gives us:

```
$ typeset -p outputValues
declare -A outputValues=([VAR_ONE]="LIGHT RED" [VAR_THREE]="BLUE" [VAR_TWO]="DARK GREEN" )
```

**NOTE**: While this requires less coding in the `for` loop, if you forget to `unset -n` the variable (`var_name` in this case) you'll end up with the wrong results if `var_name` was previously defined as a nameref. Perhaps a minor issue, but it requires the coder to know of, and code for, this particular issue ... a bit too esoteric (for my taste), so I prefer to stick with namerefs ... ymmv ...
I've come up with a handful of possible solutions over the last couple of days, each with their own pros and cons. I won't mark this as the answer for a while though, since I'm interested in hearing unbiased recommendations.

---

My brainstorming solutions thus far:

OPTION #1 - FOR-LOOP:

```
alias PrintCommandValues='unset outputValues
for var in ${VARIABLE_ARRAY[@]}
do
    outputValues+=("${var}: ${!var}")
done; printf "%s\n\n" "${outputValues[@]}"'

PrintCommandValues
```

Pros: Traditional, easy to understand.

Cons: A little verbose. I'm not sure about Bash, but I've been doing a lot of Mathematica programming (imperative-style), where such loops are notably slower. Does anybody know if that's true for Bash?

OPTION #2 - EVAL:

```
i=0; outputValues=("${VARIABLE_ARRAY[@]}")
eval declare "${VARIABLE_ARRAY[@]/#/outputValues[i++]+=:\\ $}"
printf "%s\n\n" "${outputValues[@]}"
```

Pros: Shorter than the for-loop, and still easy to understand.

Cons: I'm no expert, but I've read a lot of warnings to avoid `eval` whenever possible, due to security issues. Probably not something I'll concern myself a ton over when I'm mostly writing scripts for "handy utility purposes" for my personal machine only, but...

OPTION #3 - QUOTED DECLARE WITH PARENTHESIS:

```
i=0; declare -a outputValues="(${VARIABLE_ARRAY[@]/%/'\:\ "${!VARIABLE_ARRAY[i++]}"'})"
printf "%s\n\n" "${outputValues[@]}"
```

Pros: Super-concise. I just plain stumbled onto this syntax -- I haven't found it mentioned anywhere on the web. Apparently, using `declare` in Bash (I use version 4.4.20(1)), if (**and ONLY if**) you place array-style `(...)` brackets after the equals sign, and quote it, you get one more "round" of expansion/dereferencing, similar to `eval`. I happened to be toying with [this post](https://superuser.com/a/1186997), and found the part about the "extra expansion" by accident. For example, compare these two tests:

```
varName=varOne; varOne=something
declare test1=\$$varName
declare -a test2="(\$$varName)"
declare -p test1 test2
```

Output:

```
declare -- test1="\$varOne"
declare -a test2=([0]="something")
```

Pretty neat, I think... Anyway, the cons for this method are... I've never seen it documented officially or unofficially anywhere, so... portability...?

Alternative for this option:

```
i=0; declare -a LABELED_VARIABLE_ARRAY="(${VARIABLE_ARRAY[@]/%/'\:\ \$"${VARIABLE_ARRAY[i++]}"'})"
declare -a outputValues=("${LABELED_VARIABLE_ARRAY[@]@P}")
printf "%s\n\n" "${outputValues[@]}"
```

JUST FOR FUN - BRACE EXPANSION:

```
unset outputValues; OLDIFS=$IFS; IFS=; i=0; j=0
declare -n nameCursor=outputValues[i++]; declare -n valueCursor=outputValues[j++]
declare {nameCursor+=,valueCursor+=": "$}{VAR_ONE,VAR_TWO,VAR_THREE}
printf "%s\n\n" "${outputValues[@]}"
IFS=$OLDIFS
```

Pros: ??? Maybe speed?

Cons: Pretty verbose, not very easy to understand.

---

Anyway, those are all of my methods... Are any of them reasonable, or would you do something different altogether?
4,968
1,894,099
I am trying to run the script [csv2json.py](http://www.djangosnippets.org/snippets/1680/) in the Command Prompt, but I get this error:

```
C:\Users\A\Documents\PROJECTS\Django\sw2>csv2json.py csvtest1.csv wkw1.Lawyer
Converting C:\Users\A\Documents\PROJECTS\Django\sw2csvtest1.csv from CSV to JSON as C:\Users\A\Documents\PROJECTS\Django\sw2csvtest1.csv.json
Traceback (most recent call last):
  File "C:\Users\A\Documents\PROJECTS\Django\sw2\csv2json.py", line 37, in <module>
    f = open(in_file, 'r' )
IOError: [Errno 2] No such file or directory: 'C:\\Users\\A\\Documents\\PROJECTS\\Django\\sw2csvtest1.csv'
```

Here are the relevant lines from the snippet:

```
31 in_file = dirname(__file__) + input_file_name
32 out_file = dirname(__file__) + input_file_name + ".json"
34 print "Converting %s from CSV to JSON as %s" % (in_file, out_file)
36 f = open(in_file, 'r' )
37 fo = open(out_file, 'w')
```

It seems that the directory name and file name are concatenated without a path separator. How can I make this script run? Thanks.

**Edit:** Altering lines 31 and 32 as answered by Denis Otkidach worked fine. But I realized that the first column name needs to be pk and each row needs to start with an integer:

```
for row in reader:
    if not header_row:
        header_row = row
        continue
    pk = row[0]
    model = model_name
    fields = {}
    for i in range(len(row)-1):
        active_field = row[i+1]
```

So my csv row now looks like this (including the header row):

```
pk, firm_url, firm_name, first, last, school, year_graduated
1, http://www.graychase.com/aabbas, Gray & Chase, Amr A, Babas, The George Washington University Law School, 2005
```

Is this a requirement of the Django fixture or of the JSON format? If so, I need to find a way to add the pk numbers to each row. Can I delete this pk column? Any suggestions?

**Edit 2:** I keep getting this ValidationError: "This value must be an integer". There is only one integer field, and that's the pk. Is there a way to find out from the traceback what the line numbers refer to?

```
Problem installing fixture 'C:\Users\A\Documents\Projects\Django\sw2\wkw2\fixtures\csvtest1.csv.json':
Traceback (most recent call last):
  File "C:\Python26\Lib\site-packages\django\core\management\commands\loaddata.py", line 150, in handle
    for obj in objects:
  File "C:\Python26\lib\site-packages\django\core\serializers\json.py", line 41, in Deserializer
    for obj in PythonDeserializer(simplejson.load(stream)):
  File "C:\Python26\lib\site-packages\django\core\serializers\python.py", line 95, in Deserializer
    data[field.attname] = field.rel.to._meta.get_field(field.rel.field_name).to_python(field_value)
  File "C:\Python26\lib\site-packages\django\db\models\fields\__init__.py", line 356, in to_python
    _("This value must be an integer."))
ValidationError: This value must be an integer.
```
2009/12/12
[ "https://Stackoverflow.com/questions/1894099", "https://Stackoverflow.com", "https://Stackoverflow.com/users/215094/" ]
```
from os import path
in_file = path.join(path.dirname(__file__), input_file_name)
out_file = path.join(path.dirname(__file__), input_file_name + ".json")
[...]
```
You should be using `os.path.join` rather than just concatenating `dirname()` and the filename.

```
import os.path
in_file = os.path.join(os.path.dirname(__file__), input_file_name)
out_file = os.path.join(os.path.dirname(__file__), input_file_name + ".json")
```

will fix your problem, though depending on what exactly you're doing, there's probably a more elegant way to do it.
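A quick illustration of why plain concatenation loses the separator (the path here is made up, and POSIX-style to keep the output deterministic):

```python
import os.path

script = "/proj/sw2/csv2json.py"  # made-up path
print(os.path.dirname(script) + "csvtest1.csv")
# /proj/sw2csvtest1.csv  -- separator is missing
print(os.path.join(os.path.dirname(script), "csvtest1.csv"))
# /proj/sw2/csvtest1.csv
```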
4,969
33,545,813
I am creating a Python class, but it seems I can't get the constructor to work properly. Here is my class:

```
class IQM_Prep(SBconcat):
    def __init__(self, project_dir):
        self.project_dir=project_dir #path to parent project dir
        self.models_path=self.__get_models_path__() #path to parent models dir
        self.experiments_path=self.__get_experiments_path__() #path to parent experiments dir

    def __get_models_path__(self):
        for i in os.listdir(self.project_dir):
            if i=='models':
                models_path=os.path.join(self.project_dir,i)
        return models_path

    def __get_experiments_path__(self):
        for i in os.listdir(self.project_dir):
            if i == 'experiments':
                experiments_path= os.path.join(self.project_dir,i)
        return experiments
```

When I initialize this class:

```
project_dir='D:\\MPhil\\Model_Building\\Models\\TGFB\\Vilar2006\\SBML_sh_ver\\vilar2006_SBSH_test7\\Python_project'
IQM= Modelling_Tools.IQM_Prep(project_dir)
```

I get the following error:

```
Traceback (most recent call last):
  File "<ipython-input-49-7c46385755ce>", line 1, in <module>
    runfile('D:/MPhil/Python/My_Python_Modules/Modelling_Tools/Modelling_Tools.py', wdir='D:/MPhil/Python/My_Python_Modules/Modelling_Tools')
  File "C:\Anaconda1\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 585, in runfile
    execfile(filename, namespace)
  File "D:/MPhil/Python/My_Python_Modules/Modelling_Tools/Modelling_Tools.py", line 1655, in <module>
    import test
  File "test.py", line 19, in <module>
    print parameter_file
  File "Modelling_Tools.py", line 1536, in __init__
    self.models_path=self.__get_models_path__() #path to parent models dir
  File "Modelling_Tools.py", line 1543, in __get_models_path__
    return models_path
UnboundLocalError: local variable 'models_path' referenced before assignment
```

`Modelling_Tools` is the name of my custom module.
2015/11/05
[ "https://Stackoverflow.com/questions/33545813", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3059024/" ]
Based on the traceback, it seems that either:

```
def __get_models_path__(self):
    for i in os.listdir(self.project_dir):               # 1. this never loops; or
        if i=='models':                                  # 2. this never evaluates True
            models_path=os.path.join(self.project_dir,i) # hence this never happens
    return models_path                                   # and this causes an error
```

You should review the result of `os.listdir(self.project_dir)` to find out why; either the directory is empty or nothing in it is named `models`. You could initialise e.g. `models_path = None` at the start of the method, but that would just hide the problem until later.

---

**Sidenote**: per my comments, you should check out the [style guide](https://www.python.org/dev/peps/pep-0008/), particularly on naming conventions for methods...
`models_path` is initialized only when:

* `self.project_dir` contains some files/dirs, and
* one of those files/dirs is named `models`.

If either condition is not fulfilled, then `models_path` is never initialized.
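One way to make the failure mode explicit instead of letting the `UnboundLocalError` surface (a sketch; the exception type is a choice, and this is Python 2 to match the question):

```python
import os

def __get_models_path__(self):
    models_path = os.path.join(self.project_dir, 'models')
    if not os.path.isdir(models_path):
        raise IOError('no "models" directory under %r' % self.project_dir)
    return models_path
```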
4,972
21,319,261
I am trying to execute some code on a BeagleBone Black running Ubuntu. The script has two primary functions:

1. count digital pulses
2. store the counted pulses in MySQL every 10s or so

These two functions need to run indefinitely. My question is: how do I get these two functions to run in parallel? Here is my latest code revision, which doesn't work, so you can see what I am attempting to do. Any help is much appreciated, as I am new to Python. Thank you in advance!

```
#!/usr/bin/python
import Adafruit_BBIO.GPIO as GPIO
import MySQLdb
import time
import thread
from threading import Thread

now = time.strftime('%Y-%m-%d %H:%M:%S')
total1 = 0
total2 = 0

def insertDB_10sec(now, total1, total2):
    while True:
        conn = MySQLdb.connect(host ="localhost", user="root", passwd="password", db="dbname")
        cursor= conn.cursor()
        cursor.execute("INSERT INTO tablename VALUES (%s, %s, %s)", (now, total1, total2))
        conn.commit()
        conn.close()
        print "DB Entry Made"
        time.sleep(10)

def countPulse():
    now = time.strftime('%Y-%m-%d %H:%M:%S')
    GPIO.setup("P8_12", GPIO.IN)
    total1 = 0
    total2 = 0
    while True:
        GPIO.wait_for_edge("P8_12", GPIO.RISING)
        GPIO.wait_for_edge("P8_12", GPIO.FALLING)
        now = time.strftime('%Y-%m-%d %H:%M:%S')
        print now
        total1 +=1
        print total1
    return now, total1, total2

t1 = Thread(target = countPulse)
t2 = Thread(target = insertDB_10sec(now, total1, total2))
t1.start()
t2.start()
```
2014/01/23
[ "https://Stackoverflow.com/questions/21319261", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2133624/" ]
This is a perfect problem for a `Queue`!

```
#!/usr/bin/python
import Adafruit_BBIO.GPIO as GPIO
import MySQLdb
import time
import thread
import Queue
from threading import Thread

now = time.strftime('%Y-%m-%d %H:%M:%S')
total1 = 0
total2 = 0

pulse_objects = Queue.Queue()

def insertDB_10sec(pulse_objects):
    while True:
        now, total1, total2 = pulse_objects.get()
        conn = MySQLdb.connect(host ="localhost", user="root", passwd="password", db="dbname")
        cursor= conn.cursor()
        cursor.execute("INSERT INTO tablename VALUES (%s, %s, %s)", (now, total1, total2))
        conn.commit()
        conn.close()
        print "DB Entry Made"
        time.sleep(10)

def countPulse(pulse_objects):
    now = time.strftime('%Y-%m-%d %H:%M:%S')
    GPIO.setup("P8_12", GPIO.IN)
    total1 = 0
    total2 = 0
    while True:
        GPIO.wait_for_edge("P8_12", GPIO.RISING)
        GPIO.wait_for_edge("P8_12", GPIO.FALLING)
        now = time.strftime('%Y-%m-%d %H:%M:%S')
        print now
        total1 +=1
        print total1
        pulse_objects.put( (now, total1, total2) )

t1 = Thread(target = countPulse, args = (pulse_objects,))
t2 = Thread(target = insertDB_10sec, args = (pulse_objects,))
t1.start()
t2.start()
```
Why do you need two threads? You could move the insert into `countPulse`. That being said:

You should not call `insertDB_10sec` here:

```
t2 = Thread(target = insertDB_10sec(now, total1, total2))
```

Provide the arguments as actual arguments:

```
t2 = Thread(target = insertDB_10sec, args=(now, total1, total2))
```

This will still not do what you intend, because `now, total1, total2` are local variables in `countPulse`. Instead declare them as global:

```
global now
global total1
global total2
```

The same holds for the other thread, because you pass the variables to it at start; remove the arguments to `insertDB_10sec` altogether. Also, why would you want to store a new database entry every ten seconds regardless of what the other thread does? Your comment suggests that the field for `now` is unique. Therefore you should first check whether `now` changed before you try to insert again, or replace `INSERT INTO` with `REPLACE INTO`.
4,973
3,023,136
Is it somehow possible to extract .cab files in python?
2010/06/11
[ "https://Stackoverflow.com/questions/3023136", "https://Stackoverflow.com", "https://Stackoverflow.com/users/341963/" ]
Not strictly answering what you asked, but if you are running on a Windows platform you could spawn a process to do it for you. Taken from Wikipedia:

> Microsoft Windows provides two command-line tools for creation and extraction of CAB files. They are MAKECAB.EXE (included within Windows packages such as 'ie501sp2.exe' and 'orktools.msi'; also available from the SDK, see below) and EXTRACT.EXE (included on the installation CD), respectively. Windows XP also provides the EXPAND.EXE command.
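A sketch of that spawn-a-process idea with the standard library (the archive and destination paths are made up, and `expand.exe` must be on the PATH):

```python
import subprocess

# expand <source.cab> -F:<files> <destination>
subprocess.check_call(["expand.exe", "archive.cab", "-F:*", r"C:\extracted"])
```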
Oddly, the [msilib](http://docs.python.org/library/msilib.html) can only create or append to .CAB files, but not extract them. :( However, the [hachoir](https://hachoir.readthedocs.io/en/latest/parser.html) parser module can apparently read & edit Cabinets. (I have not used it, though, so I couldn't tell you how fitting it is or not!)
4,974
66,283,314
I am writing a script to automate data collection and am having trouble clicking a link. The website is behind a login, but I navigated that successfully. I ran into problems when trying to navigate to the download page. This is in Python using the Chrome WebDriver. I have tried:

```
find_element_by_partial_link_text('stuff').click()
find_element_by_xpath('stuff').click()
# and a few others
```

I get variations of the following message when I try a few of the selector statements:

```
NoSuchElementException: Message: no such element: Unable to locate element: {"method":"partial link text","selector":"download"}
  (Session info: chrome=88.0.4324.182)
```

The HTML source I'm trying to use is:

```
<a routerlink="/download" title="Download" href="/itron-mvweb/download"><i class="fa fa-lg fa-fw fa-download"></i><span class="menu-item-parent">Download</span></a>
```

Thank you!
2021/02/19
[ "https://Stackoverflow.com/questions/66283314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14024634/" ]
This is caused by a typo. `Download` is case-sensitive, make sure you capitalize the `D`!
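In other words, with the asker's original call style:

```python
driver.find_element_by_partial_link_text("Download").click()  # note the capital D
```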
To click on the element with text as **Download** you can use either of the following [Locator Strategies](https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890):

* Using `css_selector`:

```
driver.find_element(By.CSS_SELECTOR, "a[title='Download'][href='/itron-mvweb/download'] span.menu-item-parent").click()
```

* Using `xpath`:

```
driver.find_element(By.XPATH, "//a[@title='Download' and @href='/itron-mvweb/download']//span[@class='menu-item-parent' and text()='Download']").click()
```

---

Ideally, to click on the element you need to induce [WebDriverWait](https://stackoverflow.com/questions/59130200/selenium-wait-until-element-is-present-visible-and-interactable/59130336#59130336) for the [`element_to_be_clickable()`](https://stackoverflow.com/questions/65604057/unable-to-locate-element-using-selenium-chrome-webdriver-in-python-selenium/65604613#65604613) and you can use either of the following [Locator Strategies](https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890):

* Using `CSS_SELECTOR`:

```
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a[title='Download'][href='/itron-mvweb/download'] span.menu-item-parent"))).click()
```

* Using `XPATH`:

```
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//a[@title='Download' and @href='/itron-mvweb/download']//span[@class='menu-item-parent' and text()='Download']"))).click()
```

* **Note**: You have to add the following imports:

```
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
```

---

References
----------

You can find a couple of relevant discussions on [NoSuchElementException](https://stackoverflow.com/questions/47993443/selenium-selenium-common-exceptions-nosuchelementexception-when-using-chrome/47995294#47995294) in:

* [selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element while trying to click Next button with selenium](https://stackoverflow.com/questions/50315587/selenium-common-exceptions-nosuchelementexception-message-no-such-element-una/50315715#50315715)
* [selenium in python : NoSuchElementException: Message: no such element: Unable to locate element](https://stackoverflow.com/questions/53441658/selenium-in-python-nosuchelementexception-message-no-such-element-unable-to/53442511#53442511)
4,976
27,572,688
I have written the following code using Python 2.7 to search the list `dem_nums` for the first three characters of each element in the list `dems`, and to append them if they are not present. When I run the code, the list `dem_nums` is returned empty. I've tried using this article to help ([check if a number already exist in a list in python](https://stackoverflow.com/questions/14667578/check-if-a-number-already-exist-in-a-list-in-python)), but the information there hasn't solved the problem.

```
dems = ["083c15", "083c16", "083f01", "083f02"]
dem_nums = []

for dem in dems:
    dem_num = dem[0:3]
    if dem_num not in dem_nums:
        dem_nums.append(dem_num)
```
2014/12/19
[ "https://Stackoverflow.com/questions/27572688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4289336/" ]
I am not sure I understand your requirement. Why write such a complicated stylesheet when the end result should simply be a total amount? Also, it seems you are already familiar with the relevant EXSLT functions and with converting strings into numbers.

**Stylesheet**

```
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"
    xmlns:exsl="http://exslt.org/common">
    <xsl:output method="text" encoding="UTF-8" indent="no"/>

    <xsl:variable name="amounts">
        <xsl:for-each select="//LI_Amount_display">
            <amount>
                <xsl:value-of select="number(substring(translate(.,',',''),2))"/>
            </amount>
        </xsl:for-each>
    </xsl:variable>

    <xsl:template match="/">
        <xsl:value-of select="sum(exsl:node-set($amounts)/amount)"/>
    </xsl:template>

</xsl:stylesheet>
```

**Output**

```
9000
```
While I tend to go with the suggestion made by Mathias Müller, I wanted to show how you can do this using a recursive named template:

**XSLT 1.0**

```
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" omit-xml-declaration="yes" version="1.0" encoding="utf-8" indent="yes"/>

<xsl:template match="/">
    <xsl:call-template name="sum-nodes" >
        <xsl:with-param name="nodes" select="query/results/result/columns/column/LI_Amount_display" />
    </xsl:call-template>
</xsl:template>

<xsl:template name="sum-nodes" >
    <xsl:param name="nodes"/>
    <xsl:param name="sum" select="0"/>
    <xsl:param name="newSum" select="$sum + translate($nodes[1], '$,', '')"/>
    <xsl:choose>
        <xsl:when test="count($nodes) > 1">
            <!-- recursive call -->
            <xsl:call-template name="sum-nodes" >
                <xsl:with-param name="nodes" select="$nodes[position() > 1]" />
                <xsl:with-param name="sum" select="$newSum" />
            </xsl:call-template>
        </xsl:when>
        <xsl:otherwise>
            <xsl:value-of select="format-number($newSum, '#,##0.00')"/>
        </xsl:otherwise>
    </xsl:choose>
</xsl:template>

</xsl:stylesheet>
```

**Result**:

```
9,000.00
```
4,977
2,051,526
As we all know (or should), you can use Django's template system to render email bodies:

```
def email(email, subject, template, context):
    from django.core.mail import send_mail
    from django.template import loader, Context

    send_mail(subject, loader.get_template(template).render(Context(context)), 'from@domain.com', [email,])
```

This has one flaw in my mind: to edit the subject and content of an email, you have to edit both the view and the template. While I can justify giving admin users access to the templates, I'm not giving them access to the raw Python!

What would be really cool is if you could specify blocks in the email and pull them out separately when you send the email:

```
{% block subject %}This is my subject{% endblock %}
{% block plaintext %}My body{% endblock%}
{% block html %}My HTML body{% endblock%}
```

But how would you do that? How would you go about rendering just one block at a time?
2010/01/12
[ "https://Stackoverflow.com/questions/2051526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12870/" ]
This is my third working iteration. It assumes you have an email template like so:

```
{% block subject %}{% endblock %}
{% block plain %}{% endblock %}
{% block html %}{% endblock %}
```

I've refactored so that sending iterates over a list of addresses by default, and there are utility methods for sending to a single email address and to `django.contrib.auth` `User`s (single and multiple). I'm covering perhaps more than I'll sensibly need, but there you go. I also might have gone over the top with Python-love.

```
def email_list(to_list, template_path, context_dict):
    from django.core.mail import send_mail
    from django.template import loader, Context

    nodes = dict((n.name, n) for n in loader.get_template(template_path).nodelist if n.__class__.__name__ == 'BlockNode')
    con = Context(context_dict)
    r = lambda n: nodes[n].render(con)

    for address in to_list:
        send_mail(r('subject'), r('plain'), 'from@domain.com', [address,])

def email(to, template_path, context_dict):
    return email_list([to,], template_path, context_dict)

def email_user(user, template_path, context_dict):
    return email_list([user.email,], template_path, context_dict)

def email_users(user_list, template_path, context_dict):
    return email_list([user.email for user in user_list], template_path, context_dict)
```

As ever, if you can improve on that, please do.
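A hypothetical call site, to show the intended usage (the template path and context here are made up):

```python
email_list(
    ["a@example.com", "b@example.com"],
    "emails/notification.txt",  # a template defining the subject/plain/html blocks
    {"name": "Alice"},
)
```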
Just use two templates: one for the body and one for the subject.
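For instance, a minimal sketch of that idea (the template names here are illustrative, not from the original question):

```
from django.core.mail import send_mail
from django.template import loader, Context

def email(address, context_dict):
    ctx = Context(context_dict)
    # Subject and body live in two separate, admin-editable templates.
    subject = loader.get_template('email/subject.txt').render(ctx).strip()
    body = loader.get_template('email/body.txt').render(ctx)
    send_mail(subject, body, 'from@domain.com', [address])
```

The `.strip()` on the subject matters: a rendered template usually ends with a newline, and newlines in a subject are refused by the mail backend as a header-injection guard.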
4,978
62,191,724
Trying to make use of this package: <https://github.com/microsoft/Simplify-Docx>

Can someone please tell me the proper sequence of actions needed to install and use the package? What I've tried (as separate commands from the VS Code terminal):

```
pip install python-docx
git clone <git link>
python setup.py install
```

After the installation has been successfully completed, I'm trying to run, from the VS Code terminal, the file in which I've pasted the code from the readme's "usage" section:

```
import docx
from simplify_docx import simplify

# read in a document
my_doc = docx.Document("docxinaprojectfolder.docx")  # I wonder how I should properly specify the path to the file?

# coerce to JSON using the standard options
my_doc_as_json = simplify(my_doc)

# or with non-standard options
my_doc_as_json = simplify(my_doc,{"remove-leading-white-space":False})
```

And I only get

```
ModuleNotFoundError: No module named 'docx'
```

But I've installed this module in the first place. What am I doing wrong? Am I missing some of the steps (like an init or something)? The VS Code status bar at the bottom left says that I'm using Python 3.8.x, and I'm trying to run the script via the "play" button.

```
python --version
Python 3.6.5
```

`py` shows, though, that 3.8.x is being used.

Thanks
2020/06/04
[ "https://Stackoverflow.com/questions/62191724", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9130563/" ]
The problem is that your system doesn't have the `docx` module available to the interpreter you are running. To install it:

1. Open a CMD prompt.
2. Type `pip install python-docx` (the `python-docx` package is what provides the `docx` module).

If your installation is fresh, it may need the `simplify` module too.
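If the package still can't be imported afterwards, a quick sanity check (just a diagnostic sketch) is to confirm which interpreter is actually executing your script, since pip may have installed into a different Python than the one VS Code runs:

```
import sys

# The interpreter running this script; packages must be installed into this one,
# e.g. with:  <path printed below> -m pip install python-docx
print(sys.executable)
print(sys.version)
```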
Like any python package that doesn't come with python, you need to install it before using it. In your terminal window you can install if from the Python package index like this: ```bash pip install simplify-docx ``` or you can install it directly from GitHub like this: ```bash pip install git+git://github.com/microsoft/Simplify-Docx.git ```
4,980
19,085,887
I searched and tried the following, but could not find any solution; please let me know if this is possible: I am trying to develop a Python module as a wrapper where I call another 3rd-party module's .main() and provide the required parameters, which I need to get from the command line in my module. I need a few parameters for my module too. I am using argparse to parse the command line for both the called module and my module. The called module's parameter list is huge (more than 40 options); they are optional, but anyone using my module may need any of them at any time. Currently I have declared a few important parameters in my module, but I would need to expand this to all the parameters. I thought of accepting all the parameters in my module without declaring each one with add_argument. I tried parse_known_args, but it seems that also requires all parameters to be declared. Is there any way to pass all parameters through to the called module without declaring them in my module? If it's possible, please let me know how it can be done. Thanks in advance,
2013/09/30
[ "https://Stackoverflow.com/questions/19085887", "https://Stackoverflow.com", "https://Stackoverflow.com/users/948673/" ]
Does this scenario fit?

Module B:

```
import argparse
parser = argparse....

def main(args):
    ....

if __name__ == '__main__':
    args = parser.parse_args()
    main(args)
```

Module A

```
import argparse
import B

parser = argparse.... # define arguments that A needs to use

if __name__ == '__main__':
    args, rest = parser.parse_known_args()
    # use args
    # rest - argument strings that A could not process
    argsB = B.parser.parse_args(rest)
    # 'rest' does not have the strings that A used;
    # but could also use
    # argsB, _ = B.parser.parse_known_args()
    # using sys.argv; ignore what it does not recognize
    # or even
    # argsB, _ = B.parser.parse_known_args(rest)
    B.main(argsB)
```

Alternate A

```
import argparse
import B

parser = argparse.ArgumentParser(parents=[B.parser], add_help=False)
# B.parser probably already defines -h
# add arguments that A needs to use

if __name__ == '__main__':
    args = parser.parse_args()
    # use args that A needs
    B.main(args)
```

In one case, each parser handles only the strings that it recognizes. In the other, A.parser handles everything, using the 'parents' parameter to 'learn' what B.parser recognizes.
Provided that you are calling third-party modules, a possible solution is to change **sys.argv** at runtime to reflect the correct parameters for the module you're calling, once you're done with your own parameters.
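A minimal sketch of that approach (the module name `thirdparty` and its arguments are placeholders, not a real package):

```
import sys
import thirdparty  # hypothetical 3rd-party module that parses sys.argv in main()

def run_with_args(extra_args):
    saved = sys.argv
    try:
        # Make the third-party parser see exactly the arguments we forward to it.
        sys.argv = [saved[0]] + extra_args
        thirdparty.main()
    finally:
        sys.argv = saved  # always restore the real command line
```

(Note there is no `sys.argc` to maintain in Python; `len(sys.argv)` serves that role.)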
4,982
51,876,794
I have a text file named `file.txt` with some numbers like the following : ``` 1 79 8.106E-08 2.052E-08 3.837E-08 1 80 -4.766E-09 9.003E-08 4.812E-07 1 90 4.914E-08 1.563E-07 5.193E-07 2 2 9.254E-07 5.166E-06 9.723E-06 2 3 1.366E-06 -5.184E-06 7.580E-06 2 4 2.966E-06 5.979E-07 9.702E-08 2 5 5.254E-07 0.166E-02 9.723E-06 3 23 1.366E-06 -5.184E-03 7.580E-06 3 24 3.244E-03 5.239E-04 9.002E-08 ``` I want to build a python dictionary, where the first number in each row is the key, the second number is always ignored, and the last three numbers are put as values. But in a dictionary, a key can not be repeated, so when I write my code (attached at the end of the question), what I get is ``` '1' : [ '90' '4.914E-08' '1.563E-07' '5.193E-07' ] '2' : [ '5' '5.254E-07' '0.166E-02' '9.723E-06' ] '3' : [ '24' '3.244E-03' '5.239E-04' '9.002E-08' ] ``` All the other numbers are removed, and only the last row is kept as the values. What I need is to have all the numbers against a key, say 1, to be appended in the dictionary. For example, what I need is : ``` '1' : ['8.106E-08' '2.052E-08' '3.837E-08' '-4.766E-09' '9.003E-08' '4.812E-07' '4.914E-08' '1.563E-07' '5.193E-07'] ``` Is it possible to do it elegantly in python? The code I have right now is the following : ``` diction = {} with open("file.txt") as f: for line in f: pa = line.split() diction[pa[0]] = pa[1:] with open('file.txt') as f: diction = {pa[0]: pa[1:] for pa in map(str.split, f)} ```
2018/08/16
[ "https://Stackoverflow.com/questions/51876794", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8869818/" ]
You can use a `defaultdict`. ``` from collections import defaultdict data = defaultdict(list) with open("file.txt", "r") as f: for line in f: line = line.split() data[line[0]].extend(line[2:]) ```
Try this: ``` from collections import defaultdict diction = defaultdict(list) with open("file.txt") as f: for line in f: key, _, *values = line.strip().split() diction[key].extend(values) print(diction) ``` This is a solution for Python 3, because the statement `a, *b = tuple1` is invalid in Python 2. Look at the solution of @cha0site if you are using Python 2.
4,984
59,745,214
I have 2 files to copy from one folder to another folder, and this is my code:

```
import shutil

src = '/Users/cadellteng/Desktop/Program Booklet/'
dst = '/Users/cadellteng/Desktop/Python/'
file = ['AI+Product+Manager+Nanodegree+Program+Syllabus.pdf','Artificial+Intelligence+with+Python+Nanodegree+Syllabus+9-5.pdf']

for i in file:
    shutil.copyfile(src+file[i], dst+file[i])
```

When I tried to run the code I got the following error message:

```
/Users/cadellteng/venv/bin/python /Users/cadellteng/PycharmProjects/someProject/movingFiles.py
Traceback (most recent call last):
  File "/Users/cadellteng/PycharmProjects/someProject/movingFiles.py", line 8, in <module>
    shutil.copyfile(src+file[i], dst+file[i])
TypeError: list indices must be integers or slices, not str

Process finished with exit code 1
```

I tried to find a solution on Stack Overflow, and one thread suggested doing this:

```
for i in range(file):
    shutil.copyfile(src+file[i], dst+file[i])
```

and then I got the following error message:

```
/Users/cadellteng/venv/bin/python /Users/cadellteng/PycharmProjects/someProject/movingFiles.py
Traceback (most recent call last):
  File "/Users/cadellteng/PycharmProjects/someProject/movingFiles.py", line 7, in <module>
    for i in range(file):
TypeError: 'list' object cannot be interpreted as an integer

Process finished with exit code 1
```

So now I am thoroughly confused. If "i" can't be a string and it can't be an integer, what should it be? I am using PyCharm CE and am very new to Python.
2020/01/15
[ "https://Stackoverflow.com/questions/59745214", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3910616/" ]
Just use the code below: since `i` is already the filename itself, not an index, the extra `file[...]` lookup is unnecessary:

```
for i in file:
    shutil.copyfile(src + i, dst + i)
```

If you want to use `range`, use it this way with `len`:

```
for i in range(len(file)):
    shutil.copyfile(src+file[i], dst+file[i])
```

But of course the first solution is preferred.
Try the code below, and read [for Statement in python](https://docs.python.org/3/tutorial/controlflow.html#for-statements) ``` import shutil src = '/Users/cadellteng/Desktop/Program Booklet/' dst = '/Users/cadellteng/Desktop/Python/' file = ['AI+Product+Manager+Nanodegree+Program+Syllabus.pdf','Artificial+Intelligence+with+Python+Nanodegree+Syllabus+9-5.pdf'] for i in file: shutil.copyfile(src + i, dst + i) ```
4,989
64,578,491
The other version of this question wasn't ever answered; the original poster didn't give a full example of their code...

I have a function that's meant to import a spreadsheet for formatting purposes. Now, the spreadsheet can come in two forms:

1. As a filename string (excel, .csv, etc) to be imported as a DataFrame
2. Directly as a DataFrame (there's another function that may or may not be called to do some preprocessing)

The code looks like:

```
def func1(spreadsheet):
    if type(spreadsheet) == pd.DataFrame: df = spreadsheet
    else:
        df_ext = os.path.splitext(spreadsheet)[1]
        etc. etc.
```

If I run this function with a DataFrame, I get the following error:

```
---> 67     if type(spreadsheet) == pd.DataFrame: df = spreadsheet
     68     else:

/opt/anaconda3/lib/python3.7/posixpath.py in splitext(p)
    120
    121 def splitext(p):
--> 122     p = os.fspath(p)
    123     if isinstance(p, bytes):
    124         sep = b'/'

TypeError: expected str, bytes or os.PathLike object, not DataFrame
```

Why is it doing this?
2020/10/28
[ "https://Stackoverflow.com/questions/64578491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2954167/" ]
So, one way is to just compare with a string and read the dataframe in the else branch. The other way would be to use `isinstance`:

```py
In [21]: dict1
Out[21]: {'a': [1, 2, 3, 4], 'b': [2, 4, 6, 7], 'c': [2, 3, 4, 5]}

In [24]: df = pd.DataFrame(dict1)

In [28]: isinstance(df, pd.DataFrame)
Out[28]: True

In [30]: isinstance(os.getcwd(), pd.DataFrame)
Out[30]: False
```

So, in your case just do this:

```
if isinstance(spreadsheet, pd.DataFrame)
```
This line is the problem: ``` if type(spreadsheet) == pd.DataFrame: ``` The type of a dataframe is `pandas.core.frame.DataFrame`. [pandas.DataFrame is a class](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) which returns a dataframe when you call it. Either of these would work: ``` if type(spreadsheet) == type(pd.DataFrame()): if type(spreadsheet) == pd.core.frame.DataFrame: ```
4,990
16,710,374
I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better? Thank you for your time
2013/05/23
[ "https://Stackoverflow.com/questions/16710374", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2155605/" ]
There are indeed several other alternatives to BFS and DFS. One that is quite well suited to computing shortest paths is [Dijkstra's algorithm](http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm).

Dijkstra's algorithm is basically an adaptation of a BFS algorithm, and it's much more efficient than searching the entire graph, if your graph is weighted. Like @ThomasH said, Dijkstra is only relevant if you have a weighted graph; if the weight of every edge is the same, it basically defaults back to BFS.

If the choice is between BFS and DFS, then BFS is more adequate for finding shortest paths, because you explore the immediate vicinity of a node completely before moving on to nodes that are at a greater distance. This means that if there's a path of size 3, it'll be explored before the algorithm moves on to exploring nodes at distance 4, for instance. With DFS, you don't have such a guarantee: since you explore nodes in depth, you can find a longer path that just happened to be explored earlier, and you'll need to explore the entire graph to make sure that it is the shortest path.

As to why you're getting downvotes, most SO questions should show that a little effort has been put into finding a solution; for instance, there are several related questions on the pros and cons of DFS versus BFS. Next time try to make sure that you've searched a bit, and then ask questions about any specific doubts that you have.
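To make the BFS guarantee concrete, here is a minimal sketch of BFS shortest-path search on an unweighted directed graph (adjacency stored as a dict of lists; all names are illustrative):

```
from collections import deque

def bfs_distance(graph, start, goal):
    """graph: {node: [successors]}; returns edge count of a shortest path, or None."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist          # first time the goal is popped, dist is minimal
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None                  # goal unreachable
```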
Take a look at the following two algorithms: 1. [Dijkstra's algorithm](http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm) - Single source shortest path 2. [Floyd-Warshall algorithm](http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) - All pairs shortest path
4,991
44,408,625
I am writing a python wrapper for calling programs of the AMOS package (specifically for merging genome assemblies from different sources using good ol' minimus2 from AMOS). The scripts should be called like this when using the shell directly:

```
toAmos -s myinput.fasta -o testoutput.afg
minimus2 testoutput -D REFCOUNT=400 -D OVERLAP=500
```

> [*just for clarification:*
>
> *-toAmos: converts my input.fasta file to .afg format and requires an input sequence argument ("-s") and an output argument ("-o")*
>
> *-minimus2: merges a sequence dataset against reference contigs and requires an argument "-D REFCOUNT=x" for stating the number of reference sequences in your input and an argument "-D OVERLAP=y" for stating the minimum overlap between sequences*]

So within my script I use subprocess.call() to call the necessary AMOS tools. Basically I do this:

```
from subprocess import call

output_basename = "testoutput"
inputfile = "myinput.fasta"
call(["toAmos", "-s " + inputfile, "-o " + output_basename + ".afg"])
call(["minimus2", output_basename, "-D REFCOUNT=400", "-D OVERLAP=500"])
```

But in this case the AMOS tools cannot interpret the arguments anymore. The arguments seem to get modified by subprocess.call() and passed incorrectly. The error message I get is:

```
Unknown option: s myinput.fasta
Unknown option: o testoutput.afg
You must specify an output AMOS AFG file with option -o
/home/jov14/tools/miniconda2/bin/runAmos: unrecognized option '-D REFCOUNT=400'
Command line parsing failed, use -h option for usage info
```

It seems that the arguments get passed without the leading "-"? So I then tried passing the command as a single string (including arguments) like this:

```
call(["toAmos -s " + inputfile +" -o " + output_basename + ".afg"])
```

But then I get this error...

```
OSError: [Errno 2] No such file or directory
```

... presumably because subprocess.call is interpreting the whole string as the name of a single script. I guess I COULD probably try `shell=True` as a workaround, but the internet is FULL of instructions clearly advising against this.

What seems to be the problem here? What can I do?
2017/06/07
[ "https://Stackoverflow.com/questions/44408625", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4685799/" ]
### answer

Either do:

```
call("toAmos -s " + inputfile + " -o " + output_basename + ".afg", shell=True)  # single string (needs shell=True on POSIX)
```

or do:

```
call(["toAmos", "-s", inputfile, "-o", output_basename + ".afg"])  # list of arguments
```

### discussion

In the case of your:

```
call(["toAmos", "-s " + inputfile, "-o " + output_basename + ".afg"])
```

you should supply:

* either `"-s" + inputfile` (no space after `-s`), `"-o" + output_basename + ".afg"`
* or `"-s", inputfile` (separate arguments), `"-o", output_basename + ".afg"`

In the case of your:

```
call(["minimus2", output_basename, "-D REFCOUNT=400", "-D OVERLAP=500"])
```

the `"-D REFCOUNT=400"` and `"-D OVERLAP=500"` should be provided as two items each (`'-D', 'REFCOUNT=400', '-D', 'OVERLAP=500'`), or drop the spaces (`'-DREFCOUNT=400', '-DOVERLAP=500'`).

### additional info

You seem to lack knowledge of how a shell splits a command line; I suggest you always use the single string method (with `shell=True`), unless there are spaces in filenames or you have to use the `shell=False` option; in that case, I suggest you always supply a list of arguments.
`-s` and following input file name should be separate arguments to `call`, as they are in the command line: ``` call(["toAmos", "-s", inputfile, "-o", output_basename + ".afg"]) ```
4,996
46,460,218
I'm new to Python and the world of programming, so let me get to the point. When I run this code and enter an input, say "chicken", it replies with "chicken two leg animal". But I can't get a reply for two-word entries that have a space in between, like "space monkey" (although it appears in my dictionary). So how do I solve this?

My dictionary: example.py

```
dictionary2 = {
    "chicken":"chicken two leg animal",
    "fish":"fish is animal that live under water",
    "cow":"cow is big vegetarian animal",
    "space monkey":"monkey live in space",
}
```

my code: test.py

```
from example import *

print "how can i help you?"
print

user_input = raw_input()
print
print "You asked: " + user_input + "."

response = "I will get back to you. "

input_ls = user_input.split(" ")

processor = {
    "dictionary2":False,
    "dictionary_lookup":[]
}

for w in input_ls:
    if w in dictionary2:
        processor["dictionary2"] = True
        processor["dictionary_lookup"].append(w)

if processor["dictionary2"] is True:
    dictionary_lookup = processor["dictionary_lookup"][0]
    translation = dictionary2[dictionary_lookup]
    response = "what you were looking for is: " + translation

print
print "Response: " + response
```
2017/09/28
[ "https://Stackoverflow.com/questions/46460218", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8681243/" ]
Try this in your code:

```
@Override
protected String doInBackground(Void... params) {
    RequestHandler rh = new RequestHandler();
    String s = rh.sendGetRequest(konfigurasi.URL_GET_ALL);
    return s;
}

@Override
protected void onPostExecute(String s) {
    // edited here
    try {
        JSONObject jsonObject = new JSONObject(s);
        JSONArray jsonArray = jsonObject.getJSONArray("result");
        if(jsonArray.length() == 0){
            Toast.makeText(getApplicationContext(), "No Data", Toast.LENGTH_LONG).show();
            if (loading.isShowing()){
                loading.dismiss();
            }
            return;
        }
    } catch (JSONException e) {
        e.printStackTrace();
    }

    Log.e("TAG",s);

    // Dismiss the progress dialog
    if (loading.isShowing())
        loading.dismiss();

    JSON_STRING = s;
    showEmployee();
}
```

1. Check whether the return value is empty in the `doInBackground` method.
2. Check whether the parameter value is empty in the `onPostExecute` method.
You need to debug why your toast is not showing. You have correctly put the show-Toast code in onPostExecute. Now, to debug: first add a Log statement to inspect the value of s, to see whether it is ever null or empty. If it is, and the toast still does not show, move the dialog-dismiss code before the Toast and check again. If the Toast still does not show, debug your showEmployee() method and what it does.
4,997
50,388,396
I try to compile this code but I get this error:

```
NameError: name 'dtype' is not defined
```

Here is the Python code:

```
# -*- coding: utf-8 -*-
from __future__ import division
import pandas as pd
import numpy as np
import re
import missingno as msno
from functools import partial
import seaborn as sns
sns.set(color_codes=True)

if dtype(data.OPP_CREATION_DATE)=="datetime64[ns]":
    print("OPP_CREATION_DATE is of datetime type")
else:
    print("warning: the type of OPP_CREATION_DATE is not datetime, please fix this")
```

Does anyone have an idea to help me resolve this problem? Thank you
2018/05/17
[ "https://Stackoverflow.com/questions/50388396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9360453/" ]
As written by Amr Keleg, > > If `data` is a pandas dataframe then you can check the type of a > column as follows: > `df['colname'].dtype` or `df.colname.dtype` > > > In that case you need e.g. ``` df['colname'].dtype == np.dtype('datetime64') ``` or ``` df.colname.dtype == np.dtype('datetime64') ```
You should use `type` instead of `dtype`. `type` is a built-in function of python - <https://docs.python.org/3/library/functions.html#type> On the other hand, If `data` is a pandas dataframe then you can check the type of a column as follows: `df['colname'].dtype` or `df.colname.dtype`
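On a reasonably recent pandas, the same check can also be written with the public type-inspection helper; a small sketch (assuming the dataframe and column from the question):

```
import pandas as pd

# True for any datetime64 column, without spelling out the exact unit.
if pd.api.types.is_datetime64_any_dtype(data['OPP_CREATION_DATE']):
    print("OPP_CREATION_DATE is of datetime type")
else:
    print("warning: the type of OPP_CREATION_DATE is not datetime, please fix this")
```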
4,998
52,607,623
I have 3 variables in Python (age, gender, race) and I want to create a unique categorical binary code out of them. Firstly, age is an integer that I want to threshold into decades (10-20, 20-30, 30-40, etc.); gender has 2 values and race contains 4 values. How can I return a complete categorical code out of the three initial variables?
2018/10/02
[ "https://Stackoverflow.com/questions/52607623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1194864/" ]
Here is a method returning a 7 bit code with first 4 bits for age bracket, next 2 for race, and 1 for gender. 4 bits for age imposes the constraint that there can be a total of 16 age brackets only, which is reasonable as it covers the age range 0-159. The 4 bit age code is simply the 4 bit representation of the integer `age//10`, which effectively discretizes the age value into ranges: 0-9, 10-19, ..., 150-159 The codes for race and gender are simply hard coded using the `race_dict` and `gender_dict` ``` def get_code(age, race, gender): #returns fixed size 7 bit code race_dict = {'African':'00','Hispanic':'01','European':'10','Cantonese':'11'} gender_dict = {'Male':'0','Female':'1'} age_code = '{0:b}'.format(age//10).zfill(4) race_code = race_dict[race] gender_code = gender_dict[gender] return age_code + race_code + gender_code ``` > > Input: age:25, race: 'Hispanic', gender: 'Female' > > > 7-bit code: 0010011 > > > If you would like this code to be an integer value between 0-127 for numerical purposes, you can use `int(code_str, 2)` to achieve that. **EDIT:** to get a numpy array from code string, use `np_code_arr = np.fromstring(' '.join(list(code_str)), dtype = int, sep = ' ')`
You can have an `n+1+4` dimensional vector encoding. Given the binary code you require, this would be one way of doing it. The first `n` entries encode the decade: `1` if it belongs to that decade, `0` else. The next, `(n+1)th`, entry could be `1` if male and `0` if female. Similarly for race: `1` if it belongs to that category, `0` else.

Let's say you have decades up to 100. For a 98-year-old, male, white, you could do something like `[0 0 0 0 0 0 0 0 1 1 0 1 0 0 0]`, assuming you start from `10` years up to `100`.

```
import numpy as np

def encodeAge(i, n):
    ageCode=np.zeros(n)
    ageCode[i]=1
    return ageCode

n=10 # number of decades
dict_race={'w':[1,0,0,0],'b':[0,1,0,0],'a':[0,0,1,0],'l':[0,0,0,1]} # white, black, asian, latino
dict_age={i:encodeAge(i, n) for i in range(n)}
dict_gender={'m':[1],'f':[0]}

def encodeAll(age, gender, race):
    # encode age
    code=[]
    code=np.concatenate([code, dict_age[age//10]])
    # encode gender
    code=np.concatenate([code, dict_gender[gender]])
    # encode race
    code=np.concatenate([code, dict_race[race]])
    return code
```

e.g. `encodeAll(12,'m','w')` would return `array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0.])`

This is a slightly longer encoding than the other encodings suggested.
5,001
30,296,531
So here is my first test for S3 buckets using boto: ``` import boto user_name, access_key, secret_key = "testing-user", "xxxxxxxxxxxxx", "xxxxxxxx/xxxxxxxxxxxx/xxxxxxxxxx(xxxxx)" conn = boto.connect_s3(access_key, secret_key) buckets = conn.get_all_buckets() ``` I get the following error: ``` Traceback (most recent call last): File "test-s3.py", line 9, in <module> buckets = conn.get_all_buckets() File "xxxxxx/lib/python2.7/site-packages/boto/s3/connection.py", line 440, in get_all_buckets response.status, response.reason, body) boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden <?xml version="1.0" encoding="UTF-8"?> <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>AKIAJMHSZXU6MORWA5GA</AWSAccessKeyId><StringToSign>GET Mon, 18 May 2015 06:21:58 GMT /</StringToSign><SignatureProvided>c/+YJAZVInsfmd5giMQmrh81DPA=</SignatureProvided><StringToSignBytes>47 45 54 0a 0a 0a 4d 6f 6e 2c 20 31 38 20 4d 61 79 20 32 30 31 35 20 30 36 3a 32 31 3a 35 38 20 47 4d 54 0a 2f</StringToSignBytes><RequestId>5733F9C8926497E6</RequestId><HostId>FXPejeYuvZ+oV2DJLh7HBpryOh4Ve3Mmj8g8bKA2f/4dTWDHJiG8Bpir8EykLYYW1OJMhZorbIQ=</HostId></Error> ``` How am I supposed to fix this? Boto version is 2.38.0
2015/05/18
[ "https://Stackoverflow.com/questions/30296531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1221660/" ]
Had the same issue. In my case, my generated security key had a special character '+' in between. So I deleted my key and regenerated a new key and it worked with the new key with no '+'. [Source](https://stackoverflow.com/a/12262106)
Today, I saw an error response with `SignatureDoesNotMatch` while playing around an S3 API locally and replacing **localhost** with **127.0.0.1** fixed the problem in my case.
5,003
43,837,305
I have a GitHub repository containing a AWS Lambda function. I am currently using Travis CI to build, test and then deploy this function to Lambda if all the tests succeed using ``` deploy: provider: lambda (other settings here) ``` My function has the following dependencies specified in its `requirements.txt` ``` Algorithmia numpy networkx opencv-python ``` I have set the build script for Travis CI to build in the working directory using the below command so as to have the dependencies get properly copied over to my AWS Lambda function. `pip install --target=$TRAVIS_BUILD_DIR -r requirements.txt` The problem is that while the build in Travis CI succeeds and everything is deployed to the Lambda function successfully, testing my Lambda function results in the following error: ``` Unable to import module 'mymodule': Importing the multiarray numpy extension module failed. Most likely you are trying to import a failed build of numpy. If you're working with a numpy git repo, try `git clean -xdf` (removes all files not under version control). Otherwise reinstall numpy. ``` My best guess as to why this is happening is that numpy is being built in the Ubuntu distribution of linux that Travis CI uses but the Amazon Linux that it is running on when executing as a Lambda function isn't able to run it properly. There are numerous forum posts and blog posts such as [this](https://markn.ca/2015/10/python-extension-modules-in-aws-lambda/) one detailing that python modules that need to build C/C++ extensions must be built on a EC2 instance. My question is: This is a real hassle to have to add another complication to the CD pipeline and have to mess around with EC2 instances. Has Amazon come up with some better way to do this (because there really should be a better way to do this) or is there some way to have everything compiled properly in Travis CI or another CI solution? Also, I suppose it's possible that I've mis-identified the problem and that there is some other reason why importing numpy is failing. If anyone has suggestions on how to resolve this that would be great! --- **EDIT:** As suggested by @jordanm it looks like it may be possible to load a docker container with the amazonlinux image when running TravisCI and then perform my build and test inside that container. Unfortunately, while that certainly is easier than using EC2 - I don't think I can use the normal lambda deploy tools in TravisCI - I'll have to write my own deploy script using the aws cli which is a bit of a pain. Any other ideas - or ways to make this smoother? Ideally I would be specify what docker image my builds run on in TravisCI as their default build environment is already using docker...but they don't seem to support that functionality yet: <https://github.com/travis-ci/travis-ci/issues/7726>
2017/05/07
[ "https://Stackoverflow.com/questions/43837305", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3474089/" ]
After quite a bit of tinkering I think I've found something that works. I thought I'd post it here in case others have the same problem. I decided to use [Wercker](http://www.wercker.com/) as they have quite a generous free tier and allow you to customize the docker image for your builds. Turns out there is a docker image that has been created to replicate the exact environments that Lambda functions are executed on! See: <https://github.com/lambci/docker-lambda> When running your builds in this docker container, extensions will be built properly so they can execute successfully on Lambda. In case anyone does want to use Wercker here's the `wercker.yml` I used and it may be helpful as a template: ``` box: lambci/lambda:build-python3.6 build: steps: - script: name: Install Dependencies code: | pip install --target=$WERCKER_SOURCE_DIR -r requirements.txt pip install pytest - script: name: Test code code: pytest - script: name: Cleaning up code: find $WERCKER_SOURCE_DIR \( -name \*.pyc -o -name \*.pyo -o -name __pycache__ \) -prune -exec rm -rf {} + - script: name: Create ZIP code: | cd $WERCKER_SOURCE_DIR zip -r $WERCKER_ROOT/lambda_deploy.zip . -x *.git* deploy: box: golang:latest steps: - arjen/lambda: access_key: $AWS_ACCESS_KEY secret_key: $AWS_SECRET_KEY function_name: yourFunction region: us-west-1 filepath: $WERCKER_ROOT/lambda_deploy.zip ```
Although I appreciate you may not want to add further complications to your project, you could potentially use a Python-focused Lambda management tool for setting up your builds and deployments, say something like [Gordon](https://github.com/jorgebastida/gordon). You could also just use this tool to do your deployment from inside the Amazon Linux Docker container running within Travis. If you wish to change CI providers, [CodeShip](https://codeship.com) allows you to build with any Docker container of your choice, and then [deploy to Lambda](https://blog.codeship.com/integrating-aws-lambda-with-codeship/). [Wercker](https://wercker.com) also runs full Docker-based builds and has many user-submitted deploy "steps", some of which [support deployment to Lambda](https://app.wercker.com/explore/steps/search/lambda).
5,004
53,268,375
I have a use case which often requires copying a blob (file) from one Azure region to another. The file size spans from 25 to 45GB. Needless to say, this sometimes goes very slowly, with inconsistent performance. This might take up to two hours, sometimes more. Distance plays a role, but it differs. Even within the same region, copying is slower than I would expect.

I've been trying:

1. [The Python SDK, and its copy blob method from the blob service.](https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.baseblobservice.baseblobservice?view=azure-python#copy-blob)
2. [The rest API copy blob](https://learn.microsoft.com/en-us/rest/api/storageservices/copy-blob)
3. az copy from the CLI.

Although I didn't really expect different results, since all of them use the same backend methods.

Is there any approach I am missing? Is there any way to speed up this process, or any kind of blob sharing integrated in Azure? VHD/disk sharing could also do.
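For reference, the SDK call in question; a minimal sketch using the legacy `azure-storage` blob service the linked docs describe (account names, container, and SAS token are placeholders):

```
from azure.storage.blob import BlockBlobService

dst = BlockBlobService(account_name='dstaccount', account_key='...')
src_url = 'https://srcaccount.blob.core.windows.net/vhds/bigfile.vhd?<sas>'

# Server-side, asynchronous copy: the call returns quickly and the service
# copies in the background; poll get_blob_properties(...).properties.copy.status
# until it reports 'success'.
dst.copy_blob('vhds', 'bigfile.vhd', src_url)
```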
2018/11/12
[ "https://Stackoverflow.com/questions/53268375", "https://Stackoverflow.com", "https://Stackoverflow.com/users/794967/" ]
Data model is wrong. Should be something like this: ``` SQL> create table customer 2 (customer_id number primary key, 3 first_name varchar2(20), 4 last_name varchar2(20), 5 phone varchar2(20)); Table created. SQL> create table items 2 (item_id number primary key, 3 item_name varchar2(20)); Table created. SQL> create table orders 2 (order_id number primary key, 3 customer_id number constraint fk_ord_cust references customer (customer_id) 4 ); Table created. SQL> create table order_details 2 (order_det_id number primary key, 3 order_id number constraint fk_orddet_ord references orders (order_id), 4 item_id number constraint fk_orddet_itm references items (item_id), 5 amount number 6 ); Table created. ``` Some quick & dirty sample data: ``` SQL> insert all 2 into customer values (100, 'Little', 'Foot', '00385xxxyyy') 3 into items values (1, 'Apple') 4 into items values (2, 'Milk') 5 into orders values (55, 100) 6 into order_details values (1000, 55, 1, 5) -- I'm ordering 5 apples 7 into order_details values (1001, 55, 2, 2) -- and 2 milks 8 select * from dual; 6 rows created. SQL> select c.first_name, sum(d.amount) count_of_items 2 from customer c join orders o on o.customer_id = c.customer_id 3 join order_details d on d.order_id = o.order_id 4 group by c.first_name; FIRST_NAME COUNT_OF_ITEMS -------------------- -------------- Little 7 SQL> ``` Or, list of items: ``` SQL> select c.first_name, i.item_name, d.amount 2 from customer c join orders o on o.customer_id = c.customer_id 3 join order_details d on d.order_id = o.order_id 4 join items i on i.item_id = d.item_id; FIRST_NAME ITEM_NAME AMOUNT -------------------- -------------------- ---------- Little Apple 5 Little Milk 2 SQL> ```
There is no relation between the two tables from which you wish to combine data. Kindly create a foreign key relation between the two tables, which would give you a common value based on which you can extract data. For example, the column Customer_id from the customers table could be a foreign key in the orders table, identifying the order placed by each customer.

The following query should then return the expected result:

```
SELECT customer.first_name, customer.last_name, orders.item_name, orders.quantity
FROM customer, orders
WHERE customer.customer_id = orders.customer_id
ORDER BY customer.first_name;
```

The query specified by you does not return any result because there is no match between any order id and customer id in the two tables, as the two depict different values.

Hope it helps. Cheers!
5,005
55,574,215
I'm logging some Unicode characters to a file using "logging" in Python 3. The code works in the terminal, but fails with a UnicodeEncodeError in PyCharm. I load my logging configuration using `logging.config.fileConfig`. In the configuration, I specify a file handler with `encoding = utf-8`. Logging to console works fine. I'm using PyCharm 2019.1.1 (Community Edition). I don't think I've changed any relevant setting, but when I ran the same code in the PyCharm on another computer, the error was **not** reproduced. Therefore, I suspect the problem is related to a PyCharm setting. Here is a minimal example: ```py import logging from logging.config import fileConfig # ok print('1. café') # ok logging.error('2. café') # UnicodeEncodeError fileConfig('logcfg.ini') logging.error('3. café') ``` The content of logcfg.ini (in the same directory) is the following: ``` [loggers] keys = root [handlers] keys = file_handler [formatters] keys = formatter [logger_root] level = INFO handlers = file_handler [handler_file_handler] class = logging.handlers.RotatingFileHandler formatter = formatter args = ('/tmp/test.log',) encoding = utf-8 [formatter_formatter] format = %(levelname)s: %(message)s ``` I expect to see the first two logging messages in the console, and the third one in the logging file. The first two logging statements worked fine, but the third one failed. Here is the complete console output in PyCharm: ``` 1. café ERROR:root:2. café --- Logging error --- Traceback (most recent call last): File "/anaconda3/lib/python3.6/logging/__init__.py", line 996, in emit stream.write(msg) UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 13: ordinal not in range(128) Call stack: File "/Users/klkh/test.py", line 12, in <module> logging.error('3. café') Message: '3. café' Arguments: () ```
2019/04/08
[ "https://Stackoverflow.com/questions/55574215", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1654411/" ]
Different OSes need different solutions.

On Windows:

1. Download the lib file, <http://www.rarlab.com/rar/UnRARDLL.exe>, and install it.
2. You'd better choose the default path, C:\Program Files (x86)\UnrarDLL\
3. The most important step is to add the environment variable: for the variable name enter UNRAR_LIB_PATH (pay attention, it must be exactly this!). Then, if your system is 64 bit, enter C:\Program Files (x86)\UnrarDLL\x64\UnRAR64.dll as the value; if your system is 32 bit, enter C:\Program Files (x86)\UnrarDLL\UnRAR.dll.
4. After saving the environment variable, restart your PyCharm.

On Linux you need to build the .so file, which is a little more difficult.

1. Likewise, download the lib source, <http://www.rarlab.com/rar/unrarsrc-5.4.5.tar.gz> (you can choose the latest version).
2. After downloading, extract the archive to get the directory unrar; `cd unrar`, then `make lib`, then `make install-lib`; this produces the file `libunrar.so` (in /usr/lib).
3. Lastly, you also need to set the environment variable: open the file `profile` with `vim /etc/profile`, add `export UNRAR_LIB_PATH=/usr/lib/libunrar.so` at the end of the file, save it, then run `source /etc/profile` to apply the change.
4. Rerun the .py file.

The resource website: <https://blog.csdn.net/ysy950803/article/details/52939708>
Additionally, after you do the things mentioned by Tom.chen.kang and balandongiv, if you're using a 32 bit DLL with 64 bit Python, or vice versa, then you'll probably get an error like this when you try to import unrar:

> OSError: [WinError 193] %1 is not a valid Win32 application

In that case do this:

For 32 bit Python & 32 bit DLL, change your environment variable ***UNRAR_LIB_PATH*** to:

```
C:\Program Files (x86)\UnrarDLL\UnRAR.dll
```

For 64 bit Python & 64 bit DLL, change your environment variable ***UNRAR_LIB_PATH*** to:

```
C:\Program Files (x86)\UnrarDLL\x64\UnRAR.dll
```

Restart your PyCharm or other development environment.
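Once the variable points at the right DLL, a quick way to verify everything is wired up (a sketch; the archive name is a placeholder) is:

```
# Raises LookupError at import time if the unrar library cannot be located.
from unrar import rarfile

archive = rarfile.RarFile('test.rar')
print(archive.namelist())  # list the members without extracting
```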
5,008
58,799,259
I am using Windows 10, PostgreSQL 12, Python 3.7.5. I created username `odoo` with password `odoo`, and created database `mydb`. The source code is <https://github.com/odoo/odoo/tree/aa0554d224337e1d966479a351a3ed059d297765>

I run the command

```
python odoo-bin -r odoo -w odoo --addons-path=addons --db-filter=mydb$
```

Error

```
E:\source_code\github.com\xxxxxxxx\odoo>python odoo-bin -r odoo -w odoo --addons-path=addons --db-filter=mydb$
2019-11-11 09:20:31,240 7372 INFO ? odoo: Odoo version 13.0
2019-11-11 09:20:31,240 7372 INFO ? odoo: addons paths: ['E:\\source_code\\github.com\\xxxxxxxx\\odoo\\odoo\\addons', 'c:\\users\\xxxxxxxx\\appdata\\local\\openerp s.a\\odoo\\addons\\13.0', 'e:\\source_code\\github.com\\xxxxxxxx\\odoo\\addons', 'e:\\source_code\\github.com\\xxxxxxxx\\odoo\\odoo\\addons']
2019-11-11 09:20:31,241 7372 INFO ? odoo: database: odoo@default:default
2019-11-11 09:20:31,397 7372 INFO ? odoo.addons.base.models.ir_actions_report: You need Wkhtmltopdf to print a pdf version of the reports.
2019-11-11 09:20:31,517 7372 INFO ? odoo.service.server: HTTP service (werkzeug) running on D1CMPS_VyDN.mptelecom.com:8069
2019-11-11 09:20:45,250 7372 INFO ? odoo.http: HTTP Configuring static files
2019-11-11 09:20:45,296 7372 INFO ? odoo.sql_db: Connection to the database failed
2019-11-11 09:20:45,299 7372 INFO ? werkzeug: 192.168.50.215 - - [11/Nov/2019 09:20:45] "GET / HTTP/1.1" 500 - 0 0.000 0.039
2019-11-11 09:20:45,333 7372 ERROR ? werkzeug: Error on request:
Traceback (most recent call last):
  File "C:\Program Files\Python37\lib\site-packages\werkzeug\serving.py", line 270, in run_wsgi
    execute(self.server.app)
  File "C:\Program Files\Python37\lib\site-packages\werkzeug\serving.py", line 258, in execute
    application_iter = app(environ, start_response)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\server.py", line 414, in app
    return self.app(e, s)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\wsgi_server.py", line 142, in application
    return application_unproxied(environ, start_response)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\wsgi_server.py", line 117, in application_unproxied
    result = odoo.http.root(environ, start_response)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1281, in __call__
    return self.dispatch(environ, start_response)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1251, in __call__
    return self.app(environ, start_wrapped)
  File "C:\Program Files\Python37\lib\site-packages\werkzeug\wsgi.py", line 766, in __call__
    return self.app(environ, start_response)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1415, in dispatch
    self.setup_db(httprequest)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1338, in setup_db
    httprequest.session.db = db_monodb(httprequest)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1499, in db_monodb
    dbs = db_list(True, httprequest)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1466, in db_list
    dbs = odoo.service.db.list_dbs(force)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\db.py", line 378, in list_dbs
    with closing(db.cursor()) as cr:
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 649, in cursor
    return Cursor(self.__pool, self.dbname, self.dsn, serialized=serialized)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 186, in __init__
    self._cnx = pool.borrow(dsn)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 532, in _locked
    return fun(self, *args, **kwargs)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 600, in borrow
    **connection_info)
  File "C:\Program Files\Python37\lib\site-packages\psycopg2\__init__.py", line 126, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: FATAL: role "odoo" is not permitted to log in
- - -
2019-11-11 09:20:45,442 7372 INFO ? odoo.sql_db: Connection to the database failed
2019-11-11 09:20:45,443 7372 INFO ? werkzeug: 192.168.50.215 - - [11/Nov/2019 09:20:45] "GET /favicon.ico HTTP/1.1" 500 - 0 0.000 0.057
2019-11-11 09:20:45,449 7372 ERROR ? werkzeug: Error on request:
Traceback (most recent call last):
  File "C:\Program Files\Python37\lib\site-packages\werkzeug\serving.py", line 270, in run_wsgi
    execute(self.server.app)
  File "C:\Program Files\Python37\lib\site-packages\werkzeug\serving.py", line 258, in execute
    application_iter = app(environ, start_response)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\server.py", line 414, in app
    return self.app(e, s)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\wsgi_server.py", line 142, in application
    return application_unproxied(environ, start_response)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\wsgi_server.py", line 117, in application_unproxied
    result = odoo.http.root(environ, start_response)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1281, in __call__
    return self.dispatch(environ, start_response)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1251, in __call__
    return self.app(environ, start_wrapped)
  File "C:\Program Files\Python37\lib\site-packages\werkzeug\wsgi.py", line 766, in __call__
    return self.app(environ, start_response)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1415, in dispatch
    self.setup_db(httprequest)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1338, in setup_db
    httprequest.session.db = db_monodb(httprequest)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1499, in db_monodb
    dbs = db_list(True, httprequest)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1466, in db_list
    dbs = odoo.service.db.list_dbs(force)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\db.py", line 378, in list_dbs
    with closing(db.cursor()) as cr:
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 649, in cursor
    return Cursor(self.__pool, self.dbname, self.dsn, serialized=serialized)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 186, in __init__
    self._cnx = pool.borrow(dsn)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 532, in _locked
    return fun(self, *args, **kwargs)
  File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 600, in borrow
    **connection_info)
  File "C:\Program Files\Python37\lib\site-packages\psycopg2\__init__.py", line 126, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: FATAL: role "odoo" is not permitted to log in
- - -
```

How can I start Odoo 13 successfully from source code?
2019/11/11
[ "https://Stackoverflow.com/questions/58799259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3728901/" ]
I think you need to configure the DB to trust your IP address; make the following changes in `pg_hba.conf`:

```
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
host    all             all             MY_IP/24                trust
```

See also [this](https://www.odoo.com/documentation/13.0/setup/install.html#id3).
Odoo 13 has a default user name `odoo`; that user connects to Postgres and uses the most recently created db. You can pass the database configuration in your config file, odoo 13 /debian/odoo.conf:

```
[options]
; This is the password that allows database operations:
; admin_passwd = admin
db_host = False
db_port = False
db_user = odoo
db_password = False
;addons_path = /usr/lib/python3/dist-packages/odoo/addons
```

You can set the db host, port, user, and password there and use it.
5,010
54,525,141
I have a python environment (it could be conda, virtualenv, venv or global python) - I have a python script - hello.py - that I want to execute within that environment. If I get the path to the python binary within the environment, for example, in windows with a conda environment called myenv, `/path/to/myenv/Scripts/python.exe`, and if I execute the script using that python, as shown below, am I guaranteed that the script is executed in that environment, independent of the type of virtual environment? If not, what can I do to ensure such a guarantee? `/path/to/myenv/Scripts/python.exe path/to/hello.py`
2019/02/04
[ "https://Stackoverflow.com/questions/54525141", "https://Stackoverflow.com", "https://Stackoverflow.com/users/456735/" ]
Yes, you're right! Furthermore, you can check which executable is in use with the following snippet:

```
import sys
print(sys.executable)
```

Then you will see the absolute path, e.g. `/opt/miniconda/envs/epm/bin/python`.

If you're using a Unix system, you can run:

```
$ echo "import sys; print(sys.version); print(sys.executable)" | /opt/miniconda/envs/epm/bin/python
2.7.15 |Anaconda, Inc.| (default, Dec 14 2018, 13:10:39)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
/opt/miniconda/envs/epm/bin/python
```
I suspect not. There are a few environment variables (e.g. `PATH`) which are changed when you activate a virtualenv. You can open up `myenv/bin/activate` in a text editor to see what it does. Is there a particular reason you want to call the executable directly, rather than use the environment as designed? (e.g. `. ./myenv/bin/activate; python hello.py`)
5,012
36,911,060
I have a JSON file containing various objects each containing elements. With my python script, I only keep the objects I want, and then put the elements I want in a list. But the element has a prefix, which I'd like to suppress form the list. The post-script JSON looks like that: ``` { "ip_prefix": "184.72.128.0/17", "region": "us-east-1", "service": "EC2" } ``` The "IP/mask" is what I'd like to keep. The List looks like that: '"ip\_prefix": **"23.20.0.0/14"**,' So what can I do to only keep **"23.20.0.0/14"** in the list? Here is the code: ``` json_data = open(jsonsourcefile) data = json.load(json_data) print (destfile) d=[] for objects in (data['prefixes']): if servicerequired in json.dumps(objects): #print(json.dumps(objects, sort_keys=True, indent=4)) with open(destfile, 'a') as file: file.write(json.dumps(objects, sort_keys=True, indent=4 )) with open(destfile, 'r') as reads: liste = list() for strip in reads: if "ip_prefix" in strip: strip = strip.strip() liste.append(strip) print(liste) ``` Thanks, dersoi
2016/04/28
[ "https://Stackoverflow.com/questions/36911060", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5532788/" ]
You can also add this variable by using a preprocess hook. The following code will add the `is_front` variable so it can be used in the `html.html.twig` template: ``` // Adds the is_front variable to html.html.twig template. function mytheme_preprocess_html(&$variables) { $variables['is_front'] = \Drupal::service('path.matcher')->isFrontPage(); } ``` Then inside the twig template you could check the variable like so: ``` {% if is_front %} THIS IS THE FRONTPAGE {% endif %} ```
If you want to show a node within the front page and it should look just like the actual node page, you can create a new display for the node, like "On Frontpage". For that display you create a new node template (be careful to use the right naming convention for the twig file, otherwise it won't work). Then you tell the front page to display nodes using the "On Frontpage" display, which will use a different template (including your desired div). Twig templates naming convention: <https://www.drupal.org/node/2354645> So the steps: 1. create a new display mode for your nodes 2. create a new template for that particular display mode 3. tell the frontpage to display nodes using that display
5,013
31,573,399
I have a largish pandas dataframe (1.5gig .csv on disk). I can load it into memory and query it. I want to create a new column that is combined value of two other columns, and I tried this: ``` def combined(row): row['combined'] = row['col1'].join(str(row['col2'])) return row df = df.apply(combined, axis=1) ``` This results in my python process being killed, presumably because of memory issues. A more iterative solution to the problem seems to be: ``` df['combined'] = '' col_pos = list(df.columns).index('combined') crs_pos = list(df.columns).index('col1') sub_pos = list(df.columns).index('col2') for row_pos in range(0, len(df) - 1): df.iloc[row_pos, col_pos] = df.iloc[row_pos, sub_pos].join(str(df.iloc[row_pos, crs_pos])) ``` This of course seems very unpandas. And is very slow. Ideally I would like something like `apply_chunk()` which is the same as apply but only works on a piece of the dataframe. I thought `dask` might be an option for this, but `dask` dataframes seemed to have other issues when I used them. This has to be a common problem though, is there a design pattern I should be using for adding columns to large pandas dataframes?
2015/07/22
[ "https://Stackoverflow.com/questions/31573399", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3137396/" ]
I would try using list comprehension + [`itertools`](https://docs.python.org/2/library/itertools.html): ``` df = pd.DataFrame({ 'a': ['ab'] * 200, 'b': ['ffff'] * 200 }) import itertools [a.join(b) for (a, b) in itertools.izip(df.a, df.b)] ``` It might be "unpandas", but pandas doesn't seem to have a `.str` method that helps you here, and it isn't "unpythonic". To create another column, just use: ``` df['c'] = [a.join(b) for (a, b) in itertools.izip(df.a, df.b)] ``` Incidentally, you can also get your chunking using: ``` [a.join(b) for (a, b) in itertools.izip(df.a[10: 20], df.b[10: 20])] ``` If you'd like to play with parallelization. I would first try the above version, as list comprehension and itertools are often surprisingly fast, and parallelization would require an overhead that would need to be outweighed.
One nice way to create a new column in [`pandas`](http://pandas.pydata.org) or [`dask.dataframe`](http://dask.pydata.org/en/latest/dataframe.html) is with the [`.assign`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html) method. ``` In [1]: import pandas as pd In [2]: df = pd.DataFrame({'x': [1, 2, 3, 4], 'y': ['a', 'b', 'a', 'b']}) In [3]: df Out[3]: x y 0 1 a 1 2 b 2 3 a 3 4 b In [4]: df.assign(z=df.x * df.y) Out[4]: x y z 0 1 a a 1 2 b bb 2 3 a aaa 3 4 b bbbb ``` However, if your operation is highly custom (as it appears to be) and if Python iterators are fast enough (as they seem to be) then you might just want to stick with that. Anytime you find yourself using `apply` or `iloc` in a loop it's likely that Pandas is operating much slower than is optimal.
5,014
48,125,575
I am trying to read the following code for backpropagation in Python:

```
probs = exp_scores /np.sum(exp_scores, axis=1, keepdims=True)

#Backpropagation
delta3 = probs
delta3[range(num_examples), y] -= 1
dW2 = (a1.T).dot(delta3)
....
```

but I cannot understand the following line of the code:

```
delta3[range(num_examples), y] -= 1
```

Could you please tell me what this does? Thank you very much for your help!
2018/01/06
[ "https://Stackoverflow.com/questions/48125575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7962244/" ]
There are two things here. First, it is using numpy fancy indexing to select only part of `delta3`. Second, it is subtracting 1 from every selected element. More precisely, `delta3[range(num_examples), y]` selects, for each row `i` from 0 to `num_examples - 1`, the single element in column `y[i]`, i.e. the entry corresponding to the true class of each example.
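A tiny self-contained demonstration of that indexing (the numbers are made up):

```
import numpy as np

probs = np.array([[0.2, 0.5, 0.3],
                  [0.1, 0.1, 0.8]])
y = np.array([1, 2])        # true class of each example

delta3 = probs.copy()
delta3[range(2), y] -= 1    # row i, column y[i]
print(delta3)
# [[ 0.2 -0.5  0.3]
#  [ 0.1  0.1 -0.2]]
```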
If you're interested in *why* it's computed this way: it's the backpropagation through the cross-entropy loss.

* `probs` is the vector of class probabilities (computed in a forward pass via softmax).
* `delta3` is the error signal from the loss function.
* `y` holds the ground-truth classes for the mini-batch.

Everything else is just math, which is well explained in [this post](https://deepnotes.io/softmax-crossentropy), and they end up with the same numpy expression.
5,015
49,958,177
I am a beginner in Python and am working on Python 3.6.5. I was trying to create a chatbot, but I don't understand how to use a comma to separate the two strings (red and Red), because the shell says that it is invalid syntax (the comma is highlighted but nothing else). What have I done wrong?

```
colour=input("What is your favourite colour? ")
if colour=="red", "Red":
    print("Red is my favourite colour as well")
```

Note: I know this question is very similar to others on the forum, but considering I am only a beginner (I literally started learning Python on Friday), the answers to the other questions were a bit confusing because they had different code, so I asked this question using what I am learning.
2018/04/21
[ "https://Stackoverflow.com/questions/49958177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9679321/" ]
Use `in` ``` colour= input("What is your favourite colour? ") if colour in ("red", "Red"): print("Red is my favourite colour as well") ```
You could use `if colour in ['red', 'Red', 'RED', 'ReD']` as mentioned earlier, or you could just sanitize the input:

```
colour = input("What is your favourite colour? ")

if colour.lower() == "red":
    print("Red is my favourite colour as well")
```
5,016
20,322,969
I am not sure if there is a solution for this on Stack Overflow, so apologies if this is a duplicate. There are a number of ways of converting the string:

```
s = '[1, 2, 3]'
```

to a list

```
t = [1, 2, 3]
```

but I am looking for the most straightforward pythonic way of doing this. Also, performance matters.
2013/12/02
[ "https://Stackoverflow.com/questions/20322969", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1778980/" ]
One should use [ast.literal\_eval](http://docs.python.org/2/library/ast.html#ast.literal_eval): ``` >>> import ast >>> ast.literal_eval('[1,2,3]') [1, 2, 3] ```
Why not use the json library?

```
import json

# convert str to list
t = json.loads(s)

# back to string
s2 = json.dumps(t)
```
5,017
49,949,398
I am facing an issue while importing Java code which uses some external jar, say the selenium standalone server jar. I tried with normal Java code that uses no jars; in this case I am able to import and run the code, but when I use some jars in the Java code and then try to import that class into Jython, it gives an error. Here is the sample code I used; I created a jar of the code below, "jython_test.jar":

```
package Jython_workspace;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class selenium_try {

    public void launch_browser() {
        WebDriver driver = new FirefoxDriver();
        System.out.println("Hello Google...");
        driver.get("http://google.com");
    }
}
```

This code uses selenium-server-standalone-3.11.0.jar.

Importing the Java jar in Jython:

```
import sys
sys.path.append("jython_test.jar")
from jython_test import selenium_try as sel
beach = sel.launch_browser()
```

The error encountered:

```
Traceback (most recent call last):
  File "D:\PD\sublime_code\Jython_workspace\try_selenium_python.py", line 5, in <module>
    from jython_test import selenium_try as sel
java.lang.NoClassDefFoundError: org/openqa/selenium/WebDriver
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Unknown Source)
        at org.python.core.Py.loadAndInitClass(Py.java:991)
        at org.python.core.Py.findClassInternal(Py.java:926)
        at org.python.core.Py.findClassEx(Py.java:977)
        at org.python.core.packagecache.SysPackageManager.findClass(SysPackageManager.java:133)
        at org.python.core.packagecache.PackageManager.findClass(PackageManager.java:33)
        at org.python.core.packagecache.SysPackageManager.findClass(SysPackageManager.java:122)
        at org.python.core.PyJavaPackage.__findattr_ex__(PyJavaPackage.java:134)
        at org.python.core.PyObject.__findattr__(PyObject.java:946)
        at org.python.core.imp.importFromAs(imp.java:1160)
        at org.python.core.imp.importFrom(imp.java:1132)
        at org.python.pycode._pyx0.f$0(D:\PD\sublime_code\Jython_workspace\try_selenium_python.py:7)
        at org.python.pycode._pyx0.call_function(D:\PD\sublime_code\Jython_workspace\try_selenium_python.py)
        at org.python.core.PyTableCode.call(PyTableCode.java:167)
        at org.python.core.PyCode.call(PyCode.java:18)
        at org.python.core.Py.runCode(Py.java:1386)
        at org.python.util.PythonInterpreter.execfile(PythonInterpreter.java:296)
        at org.python.util.jython.run(jython.java:362)
        at org.python.util.jython.main(jython.java:142)
Caused by: java.lang.ClassNotFoundException: org.openqa.selenium.WebDriver
        at org.python.core.SyspathJavaLoader.findClass(SyspathJavaLoader.java:131)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        ... 20 more

java.lang.NoClassDefFoundError: java.lang.NoClassDefFoundError: org/openqa/selenium/WebDriver
```

How can this issue be solved when the Java code uses 3rd-party jars that we then want to import in Jython?
2018/04/20
[ "https://Stackoverflow.com/questions/49949398", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3814582/" ]
**1)** We group by Name (assuming `rollapply` should be done separately for each `Name`) and then use `width = list(-seq(4))` with `rollapply` which uses offsets -1, -2, -3, -4 for each application of `mean`. (Offset 0 would be the current point but we want the 4 prior here.) Not clear what you are referring to regarding start time so that part has been left out. Also I have assumed that the data is sorted (which is the case in the question). You might also want to convert the dates to `"Date"` class but that isn't needed to answer the question if the rows are already sorted. ``` library(zoo) roll <- function(x) rollapply(x, list(-seq(4)), mean, fill = NA) transform(DF, Average = ave(Values, Name, FUN = roll)) ``` **2)** or if you like dplyr then using `roll` from above: ``` library(dplyr) library(zoo) DF %>% group_by(Name) %>% mutate(Average = roll(Values)) %>% ungroup() ```
An option is to use `zoo::rollapply` along with `dplyr::lag` as: ``` library(dplyr) library(lubridate) library(zoo) df %>% mutate(DATE = mdy(DATE)) %>% #Convert to Date arrange(Name, DATE) %>% #Order on Name and DATE mutate(Avg = rollapply(Values, 4, mean, fill= NA, align = "right")) %>% mutate(Average = lag(Avg)) %>% # This shows mean for previous 4 rows select(-Avg) # Name DATE Values Average # 1 TestA 2017-03-03 50 NA # 2 TestA 2017-03-04 75 NA # 3 TestA 2017-03-05 25 NA # 4 TestA 2017-03-06 100 NA # 5 TestA 2017-03-07 100 62.50 # 6 TestA 2017-03-08 50 75.00 # 7 TestA 2017-03-09 80 68.75 # 8 TestA 2017-03-10 90 82.50 # 9 TestA 2017-03-11 25 80.00 # 10 TestA 2017-03-12 0 61.25 # 11 TestA 2017-03-13 0 48.75 # 12 TestA 2017-03-14 0 28.75 # 13 TestA 2017-03-15 0 6.25 # 14 TestA 2017-03-16 50 0.00 ``` **Data:** ``` df <- read.table(text = "Name DATE Values TestA '3/3/2017' 50 TestA '3/4/2017' 75 TestA '3/5/2017' 25 TestA '3/6/2017' 100 TestA '3/7/2017' 100 TestA '3/8/2017' 50 TestA '3/9/2017' 80 TestA '3/10/2017' 90 TestA '3/11/2017' 25 TestA '3/12/2017' 0 TestA '3/13/2017' 0 TestA '3/14/2017' 0 TestA '3/15/2017' 0 TestA '3/16/2017' 50", header = TRUE, stringsAsFactors = FALSE) ```
5,018
53,846,322
I am exporting LOG\_INTERVAL with a value of 5. How can I use this env value in Python as the argument to `time.sleep`? ``` import os import time print("Goodbye, World!") time.sleep(os.environ.get('LOG_INTERVAL')) ``` ``` error:- Goodbye, World! Traceback (most recent call last): File "test.py", line 4, in time.sleep(os.environ.get('LOG_INTERVAL')) TypeError: a float is required ```
2018/12/19
[ "https://Stackoverflow.com/questions/53846322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10809642/" ]
The value you get from the environment is a string. You have to convert it to a number in order for it to be an acceptable value for `time.sleep()`: ``` time.sleep(float(os.environ.get('LOG_INTERVAL'))) ```
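A minimal sketch of the same idea with a fallback, assuming it is acceptable to default to 5 seconds when the variable is unset (the default value here is an assumption, not something from the question):

```
import os
import time

# os.environ.get returns a string (or None), so convert explicitly;
# '5' is an assumed fallback used only when the variable is unset.
interval = float(os.environ.get('LOG_INTERVAL', '5'))
time.sleep(interval)
```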
I think `LOG_INTERVAL` will be returned as a string. Check its type with `type(os.environ.get('LOG_INTERVAL'))`. If it is an int, or a string containing nothing but digits and at most one full stop, `time.sleep(float(os.environ.get('LOG_INTERVAL')))` should convert it to a float and do the trick.
5,019
5,559,810
**Question** It seems that PyWin32 is comfortable with giving null-terminated unicode strings as return values. I would like to deal with these strings the 'right' way. Let's say I'm getting a string like: `u'C:\\Users\\Guest\\MyFile.asy\x00\x00sy'`. This appears to be a C-style null-terminated string hanging out in a Python unicode object. I want to trim this bad boy down to a regular ol' string of characters that I could, for example, display in a window title bar. Is trimming the string off at the first null byte the right way to deal with it? I didn't expect to get a return value like this, so I wonder if I'm missing something important about how Python, Win32, and unicode play together... or if this is just a PyWin32 bug. **Background** I'm using the Win32 file chooser function [`GetOpenFileNameW`](http://docs.activestate.com/activepython/2.7/pywin32/win32gui__GetSaveFileNameW_meth.html) from the PyWin32 package. According to the documentation, this function returns a tuple containing the full filename path as a Python unicode object. When I open the dialog with an existing path and filename set, I get a strange return value. For example I had the default set to: `C:\\Users\\Guest\\MyFileIsReallyReallyReallyAwesome.asy` In the dialog I changed the name to `MyFile.asy` and clicked save. The full path part of the return value was: u'C:\Users\Guest\MyFile.asy\x00wesome.asy'` I expected it to be: `u'C:\\Users\\Guest\\MyFile.asy'` The function is returning a recycled buffer without trimming off the terminating bytes. Needless to say, the rest of my code wasn't set up for handling a C-style null-terminated string. **Demo Code** The following code demonstrates null-terminated string in return value from GetSaveFileNameW. Directions: In the dialog change the filename to 'MyFile.asy' then click Save. Observe what is printed to the console. The output I get is `u'C:\\Users\\Guest\\MyFile.asy\x00wesome.asy'`. ``` import win32gui, win32con if __name__ == "__main__": initial_dir = 'C:\\Users\\Guest' initial_file = 'MyFileIsReallyReallyReallyAwesome.asy' filter_string = 'All Files\0*.*\0' (filename, customfilter, flags) = \ win32gui.GetSaveFileNameW(InitialDir=initial_dir, Flags=win32con.OFN_EXPLORER, File=initial_file, DefExt='txt', Title="Save As", Filter=filter_string, FilterIndex=0) print repr(filename) ``` Note: If you don't shorten the filename enough (for example, if you try MyFileIsReally.asy) the string will be complete without a null byte. **Environment** Windows 7 Professional 64-bit (no service pack), Python 2.7.1, PyWin32 Build 216 **UPDATE: PyWin32 Tracker Artifact** Based on the comments and answers I have received so far, this is likely a pywin32 bug so I filed a [tracker artifact](https://sourceforge.net/tracker/?func=detail&aid=3277647&group_id=78018&atid=551954). **UPDATE 2: Fixed!** Mark Hammond reported in the tracker artifact that this is indeed a bug. A fix was checked in to rev f3fdaae5e93d, so hopefully that will make the next release. I think Aleksi Torhamo's answer below is the best solution for versions of PyWin32 before the fix.
2011/04/05
[ "https://Stackoverflow.com/questions/5559810", "https://Stackoverflow.com", "https://Stackoverflow.com/users/182642/" ]
I'd say it's a bug. The right way to deal with it would probably be fixing pywin32, but in case you aren't feeling adventurous enough, just trim it. You can get everything before the first `'\x00'` with `filename.split('\x00', 1)[0]`.
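A quick demonstration of the trim, using the corrupted value from the question (the string literal here is just the example value, not a live call to `GetSaveFileNameW`):

```
raw = u'C:\\Users\\Guest\\MyFile.asy\x00wesome.asy'  # value as returned by the buggy call
clean = raw.split(u'\x00', 1)[0]                      # keep everything before the first NUL
print(repr(clean))                                    # u'C:\\Users\\Guest\\MyFile.asy'
```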
This doesn't happen on the version of PyWin32/Windows/Python I tested; I don't get any nulls in the returned string even if it's very short. You might investigate if a newer version of one of the above fixes the bug.
5,022
49,737,459
Forgive me the possibly trivial question, but: *How do I run the script published by pybuilder?* --- I'm trying to follow the official [Pybuilder Tutorial](http://pybuilder.github.io/documentation/tutorial.html#.WsqupUuYNhE). I've walked through the steps and successfully generated a project that * runs unit tests * computes coverage * generates `setup.py` * generates a `.tar.gz` that can be used by `pip install`. That's all very nice, but I still don't see what the actual runnable artifact is? All that is contained in the `target` directory seems to look more or less exactly the same as the content in `src` directory + additional reports and installable archives. The tutorial itself at the end of the "Adding a runnable Script"-section concludes that "the script was picked up". Ok, it was picked up, now how do I run it? At no point does the tutorial demonstrate that we can actually print the string "Hello, World!" on the screen, despite the fact that the whole toy-project is about doing exactly that. --- **MCVE** Below is a Bash Script that generates the following directory tree with Python source files and build script: ``` projectRoot ├── build.py └── src └── main ├── python │   └── pkgRoot │   ├── __init__.py │   ├── pkgA │   │   ├── __init__.py │   │   └── modA.py │   └── pkgB │   ├── __init__.py │   └── modB.py └── scripts └── entryPointScript.py 7 directories, 7 files ================================================================================ projectRoot/build.py -------------------------------------------------------------------------------- from pybuilder.core import use_plugin use_plugin("python.core") use_plugin("python.distutils") default_task = "publish" ================================================================================ projectRoot/src/main/scripts/entryPointScript.py -------------------------------------------------------------------------------- #!/usr/bin/env python from pkgRoot.pkgB.modB import b if __name__ == "__main__": print(f"Hello, world! 42 * 42 - 42 = {b(42)}") ================================================================================ projectRoot/src/main/python/pkgRoot/pkgA/modA.py -------------------------------------------------------------------------------- def a(n): """Computes square of a number.""" return n * n ================================================================================ projectRoot/src/main/python/pkgRoot/pkgB/modB.py -------------------------------------------------------------------------------- from pkgRoot.pkgA.modA import a def b(n): """Evaluates a boring quadratic polynomial.""" return a(n) - n ``` --- **The full script that generates example project (Disclaimer: provided as-is, modifies files and directories, execute at your own risk):** ``` #!/bin/bash # Creates a very simple hello-world like project # that can be build with PyBuilder, and describes # the result. # Uses BASH heredocs and `cut -d'|' -f2-` to strip # margin from indented code. 
# strict mode set -eu # set up directory tree for packages and scripts ROOTPKG_PATH="projectRoot/src/main/python/pkgRoot" SCRIPTS_PATH="projectRoot/src/main/scripts" mkdir -p "$ROOTPKG_PATH/pkgA" mkdir -p "$ROOTPKG_PATH/pkgB" mkdir -p "$SCRIPTS_PATH" # Touch bunch of `__init__.py` files touch "$ROOTPKG_PATH/__init__.py" touch "$ROOTPKG_PATH/pkgA/__init__.py" touch "$ROOTPKG_PATH/pkgB/__init__.py" # Create module `modA` in package `pkgA` cut -d'|' -f2- <<__HEREDOC > "$ROOTPKG_PATH/pkgA/modA.py" |def a(n): | """Computes square of a number.""" | return n * n | __HEREDOC # Create module `modB` in package `pkgB` cut -d'|' -f2- <<__HEREDOC > "$ROOTPKG_PATH/pkgB/modB.py" |from pkgRoot.pkgA.modA import a | |def b(n): | """Evaluates a boring quadratic polynomial.""" | return a(n) - n | __HEREDOC # Create a hello-world script in `scripts`: cut -d'|' -f2- <<__HEREDOC > "$SCRIPTS_PATH/entryPointScript.py" |#!/usr/bin/env python | |from pkgRoot.pkgB.modB import b | |if __name__ == "__main__": | print(f"Hello, world! 42 * 42 - 42 = {b(42)}") | __HEREDOC # Create a simple `build.py` build script for PyBuilder cut -d'|' -f2- <<__HEREDOC > "projectRoot/build.py" |from pybuilder.core import use_plugin | |use_plugin("python.core") |use_plugin("python.distutils") | |default_task = "publish" | __HEREDOC ################################################# # Directory tree construction finished, only # # debug output below this box. # ################################################# # show the layout of the generater result tree "projectRoot" # walk through each python file, show path and content find "projectRoot" -name "*.py" -print0 | \ while IFS= read -r -d $'\0' pathToFile do if [ -s "$pathToFile" ] then printf "=%.0s" {1..80} # thick horizontal line echo "" echo "$pathToFile" printf -- "-%.0s" {1..80} echo "" cat "$pathToFile" fi done ``` The simplest way that I've found to run the freshly built project was as follows (used from the directory that contains `projectRoot`): ``` virtualenv env source env/bin/activate cd projectRoot pyb cd target/dist/projectRoot-1.0.dev0/dist/ pip install projectRoot-1.0.dev0.tar.gz entryPointScript.py ``` This indeed successfully runs the script with all its dependencies on user-defined packages, and prints: ``` Hello, world! 42 * 42 - 42 = 1722 ``` but the whole procedure seems quite complicated. For comparison, in an analogous situation in SBT, I would just issue the single ``` run ``` command from the SBT-shell -- that's why the seven-step recipe above seems a bit suspicios to me. Is there something like a `pyb run` or `pyb exec` plugin that does the same, but does not require from me that I set up all these environments and install anything? What I'm looking for is the analogon of `sbt run` in SBT or `mvn exec:java` in Maven that will build everything, set up all the classpaths, and then run the class with the `main` method, without leaving any traces outside the project directory. Since there is essentially no difference between the source code and the output of the target, I'm probably missing some obvious way how to run the script. If the `PyBuilder` itself is not needed at all for that, it's fine too: all I want is to somehow get `Hello, world! 42 * 42 - 42 = 1722`-string printed in the terminal.
2018/04/09
[ "https://Stackoverflow.com/questions/49737459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2707792/" ]
Apparently the following workflow: * pyb publish * pip install .tar.gz * runMyScript.py * uninstall is exactly what is proposed by the creator of PyBuilder [in this talk](http://www.youtube.com/watch?v=iQU18hAjux4&t=14m42s). **Note that the linked video is from 2014. If someone can propose a more streamlined recently provided solution, I'll of course accept that.**
Create a task in build.py: ``` from sys import path from pybuilder.core import task @task def run(project): path.append("src/main/python") from test_pack import test_app test_app.main() ``` Then try: `pyb run`
5,025
16,650,680
The following was ported from the pseudo-code from the Wikipedia article on [Newton's method](http://en.wikipedia.org/wiki/Newton%27s_method): ``` #! /usr/bin/env python3 # https://en.wikipedia.org/wiki/Newton's_method import sys x0 = 1 f = lambda x: x ** 2 - 2 fprime = lambda x: 2 * x tolerance = 1e-10 epsilon = sys.float_info.epsilon maxIterations = 20 for i in range(maxIterations): denominator = fprime(x0) if abs(denominator) < epsilon: print('WARNING: Denominator is too small') break newtonX = x0 - f(x0) / denominator if abs(newtonX - x0) < tolerance: print('The root is', newtonX) break x0 = newtonX else: print('WARNING: Not able to find solution within the desired tolerance of', tolerance) print('The last computed approximate root was', newtonX) ``` **Question** Is there an automated way to calculate some form of `fprime` given some form of `f` in Python 3.x?
2013/05/20
[ "https://Stackoverflow.com/questions/16650680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/216356/" ]
A common way of approximating the derivative of `f` at `x` is using a finite difference: ``` f'(x) = (f(x+h) - f(x))/h Forward difference f'(x) = (f(x+h) - f(x-h))/2h Symmetric ``` The best choice of `h` depends on `x` and `f`: mathematically the difference approaches the derivative as h tends to 0, but the method suffers from loss of accuracy due to catastrophic cancellation if `h` is too small. Also x+h should be distinct from x. Something like `h = x*1e-8` (roughly `x` times the square root of machine epsilon) might be appropriate for your application. See also [implementing the derivative in C/C++](https://stackoverflow.com/questions/1559695/implementing-the-derivative-in-c-c). You can avoid approximating f' by using the [secant method](http://en.wikipedia.org/wiki/Secant_method). It doesn't converge as fast as Newton's, but it's computationally cheaper and you avoid the problem of having to calculate the derivative.
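A hedged sketch of the symmetric difference plugged into the question's Newton loop; `fprime_approx` and the choice `h=1e-6` are my own assumptions, not part of either answer:

```
def fprime_approx(f, x, h=1e-6):
    """Symmetric finite difference: O(h**2) truncation error,
    with h kept large enough to avoid catastrophic cancellation."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2 - 2

x = 1.0
for _ in range(20):
    x = x - f(x) / fprime_approx(f, x)
print(x)  # ~1.4142135623730951
```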
You can approximate `fprime` any number of ways. One of the simplest would be something like: ``` fprime = lambda x, dx=0.1: (f(x+dx) - f(x-dx))/(2*dx) ``` The idea here is to sample `f` around the point `x`. The sampling region (determined by `dx`) should be small enough that the variation in `f` over that region is approximately linear. The algorithm I've used is known as the midpoint (central difference) method. You could get more accuracy by using higher-order polynomial fits for most functions, but that would be more expensive to calculate. Of course, you'll always be more accurate and efficient if you know the analytical derivative.
5,026
21,397,757
Personally I think it's better to distribute .py files as these will then be compiled by the end-user's own python, which may be more patched. What are the pros and cons of distributing .pyc files versus .py files for a commercial, closed-source python module? In other words, are there any compelling reasons to distribute .pyc files? Edit: In particular, if the .py/.pyc is accompanied by a DLL/SO module which is compiled against a certain version of Python.
2014/01/28
[ "https://Stackoverflow.com/questions/21397757", "https://Stackoverflow.com", "https://Stackoverflow.com/users/906984/" ]
Close the unused file descriptors and it will work fine. In the innermost child: ``` close(f1[1]); ``` In the parent process: ``` close(f1[0]); ``` There is also a syntax error on the line where write is called; change it to ``` if (write(f1[1], M1, sizeof(M1)) < 0) ```
change your `if` statement to ``` if (write(f1[1], M1, sizeof(M1)) < 0) ``` instead of ``` if(write(f1[1], M1, sizeof(M1) < 0)) ```
5,028
2,177,250
I have a folder with 100k text files. I want to put files with over 20 lines in another folder. How do I do this in python? I used os.listdir, but of course, there isn't enough memory for even loading the filenames into memory. Is there a way to get maybe 100 filenames at a time? Here's my code: ``` import os import shutil dir = '/somedir/' def file_len(fname): f = open(fname,'r') for i, l in enumerate(f): pass f.close() return i + 1 filenames = os.listdir(dir+'labels/') i = 0 for filename in filenames: flen = file_len(dir+'labels/'+filename) print flen if flen > 15: i = i+1 shutil.copyfile(dir+'originals/'+filename[:-5], dir+'filteredOrigs/'+filename[:-5]) print i ``` And Output: ``` Traceback (most recent call last): File "filterimage.py", line 13, in <module> filenames = os.listdir(dir+'labels/') OSError: [Errno 12] Cannot allocate memory: '/somedir/' ``` Here's the modified script: ``` import os import shutil import glob topdir = '/somedir' def filelen(fname, many): f = open(fname,'r') for i, l in enumerate(f): if i > many: f.close() return True f.close() return False path = os.path.join(topdir, 'labels', '*') i=0 for filename in glob.iglob(path): print filename if filelen(filename,5): i += 1 print i ``` it works on a folder with fewer files, but with the larger folder, all it prints is "0"... Works on linux server, prints 0 on mac... oh well...
2010/02/01
[ "https://Stackoverflow.com/questions/2177250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183487/" ]
A couple thoughts. First, you might use the [`glob`](http://docs.python.org/library/glob.html) module to get smaller groups of files. Second, sorting by line count is going to be very time consuming, as you have to open every file and count lines. If you can partition by byte count, you can avoid opening the files by using the [`stat`](http://docs.python.org/library/stat.html) module. If it's crucial that the split happens at 20 lines, you can at least cut out large swaths of files by figuring out a minimum number of characters that a 20 line file of your type will have, and not opening any file smaller than that.
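A sketch combining both suggestions: lazy iteration with `glob.iglob` plus a cheap `os.stat` size pre-filter. The paths and the 40-byte minimum are assumptions, so adjust them for your data:

```
import glob
import os
import shutil

src = '/somedir/labels/'
dst = '/somedir/filteredOrigs/'
MIN_BYTES = 40  # assumption: any 20-line file here is at least this big

for path in glob.iglob(os.path.join(src, '*')):
    if os.stat(path).st_size < MIN_BYTES:
        continue                       # too small to hold 20 lines; skip without opening
    with open(path) as f:
        for n, _ in enumerate(f):
            if n >= 20:                # a 21st line exists -> more than 20 lines
                shutil.copy(path, dst)
                break
```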
``` import os,shutil os.chdir("/mydir/") numlines=20 destination = os.path.join("/destination","dir1") for file in os.listdir("."): if os.path.isfile(file): flag=0 for n,line in enumerate(open(file)): if n > numlines: flag=1 break if flag: try: shutil.move(file,destination) except Exception,e: print e else: print "%s moved to %s" %(file,destination) ```
5,029
50,916,340
I'm looking for some general advice on how to either re-write application code to be non-naive, or whether to abandon neo4j for another data storage model. This is not *only* "subjective", as it relates significantly to specific, correct usage of the neo4j driver in Python and why it performs the way it does with my code. Background: ----------- My team and I have been using neo4j to store graph-friendly data that is initially stored in Python objects. Originally, we were advised by a local/in-house expert to use neo4j, as it seemed to fit our data storage and manipulation/querying requirements. The data are always specific instances of a set of carefully-constructed ontologies. For example (pseudo-data): ``` Superclass1 -contains-> SubclassA Superclass1 -implements->SubclassB Superclass1 -isAssociatedWith-> Superclass2 SubclassB -hasColor-> Color1 Color1 -hasLabel-> string::"Red" ``` ...and so on, to create some rather involved and verbose hierarchies. For prototyping, we were storing these data as sequences of grammatical triples (subject->verb/predicate->object) using RDFLib, and using RDFLib's graph-generator to construct a graph. Now, since this information is just a complicated hierarchy, we just store it in some custom Python objects. We also do this in order to provide an easy API to others devs that need to interface with our core service. We hand them a Python library that is our Object model, and let them populate it with data, or, we populate it and hand it to them for easy reading, and they do what they want with it. To store these objects permanently, and to hopefully accelerate the writing and reading (querying/filtering) of these data, we've built custom object-mapping code that utilizes the official neo4j python driver to write and read these Python objects, recursively, to/from a neo4j database. The Problem: ------------ For large and complicated data sets (e.g. 15k+ nodes and 15k+ relations), the object relational mapping (ORM) portion of our code is too slow, and scales poorly. But neither I, nor my colleague are experts in databases or neo4j. I think we're being naive about how to accomplish this ORM. We began to wonder if it even made sense to use neo4j, when more traditional ORMs (e.g. SQL Alchemy) might just be a better choice. For example, the ORM commit algorithm we have now is a recursive function that commits an object like this (pseudo code): ``` def commit(object): for childstr in object: # For each child object child = getattr(object, childstr) # Get the actual object if attribute is <our object base type): # Open transaction, make nodes and relationship with session.begin_transaction() as tx: <construct Cypher query with: MERGE object (make object node) MERGE child (make its child node) MERGE object-[]->child (create relation) > tx.run(<All 3 merges>) commit(child) # Recursively write the child and its children to neo4j ``` Is it naive to do it like this? Would an OGM library like [Py2neo's OGM](http://py2neo.org/v3/ogm.html) be better, despite ours being customized? I've seen [this](https://stackoverflow.com/questions/8356626/orm-with-graph-databases-like-neo4j-in-python?rq=1) and similar questions that recommend this or that OGM method, but in [this](https://neo4j.com/blog/cypher-write-fast-furious/) article, it says not to use OGMs at all. Must we really just implement every method and benchmark for performance? 
It seems like there must be some best-practices (other than using the [batch IMPORT](https://neo4j.com/blog/bulk-data-import-neo4j-3-0/), which doesn't fit our use cases). And we've read through articles like those linked, and seen the various tips on writing better queries, but it seems better to step back and examine the case more generally before attempting to optimize code line-by line. Although it's clear that we can improve the ORM algorithm to some degree. Does it make sense to write and read large, deep hierarchical objects to/from neo4j using a recursive strategy like this? Is there something in Cypher, or the neo4j drivers that we're missing? Or is it better to use something like Py2neo's OGM? Is it best to just abandon neo4j altogether? The benefits of neo4j and Cypher are difficult to ignore, and our data *does* seem to fit well in a graph. Thanks.
2018/06/18
[ "https://Stackoverflow.com/questions/50916340", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1507854/" ]
You could use a capturing group or to not have `DataHelper.ExecuteProc` in matching result put it in lookbehind: ``` (?<=DataHelper\.ExecuteProc\(")[^\\"]*(?:\\.[^\\"]*)* ``` See live [demo here](https://regex101.com/r/i3AgFx/1) Breakdown: * `(?<=` Start of positive lookbehind + `DataHelper\.ExecuteProc\("` Match it but not consume it * `)` End of lookbehind * `[^\\"]*(?:\\.[^\\"]*)*` Match string enclosed in double qoutation marks
You can do it like this: ``` var pattern = @"\bDataHelper\..+?\(""(?<procedure>[^""]*?)"""; var result = Regex.Matches(input, pattern).Cast<Match>().Select(x => x.Groups["procedure"].Value).ToList(); ```
5,038
41,053,784
I am new to python and trying to implement graph data structure in Python. I have written this code, but i am not getting the desired result i want. Code: ``` class NODE: def __init__(self): self.distance=0 self.colournode="White" adjlist={} def addno(A,B): global adjlist adjlist[A]=B S=NODE() R=NODE() V=NODE() W=NODE() T=NODE() X=NODE() U=NODE() Y=NODE() addno(S,R) for keys in adjlist: print keys ``` I want the code to print {'S':R} on the final line but it is printing this: ``` <__main__.NODE instance at 0x00000000029E6888> ``` Can anybody guide me what am i doing wrong? Also what to do if i want to add another function call like addnode(S,E) and printing should be {S:[R,E]}
2016/12/09
[ "https://Stackoverflow.com/questions/41053784", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4825150/" ]
Your node needs to have a label to print. You can't use just the variable name; the node has no way of knowing the name of your variable. ``` class NODE: def __init__(self, name): self.name=name def __repr__(self): return self.name adjlist={} def addno(A,B): global adjlist adjlist[A]=B S=NODE('S') R=NODE('R') addno(S,R) print adjlist >>> {S: R} ``` However, a Python dict may have only one value for each key, so you won't be able to save `{S: R}` and `{S: V}` at the same time. Instead you will need to save an array of nodes: ``` class NODE: def __init__(self, name): self.name=name def __repr__(self): return self.name adjlist={} def addno(A,B): global adjlist if A not in adjlist: adjlist[A] = [] adjlist[A].append(B) S=NODE('S') R=NODE('R') V=NODE('V') W=NODE('W') addno(S,R) addno(S,V) addno(R,W) print adjlist >>> {S: [R, V], R: [W]} ``` As a side note, using unnecessary global variables is a bad habit. Instead make a class for the graph: ``` class Node: def __init__(self, name): self.name=name def __repr__(self): return self.name class Graph: def __init__(self): self.adjlist={} def addno(self, a, b): if a not in self.adjlist: self.adjlist[a] = [] self.adjlist[a].append(b) def __repr__(self): return str(self.adjlist) G=Graph() S=Node('S') R=Node('R') V=Node('V') W=Node('W') G.addno(S,R) G.addno(S,V) G.addno(R,W) print G >>> {R: [W], S: [R, V]} ```
You get that output because each key is an instance of the `NODE` class (you get that hint from the output of your program itself: `<__main__.NODE instance at 0x00000000029E6888>`). I think you are trying to implement an adjacency list for some graph algorithm. In those cases you will mostly need the `color` and `distance` of other nodes, which you can get by doing: ``` for keys in adjlist: print keys.colournode, keys.distance ```
5,039
64,024,941
I am doing object detection using the TensorFlow Object Detection API in Google Colab. This is my directory structure. ``` object_detection/ training/ exported_model/ pipeline.config model_main_tf2.py exporter_main_v2.py ``` I run the below for training. ``` !python model_main_tf2.py --model_dir=training --pipeline_config_path=pipeline.config ``` I run the below for exporting the model. > > !python exporter\_main\_v2.py > > --input\_type image\_tensor > > --pipeline\_config\_path pipeline.config > > --trained\_checkpoint\_dir training/ > > --output\_directory exported\_model > > > Neither of the above produces any error, but after running both I am not able to see the exported model in the desired directory, in my case (exported\_model). I don't understand what is wrong.
2020/09/23
[ "https://Stackoverflow.com/questions/64024941", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7907965/" ]
I found that although the training run didn't produce any errors, it was also not successful, because it didn't generate the files that should appear after successful training, like checkpoints. The `training/` directory was blank. [This](https://github.com/tensorflow/models/blob/master/research/object_detection/exporter_main_v2.py) file's documentation helped me figure out my problem. You can check it for further guidance.
I followed the [instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md) using export\_tflite\_graph\_tf2.py
5,040
45,934,942
I have just started using Tkinter and am trying to create a simple pop-up box in Python. I have copy-pasted some simple code from a website: ``` from Tkinter import * master = Tk() Label(master, text="First Name").grid(row=0) Label(master, text="Last Name").grid(row=1) e1 = Entry(master) e2 = Entry(master) e1.grid(row=0, column=1) e2.grid(row=1, column=1) mainloop( ) ``` This code is taking a really long time to run; it has been almost 5 minutes! Is it not possible to just run this snippet? Can anybody tell me how to use Tkinter? I am using Jupyter Notebook and Python version 2.7. I would request a solution for this version only.
2017/08/29
[ "https://Stackoverflow.com/questions/45934942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368577/" ]
``` from Tkinter import * def printData(firstName, lastName): print(firstName) print(lastName) root.destroy() def get_input(): firstName = entry1.get() lastName = entry2.get() printData(firstName, lastName) root = Tk() #Label 1 label1 = Label(root,text = 'First Name') label1.pack() label1.config(justify = CENTER) entry1 = Entry(root, width = 30) entry1.pack() label3 = Label(root, text="Last Name") label3.pack() label1.config(justify = CENTER) entry2 = Entry(root, width = 30) entry2.pack() button1 = Button(root, text = 'submit') button1.pack() button1.config(command = get_input) root.mainloop() ``` Copy and paste the above code into an editor, save it, and run it using the command ``` python sample.py ``` *Note: The above code is very basic. I have written it that way for you to understand.*
You can create a popup information window as follow: `showinfo("Window", "Hello World!")` If you want to create a real popup window with input mask, you will need to generate a new TopLevel mask and open a second window. ``` win = tk.Toplevel() win.wm_title("Window") label = tk.Label(win, text="User input") label.grid(row=0, column=0) button = ttk.Button(win, text="Done", command=win.destroy) button.grid(row=1, column=0) ```
5,041
60,985,999
This code works correctly in python 2.X version. I am trying to use the similar code in python version 3. The problem is that I do not want to use requests module. I need to make it work using "urllib3". ``` import requests import urllib event = {'url':'http://google.com', 'email':'abc@gmail.com', 'title':'test'} url = event['url'] if event['email']: email=event['email'] if event['title']: title=event['title'] url1 = urllib.parse.unquote(url) myfile=urllib.request.urlopen(url1) requests.post("https://api.mailgun.net/v3/xxx.mailgun.org/messages", auth=("api", "key-xxx"), files=[("attachment", myfile) ], data={"from": "Excited User <excited-user@example.com>", "to": email, "subject": title, "text": "Testing some awesomness with attachments!", "html": myfile}) ``` I am getting TypeError while trying to run this code: ``` import urllib3 event = {'url':'http://oksoft.blogspot.com', 'email':'abc@gmail.com', 'title':'test'} url = event['url'] if event['email']: email=event['email'] if event['title']: title=event['title'] url1 = urllib.parse.unquote(url) myfile=urllib.request.urlopen(url1) http = urllib3.PoolManager() url = "https://api.mailgun.net/v3/xxx.mailgun.org/messages" params={"from": "Excited User <excited-user@example.com>", "to": email, "subject": title, "text": "Testing some awesomness with attachments!", "html": myfile} http.request( "POST", url, headers={"Content-Type": "application/json", "api":"key-xxx"}, body= params ) ```
2020/04/02
[ "https://Stackoverflow.com/questions/60985999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/139150/" ]
You can do something like this: ``` Where x.RoleId == 2 && (loc == null || s.LocationId == loc) ```
Simply extract your managers and filter them if needed. That way you can as well easily apply more filters and code readability isn't hurt. ``` var managers = CSDB.Managers.AsQueryable(); if(loc > 0) managers = managers.Where(man => man.LocationId == loc); var myResult = from allocation in CSDB.Allocations join manager in managers on allocation.ManagerId equals manager.Id where allocation.RoleId == 2 select new { allocation.name, allocation.Date }; ```
5,046
32,829,504
In Python, is a mathematical operator classed as an integer? For example, why isn't this code working? ``` import random score = 0 randomnumberforq = (random.randint(1,10)) randomoperator = (random.randint(0,2)) operator = ['*','+','-'] answer = (randomnumberforq ,operator[randomoperator], randomnumberforq) useranswer = input(int(randomnumberforq)+int(operator[randomoperator])+ int(randomnumberforq)) if answer == useranswer: print('correct') else: print('wrong') ```
2015/09/28
[ "https://Stackoverflow.com/questions/32829504", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5080233/" ]
You can't just concatenate an operator to a couple of numbers and expect it to be evaluated. You could use `eval` to evaluate the final string. ``` answer = eval(str(randomnumberforq) + operator[randomoperator] + str(randomnumberforq)) ``` A better way to accomplish what you're attempting is to use the functions found in the `operator` module. By assigning the functions to a list, you can choose which one to call randomly: ``` import random from operator import mul, add, sub if __name__ == '__main__': score = 0 randomnumberforq = random.randint(1,10) randomoperator = random.randint(0,2) operator = [[mul, ' * '], [add, ' + '], [sub, ' - ']] answer = operator[randomoperator][0](randomnumberforq, randomnumberforq) useranswer = input(str(randomnumberforq) + operator[randomoperator][1] + str(randomnumberforq) + ' = ') if answer == useranswer: print('correct') else: print('wrong') ```
You try to convert a string to an integer, but it isn't a number: ``` int(operator[randomoperator]) ``` Your operators in the array "operator" are strings, which don't represent numbers and can't be converted to integer values. On the other hand, the input() function expects a string as its parameter value. So write: ``` ... = input(str(numberValue) + operatorString + str(numberValue)) ``` The + operator can be used to concat strings, but Python requires that the operands on both sides are strings. That's why I added the str() functions to cast the number values to strings.
5,049
26,453,920
My problem is that I'm trying to pass a `list` as a variable to a function, and I'd like to mutlti-thread the function processing. I can't seem to use `pool.map` because it only accepts iterables. I can't seem to use `pool.apply` because it seems to block the pool while it works, so I don't really understand how it allow mutli-threading at all (admittedly, I don't seem to understand anything about multi-threading). I tried `pool.apply_async`, but the program finishes in seconds, and only appears to process about 20000 total computations. Here's some psuedo-code for it. ``` import MySQLdb from multiprocessing import Pool def some_math(x, y): f(x[1], x[2], y[1], y[2]) return f def distance(x): x_distances = [] for y in all_y: distance = some_math(x, y) if distance > 1000000: continue else: x_distances.append(x[0], y[0],distance) mysql.executemany(sql_update, x_distances) mydb.commit() all_x = [] all_y = [] sql_x = 'SELECT id, lat, lng FROM table' sql_y = 'SELECT id, lat, lng FROM table' sql_update = 'INSERT INTO distances (id_x, id_y, distance) VALUES (%s, %s, %S)' cursor.execute(sql_x) all_x = cursor.fetchall() cursor.execute(sql_y) all_y = cursor.fetchall() p = Pool(4) for x in all_x: p.apply_async(distance, x) ``` OR, if using map: ``` p = Pool(4) for x in all_x: p.map(distance, x) ``` The error returns: Processing A for distances... ``` Traceback (most recent call last): File "./distance-house.py", line 94, in <module> p.map(range, row) File "/usr/lib/python2.7/multiprocessing/pool.py", line 251, in map return self.map_async(func, iterable, chunksize).get() File "/usr/lib/python2.7/multiprocessing/pool.py", line 558, in get raise self._value TypeError: 'float' object has no attribute '__getitem__' ``` I am trying to multi-thread a long computation - calculating a the distance between something like 10,000 points on a many-to-many basis. Currently, the process is taking several days, and I figure that multiprocessing the results could really improve the efficiency. I'm all ears for suggestions.
2014/10/19
[ "https://Stackoverflow.com/questions/26453920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3972123/" ]
You can use `pool.map`: ``` p = Pool(4) p.map(distance, all_x) ``` as per the first example in the [doc](https://docs.python.org/2/library/multiprocessing.html). It will do the iteration for you!
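A minimal, self-contained sketch of the `Pool.map` pattern; the worker here is a stand-in, since the real `distance` needs the database rows, and the `if __name__ == '__main__'` guard matters on platforms that spawn workers:

```
from multiprocessing import Pool

def distance(x):
    return x * x  # stand-in for the real many-to-many computation

if __name__ == '__main__':
    p = Pool(4)
    results = p.map(distance, [1.0, 2.0, 3.0])  # blocks until all workers finish
    p.close()
    p.join()
    print(results)  # [1.0, 4.0, 9.0]
```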
Another way to approach it is to pack your variables inside a tuple and unpack them inside the function. Example: ``` def Add(z): x,y = z return x + y a = [ 0 , 1, 2, 3] b = [ 5, 6, 7, 8] ab = (a,b) Add(ab) ```
5,052
27,830,428
I have been trying to compact my code for a primality test in python so that it makes use of list comprehensions, but for some reason it doesn't return the correct results: ``` def isPrime(n): if n > 1: for i in range(2, int(n ** 0.5) + 1): if n % i == 0: return False return True ``` That's the code for my current primality test, but I want to condense it: ``` def isPrime(n): if n > 1: return [False for i in range(2, int(n ** 0.5) + 1) if n % i == 0] return True ``` I tried the above, but it outputs all non-prime integers up to n. What am I doing wrong?
2015/01/07
[ "https://Stackoverflow.com/questions/27830428", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4430875/" ]
As you want `False` if **any** lesser number is a divisor, code it directly that way: ``` def isPrime(n): return n > 1 and not any(i for i in range(2, int(n ** 0.5) + 1) if n % i == 0) ``` (The `n > 1` guard matters: 1 is not prime, so the function must return `False` for it.) Note that this uses a **genexp**, not a **listcomp**, because that allows `any` to terminate the whole operation as soon as it finds any suitable `i` divisor and thus knows `n` cannot be prime. List comprehensions generate an in-memory list of all their items, while generator expressions yield items one at a time, and only as long as they're being asked for "the next one" (by a `for` loop, an accumulator such as `any` or `all`, or directly by the `next` built-in).
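A quick sanity check of the one-liner against a few known cases:

```
for n in [1, 2, 3, 4, 17, 18, 97, 100]:
    print(n, isPrime(n))
# 1 False, 2 True, 3 True, 4 False, 17 True, 18 False, 97 True, 100 False
```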
you can use `all`: ``` >>> def prime_check(n): ... if n > 1: ... return all(False for i in range(2, int(n ** 0.5) + 1) if n % i == 0) ... >>> prime_check(6) False >>> prime_check(23) True >>> prime_check(108) False >>> prime_check(111) False >>> prime_check(101) True ```
5,053
32,622,825
I need to get some numbers from this website <http://www.preciodolar.com/> But the data I need, takes a little time to load and shows a message of 'wait' until it completely loads. I used find all and some regular expressions to get the data I need, but when I execute, `python` gives me the 'wait' message that appears before the data loads. Is there a way to make python 'wait' until all data is loaded? my code looks like this, ``` import urllib.request from re import findall def divisas(): pag = urllib.request.urlopen('http://www.preciodolar.com/') html = str(pag.read()) brasil = findall('<td class="usdbrl_buy">(.*?)</td>',html) return brasil ```
2015/09/17
[ "https://Stackoverflow.com/questions/32622825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3435341/" ]
As you are using ASP.NET, you can use the following two options in Page Load. Option 1: `Request.ServerVariables["HTTP_REFERER"]` Note, though, that it is possible for browsers to block this value (leaving it empty). Option 2: You can check the `Request.UrlReferrer` of the current `HttpRequest`: it will usually contain the page the user is coming from (depends on the browser, though). Reference: [how do I determine where the user came from in asp.net?](https://stackoverflow.com/questions/3166113/how-do-i-determine-where-the-user-came-from-in-asp-net) <https://msdn.microsoft.com/en-us/library/system.web.httprequest.urlreferrer%28v=vs.110%29.aspx>
The Session\_Start event is not suitable for this kind of thing. Session\_Start runs when a user first enters your application; think of it like the first page load. You can use a query string parameter to determine where the user was redirected from. For example, if the user was redirected from sso.aspx to default.aspx, use a URL like this: default.aspx?previouspage=sso Then check the previouspage query string parameter at default.aspx to show or hide your panel.
5,055
48,996,494
I have two network interfaces (wifi and ethernet), both with internet access. Let's say my interfaces are `eth` (ethernet) and `wlp2` (wifi). I need specific requests to go through the `eth` interface and others through `wlp2`. Something like: ``` // Through "eth" request.post(url="http://myapi.com/store_ip", iface="eth") // Through "wlp2" request.post(url="http://myapi.com/log", iface="wlp2") ``` I'm using `requests`, but I can use `pycurl` or `urllib` if there isn't any way to do it with `requests`. [How to specify source interface in python requests module?](https://stackoverflow.com/questions/26335544/how-to-specify-source-interface-in-python-requests-module) refers to [Requests, bind to an ip](https://stackoverflow.com/questions/12585317/requests-bind-to-an-ip) and it doesn't work.
2018/02/26
[ "https://Stackoverflow.com/questions/48996494", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4585081/" ]
I found a way using `pycurl`. This works like a charm. ``` import pycurl from io import BytesIO import json def curl_post(url, data, iface=None): c = pycurl.Curl() buffer = BytesIO() c.setopt(pycurl.URL, url) c.setopt(pycurl.POST, True) c.setopt(pycurl.HTTPHEADER, ['Content-Type: application/json']) c.setopt(pycurl.TIMEOUT, 10) c.setopt(pycurl.WRITEFUNCTION, buffer.write) c.setopt(pycurl.POSTFIELDS, data) if iface: c.setopt(pycurl.INTERFACE, iface) c.perform() # Json response resp = buffer.getvalue().decode('UTF-8') # Check response is a JSON if not there was an error try: resp = json.loads(resp) except json.decoder.JSONDecodeError: pass buffer.close() c.close() return resp if __name__ == '__main__': dat = {"id": 52, "configuration": [{"eno1": {"address": "192.168.1.1"}}]} res = curl_post("http://127.0.0.1:5000/network_configuration/", json.dumps(dat), "wlp2") print(res) ``` I'm leaving the question opened hopping that someone can give an answer using `requests`.
Try changing the internal IP (192.168.0.200) to the corresponding iface in the code below. ``` import requests from requests_toolbelt.adapters import source def check_ip(inet_addr): s = requests.Session() iface = source.SourceAddressAdapter(inet_addr) s.mount('http://', iface) s.mount('https://', iface) url = 'https://emapp.cc/get_my_ip' resp = s.get(url) print(resp.text) if __name__ == '__main__': check_ip('192.168.0.200') ```
5,056
17,477,394
I'm trying to install the M2Crypto on Python26 in Windows, but I am getting the below error. > > **error**: command 'swig.exe' failed: No such file or directory > > > This error occurs both using the "Easy Install" or "PIP Install" command. Follows the Log: > > running build > > > running build\_py > > > running build\_ext > > > building 'M2Crypto.\_\_m2crypto' extension > > > swigging SWIG/\_m2crypto.i to SWIG/\_m2crypto\_wrap.c > > > swig.exe -python -IC:\Python26\include -IC:\Python26\PC > -Ic:\pkg\include -includeall -o SWIG/\_m2crypto\_wrap.c SWIG/\_m2crypto.i > > > error: command 'swig.exe' failed: No such file or directory > > > Any Help?
2013/07/04
[ "https://Stackoverflow.com/questions/17477394", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1347355/" ]
This worked for me: (using winpython2.7) ``` pip install M2CryptoWin32 ``` reference: <https://github.com/dsoprea/M2CryptoWin32>
Putting this in answer format: you could try installing a binary build from <http://chandlerproject.org/Projects/MeTooCrypto>. This is from mata's comment, which resolved the OP's issue.
5,061
48,888,239
Here is my image: ![enter image description here](https://i.stack.imgur.com/ffDLD.jpg) I want to find the center of mass in this image. I can find the approximate location of the center of mass by drawing two perpendicular lines as shown in this image: ![enter image description here](https://i.stack.imgur.com/d3yBp.jpg) I want to find it using an image processing tool in python. I have a little experience in the image processing library of python (scikit-image) but, I am not sure if this library could help finding the center of mass in my image. I was wondering if anybody could help me to do it. I will be happy if it is possible to find the center of mass in my image using any other library in python. Thanks in advance for your help!
2018/02/20
[ "https://Stackoverflow.com/questions/48888239", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8653226/" ]
[`skimage.measure.regionprops`](http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops) will do what you want. Here's an example: ``` import imageio as iio from skimage import filters from skimage.color import rgb2gray # only needed for incorrectly saved images from skimage.measure import regionprops image = rgb2gray(iio.imread('eyeball.png')) threshold_value = filters.threshold_otsu(image) labeled_foreground = (image > threshold_value).astype(int) properties = regionprops(labeled_foreground, image) center_of_mass = properties[0].centroid weighted_center_of_mass = properties[0].weighted_centroid print(center_of_mass) ``` On my machine and with your example image, I get `(228.48663375508113, 200.85290046969845)`. We can make a pretty picture: ``` import matplotlib.pyplot as plt from skimage.color import label2rgb colorized = label2rgb(labeled_foreground, image, colors=['black', 'red'], alpha=0.1) fig, ax = plt.subplots() ax.imshow(colorized) # Note the inverted coordinates because plt uses (x, y) while NumPy uses (row, column) ax.scatter(center_of_mass[1], center_of_mass[0], s=160, c='C0', marker='+') plt.show() ``` That gives me this output: [![eye center of mass](https://i.stack.imgur.com/0cCSn.png)](https://i.stack.imgur.com/0cCSn.png) You'll note that there's some bits of foreground that you probably don't want in there, like at the bottom right of the picture. That's a whole nother answer, but you can look at `scipy.ndimage.label`, `skimage.morphology.remove_small_objects`, and more generally at `skimage.segmentation`.
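Following up on that last point, a hedged sketch of one cleanup pass before measuring; the `min_size=500` threshold is an assumption chosen by eye, not something derived from the image:

```
from scipy import ndimage as ndi
from skimage.morphology import remove_small_objects

# Label connected foreground components, drop specks below min_size pixels,
# then recompute the centroid on the cleaned mask.
labeled, _ = ndi.label(labeled_foreground)
cleaned = remove_small_objects(labeled, min_size=500) > 0
properties = regionprops(cleaned.astype(int), image)
print(properties[0].centroid)
```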
You need to know about **[Image Moments](https://en.wikipedia.org/wiki/Image_moment)**. [Here](https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html) is a tutorial on how to use them with OpenCV and Python.
5,062
70,152,772
I'm using an AWS Lambda function (in Python) to connect to an Oracle database (RDS) using the cx\_Oracle library, but it is giving me the below error: "DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory". Steps I've followed: 1. Created a Python virtual environment and downloaded the cx\_Oracle library on an EC2 instance. 2. Uploaded the downloaded library to an S3 bucket and created a Lambda Layer with it. 3. Used this layer in the Lambda function to connect to the Oracle RDS. 4. Used the below command to connect to the DB: conn = cx\_Oracle.connect(user="*user-name*", password="*password*", dsn="*DB-Endpoint*:1521/*database-name*", encoding="UTF-8") Please help me in resolving this issue.
2021/11/29
[ "https://Stackoverflow.com/questions/70152772", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17394264/" ]
Set the environment variable `DPI_DEBUG_LEVEL` to the value `64` and then rerun your code. The debugging output should help you figure out what is being searched. Note that you need to have the 64-bit instant client installed as well!
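In a Lambda you can set the variable either in the function's configuration or, as a sketch, in code before the driver initializes (the credentials and DSN below are placeholders; the connect call is still expected to fail here, and the point is the search-path output it produces):

```
import os
os.environ["DPI_DEBUG_LEVEL"] = "64"  # must be set before cx_Oracle initializes

import cx_Oracle  # imported after setting the env var

# Placeholder credentials/DSN -- substitute your own; the debug output
# should list the locations searched for libclntsh.so.
conn = cx_Oracle.connect(user="user", password="pw", dsn="host:1521/service")
```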
The reason I faced this issue was that I just downloaded cx\_Oracle library. In order to connect to the Oracle database from the Lambda function, we need to download the Oracle client and libaio libraries as well and club them with cx\_Oracle to create a Lambda Layer. Once I followed these steps, I was able to connect to the Oracle database and query its tables. I've created a video for this process, so that others need not go through the issues I faced. Hope it will be helpful to all - <https://youtu.be/BYiueNog-TI>
5,065
33,337,302
this is a follow-up from [https://stackoverflow.com/questions/33336963/use-a-python-dictionary-to-insert-into-mysql/33337128#33337128](https://stackoverflow.com/questions/33336963/use-a-python-dictionary-to-insert-into-mysql/33337128#33337128/). ``` import pymysql conn = pymysql.connect(server, user , password, "db") cur = conn.cursor() ORFs={'E7': '562', 'E6': '83', 'E1': '865', 'E2': '2756 '} table="genome" cols = ORFs.keys() vals = ORFs.values() sql = "INSERT INTO %s (%s) VALUES(%s)" % ( table, ",".join(cols), ",".join(vals)) print sql print ORFs.values() cur.execute(sql) cur.close() conn.close() ``` Thanks to Xiaohen, my program works (i.e. it does not throw any errors), but when I go and check the mysql database, the data is not inserted. I noticed that the autoincrement ID column does increase with every failed attempt. So this suggests that I am at least making contact with the database? As always, any help is much appreciated EDIT: I included the output from `mysql> show create table genome;` ``` | genome | CREATE TABLE `genome` ( `ID` int(11) NOT NULL AUTO_INCREMENT, `state` char(255) DEFAULT NULL, `CG` text, `E1` char(25) DEFAULT NULL, `E2` char(25) DEFAULT NULL, `E6` char(25) DEFAULT NULL, `E7` char(25) DEFAULT NULL, PRIMARY KEY (`ID`) ) ENGINE=InnoDB AUTO_INCREMENT=15 DEFAULT CHARSET=latin1 | +--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 1 row in set (0.00 sec) ```
2015/10/26
[ "https://Stackoverflow.com/questions/33337302", "https://Stackoverflow.com", "https://Stackoverflow.com/users/751241/" ]
Think I figured it out. I will add the info here in case someone else comes across this question: I need to add `conn.commit()` to the script
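For reference, a sketch of the commit combined with a parameterized query, which also avoids building the VALUES list by hand (table and column names are taken from the question):

```
cols = list(ORFs.keys())
placeholders = ",".join(["%s"] * len(cols))
sql = "INSERT INTO genome (%s) VALUES (%s)" % (",".join(cols), placeholders)
cur.execute(sql, [ORFs[c] for c in cols])
conn.commit()  # without this, the INSERT is rolled back when the connection closes
```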
You can use ``` try: cur.execute(sql) except Exception, e: print e ``` If your code is wrong, the exception will tell you. There is also another problem: the cols and vals may not match up in order. The values should be ``` vals = [ORFs[col] for col in cols] ```
5,066
5,948,110
I have been using Python for a while now and I'm happy using it in most forms, but I am wondering which form is more Pythonic. Is it right to emulate objects and types, or is it better to subclass or inherit from these types? I can see advantages for both, and also the disadvantages. What's the correct method of doing this? Subclassing method ``` class UniqueDict(dict): def __init__(self, *args, **kwargs): dict.__init__(self, *args, **kwargs) def __setitem__(self, key, value): if key not in self: dict.__setitem__(self, key, value) else: raise KeyError("Key already exists") ``` Emulating method ``` class UniqueDict(object): def __init__(self, *args, **kwargs): self.di = dict(*args, **kwargs) def __setitem__(self, key, value): if key not in self.di: self.di[key] = value else: raise KeyError("Key already exists") ```
2011/05/10
[ "https://Stackoverflow.com/questions/5948110", "https://Stackoverflow.com", "https://Stackoverflow.com/users/462604/" ]
Key question you have to ask yourself here is: > > "How should my class change if the 'parent' class changes?" > > > Imagine new methods are added to `dict` which you don't override in your `UniqueDict`. If you want to express that **`UniqueDict` is simply a small derivation** in behaviour from `dict`'s behaviour, then you'd **go with inheritance** since you will get changes to the base class automatically. If you want to express that **`UniqueDict` kinda looks like a `dict` but actually isn't**, you should go with the **'emulation' mode**.
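One concrete wrinkle worth knowing before choosing (a sketch against the subclassing version from the question): on CPython, `dict.update` and the `dict(...)` constructor bypass an overridden `__setitem__`, so even the subclass does not get uniqueness enforced everywhere for free:

```
ud = UniqueDict()
ud['k'] = 1
try:
    ud['k'] = 2            # goes through the overridden __setitem__
except KeyError as e:
    print(e)               # 'Key already exists'

ud.update(k=3)             # C-level dict.update skips __setitem__ ...
print(ud['k'])             # ... so this silently prints 3
```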
Subclassing is better as you won't have to implement a proxy for every single dict method.
5,067
71,297,371
Ok so I am trying to mass format a large text document to convert ``` #{'000','001','002','003','004','005','006','007','008','009'} ``` into ``` #{'000':'001','002':'003','004':'005','006':'007','008':'009'} ``` using python and have my function working, however it will only work if I run it line by line. and was wondering how to get it to run for each line on my input so that it will work on a multi line document ``` with open("input") as core: a = core.read() K = '!' N = 12 res = '' for idx, ele in enumerate(a): if idx % N == 0 and idx != 0: res = res + K else: res = res + ele b = (str(res).replace(",",":").replace("!",",")) l = len(b) c = b[:l-1] d = c + "}" print(d) ``` here is the current result for a multiline text file ``` {'000':'001','002':'003','004':'005','006':'007','008':'009', {'001':'00,':'003':'00,':'005':'00,':'007':'00,':'009':'00,'} {'002':',03':'004':',05':'006':',07':'008':',09':'000':',01'} {'003','004':'005','006':'007','008':'009','000':'001','002'} ``` So Far I have tried ``` with open('input', "r") as a: for line in a: K = '!' N = 12 res = '' for idx, ele in enumerate(a): if idx % N == 0 and idx != 0: res = res + K else: res = res + ele b = (str(res)) l = len(b) c = b[:l-1] d = c + "}" print(d) ``` but no luck FOUND A SOLUTION ``` import re with open("input") as core: coords = core.read() sword = coords.replace("\n",",\n") dung = re.sub('(,[^,]*),', r'\1 ', sword).replace(",",":").replace(" ",",").replace(",\n","\n") print(dung) ``` I know my solution works, but i cant quite apply this to other situations where I am applying different formats based on the need. Its easy enough to work out how to format a single line of text as there is so much documentation out there. Does anybody know of any plugins or particular python elements where you can write your format function and then apply it to all lines. like a kind of applylines() extension instead of readlines()
2022/02/28
[ "https://Stackoverflow.com/questions/71297371", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18335232/" ]
Here is a possible solution: ``` result = [(str(dt.timetuple()[:6])[1:-1], s.split('_')[0]) for dt, s in OUTPUT] ```
> > Eventually I hope to pass the new list of tuples to a pandas dataframe. > > > You can use `.read_sql_query()` to pull the information directly into a DataFrame: ```py import pandas as pd import sqlalchemy as sa connection_url = sa.engine.URL.create( "mssql+pyodbc", username="scott", password="tiger^5HHH", host="192.168.0.199", database="test", query={ "driver": "ODBC Driver 18 for SQL Server", "TrustServerCertificate": "Yes", } ) engine = sa.create_engine(connection_url) table_name = "so71297370" # set up example environment with engine.begin() as conn: conn.exec_driver_sql(f"DROP TABLE IF EXISTS {table_name}") conn.exec_driver_sql(f"CREATE TABLE {table_name} (col1 datetime2, col2 nvarchar(50))") conn.exec_driver_sql(f"""\ INSERT INTO {table_name} (col1, col2) VALUES ('2003-03-26 15:12:15', '490002_space'), ('2003-03-27 16:13:14', '490002_space') """) # example df = pd.read_sql_query( # suffix '_space' is 6 characters in length f"SELECT col1, LEFT(col2, LEN(col2) - 6) AS col2 FROM {table_name}", engine, ) print(df) """ col1 col2 0 2003-03-26 15:12:15 490002 1 2003-03-27 16:13:14 490002 """ ```
5,070
20,054,030
I have been getting the below error while using pxssh to get into remote servers to run unix commands (like uptime):

```
Traceback (most recent call last):
  File "./ssh_pxssh.py", line 33, in <module>
    login_remote(hostname, username, password)
  File "./ssh_pxssh.py", line 12, in login_remote
    if not s.login(hostname, username, password):
  File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/pexpect/pxssh.py", line 278, in login
    raise ExceptionPxssh('could not synchronize with original prompt')
pexpect.pxssh.ExceptionPxssh: could not synchronize with original prompt
```

Line 33 is where I call this function in main. The function I am using is here:

```
def login_remote(hostname, username, password):
    s = pxssh.pxssh()
    s.force_password = True
    if not s.login(hostname, username, password, auto_prompt_reset=False):
        print("ssh to host :"+ host + " failed")
        print(str(s))
    else:
        print("SSH to remote host " + hostname + " successfull")
        s.sendline('uptime')
        s.prompt()
        print(s.before)
        s.logout()
```

The error does not come each time I run the script. Rather, it is intermittent. It comes 7 out of 10 times I run my script.
2013/11/18
[ "https://Stackoverflow.com/questions/20054030", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3005660/" ]
I have solved it by adding the **sync_multiplier** argument to the login function.

```
s.login(hostname, username, password, sync_multiplier=5, auto_prompt_reset=False)
```

Note that **sync_multiplier** is a communication timeout argument used to perform successful synchronization: it tries to read the prompt for at least **sync_multiplier** seconds. Worst-case performance for this method is **sync_multiplier** \* 3 seconds. I personally set sync_multiplier=2, but it depends on the communication speed of the system I work on.
I had the same problem when pxssh tried to log in over a very slow connection. The pexpect lib apparently was fooled by the remote motd prompt. This remote motd prompt contained the `uname -svr` output, which itself contained a # character. Apparently, pexpect saw it as a prompt. From that point, the lib was no longer in sync with the ssh session. The following workaround worked for me: just remove the # char inside /var/run/motd.dynamic (Debian), or in /var/run/motd (Ubuntu). Another solution is to ask ssh not to print the motd while logging in, but this is not working for me: I added PrintMotd no in /etc/ssh/sshd_config => not working. Another workaround: create a .hushlogin file in the home directory (~).
5,071
28,329,596
Please help me. I have the JSON string:

```
{"jsonrpc":"2.0","result":[{"hostid":"10158"}],"id":1}
```

I try to parse it with the command:

```
reference_id2=`echo "$reference_id" | python -c 'import json, sys; print json.load(sys.stdin)["result"]'`
```

and still get `[{u'hostid': u'10158'}]`. How can I get only 10158 (like in the example)? Thanks.

P.S. This doesn't work either:

```
reference_id2=`echo "$reference_id" | python -c 'import json, sys; print json.load(sys.stdin)["result"]["hostid"]'`
```
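(For reference, a minimal sketch of the indexing that extracts the value, since "result" holds a JSON array:)

```
import json, sys

# "result" is a list, so take its first element before reading "hostid"
print(json.load(sys.stdin)["result"][0]["hostid"])
```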
2015/02/04
[ "https://Stackoverflow.com/questions/28329596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3731374/" ]
You keep two complete copies of the file in memory at the same time, `@lines` and `$lines`. You might consider instead: ``` open (my $FH, "<", $file) or die "Can't open $file for read: $!"; $FH->input_record_separator(undef); # slurp entire file my $lines = <$FH>; close $FH or die "Cannot close $file: $!"; ``` On sufficiently obsolete versions of Perl you may need to explicitly `use IO::Handle`. Note also: I've switched to lexical file handles from the bare word versions. I presume you aren't striving for compatibility with Perl v4. Of course if cutting your memory requirements by half isn't enough, you could always iterate through the file...
Working with XML using regexes is error-prone and inefficient, as code which slurps the whole file as a string shows. To deal with XML you should be using an XML parser. In particular, you want a SAX parser which will work on the XML a piece at a time, as opposed to a DOM parser which must read the whole file. I'm going to answer your question as is because there's some value in knowing how to work line by line. If you can avoid it, don't read a whole file into memory. Work line by line. Your task seems to be to remove a handful of lines from an XML file for reasons. Everything between `<DOCUMENT>\n<TYPE>EX-` and `<\/DOCUMENT>`. We can do that line-by-line by keeping a bit of state. ``` use autodie; open (my $infh, "<", $file); open (my $outfh, ">", "$file.tmp"); my $in_document = 0; my $in_type_ex = 0; while( my $line = <$infh> ) { if( $line =~ m{<DOCUMENT>\n}i ) { $in_document = 1; next; } elsif( $line =~ m{</DOCUMENT>}i ) { $in_document = 0; next; } elsif( $line =~ m{<TYPE>EX-}i ) { $in_type_ex = 1; next; } elsif( $in_document and $in_type_ex ) { next; } else { print $outfh $line; } } rename "$file.tmp", $file; ``` Using a temp file allows you to read the file while you construct its replacement. Of course this will fail if the XML document isn't formatted just so (I helpfully added the `/i` flag to the regexes to allow lower case tags); you should really use a SAX XML parser.
5,072
62,585,876
Our python Dataflow pipeline works locally but not when deployed using the Dataflow managed service on Google Cloud Platform. It doesn't show signs that it is connected to the PubSub subscription. We have tried subscribing to both the subscription and the topic; neither of them worked. The messages accumulate in the PubSub subscription and the Dataflow pipeline doesn't show signs of being called or anything. We have double-checked that the project is the same. Any directions on this would be very much appreciated. Here is the code to connect to a pull subscription: ```py with beam.Pipeline(options=options) as p: something = p | "ReadPubSub" >> beam.io.ReadFromPubSub( subscription="projects/PROJECT_ID/subscriptions/cloudflow" ) ``` Here are the options used: ```py options = PipelineOptions() file_processing_options = PipelineOptions().view_as(FileProcessingOptions) if options.view_as(GoogleCloudOptions).project is None: print(sys.argv[0] + ": error: argument --project is required") sys.exit(1) options.view_as(SetupOptions).save_main_session = True options.view_as(StandardOptions).streaming = True ``` The PubSub subscription has this configuration: ``` Delivery type: Pull Subscription expiration: Subscription expires in 31 days if there is no activity. Acknowledgement deadline: 57 Seconds Subscription filter: — Message retention duration: 7 Days Retained acknowledged messages: No Dead lettering: Disabled Retry policy : Retry immediately ```
2020/06/25
[ "https://Stackoverflow.com/questions/62585876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6019494/" ]
Very late answer, it may still help someone else. I had the same problem, and solved it like this: 1. Thanks to user Paramnesia1 who wrote [this](https://www.reddit.com/r/googlecloud/comments/srh28m/dataflow_pipeline_not_consuming_messages_from/) answer, I figured out that I was not observing all the logs on Logs Explorer. Some default job_name query filters were preventing me from that. I am quoting & clarifying the steps to follow to be able to see all logs:

> Open the Logs tab in the Dataflow Job UI, section Job Logs
>
> Click the "View in Logs Explorer" button
>
> In the new Logs Explorer screen, in your Query window, remove all the existing "logName" filters, keep only resource.type and resource.labels.job_id

2. Now you will be able to see all the logs and investigate your error further. In my case, I was getting some 'Syncing Pod' errors, which were due to importing the wrong data file in my setup.py.
I think for pulling from a subscription we need to pass the with_attributes parameter as True. with_attributes: True means output elements will be PubsubMessage objects; False means output elements will be of type bytes (message data only). Found a similar one here: [When using Beam IO ReadFromPubSub module, can you pull messages with attributes in Python? It's unclear if its supported](https://stackoverflow.com/questions/55320682/when-using-beam-io-readfrompubsub-module-can-you-pull-messages-with-attributes)
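A minimal sketch of that parameter applied to the pipeline from the question (subscription path copied from the question; `options` is assumed to be defined as there):

```
import apache_beam as beam

with beam.Pipeline(options=options) as p:
    messages = p | "ReadPubSub" >> beam.io.ReadFromPubSub(
        subscription="projects/PROJECT_ID/subscriptions/cloudflow",
        with_attributes=True,  # emit PubsubMessage objects instead of raw bytes
    )
```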
5,078
22,488,763
I have been trying to insert data into the database using the following code in Python: ``` import sqlite3 as db conn = db.connect('insertlinks.db') cursor = conn.cursor() db.autocommit(True) a="asd" b="adasd" cursor.execute("Insert into links (link,id) values (?,?)",(a,b)) conn.close() ``` The code runs without any errors, but no update to the database takes place. I tried adding `conn.commit()` but it gives an error saying module not found. Please help.
2014/03/18
[ "https://Stackoverflow.com/questions/22488763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2923505/" ]
You do have to commit after inserting: ``` cursor.execute("Insert into links (link,id) values (?,?)",(a,b)) conn.commit() ``` or use the [connection as a context manager](http://docs.python.org/2/library/sqlite3.html#using-the-connection-as-a-context-manager): ``` with conn: cursor.execute("Insert into links (link,id) values (?,?)", (a, b)) ``` or set autocommit correctly by setting the `isolation_level` keyword parameter to the `connect()` method to `None`: ``` conn = db.connect('insertlinks.db', isolation_level=None) ``` See [Controlling Transactions](http://docs.python.org/2/library/sqlite3.html#controlling-transactions).
It may be a bit late, but setting `autocommit = true` saved my time! Especially if you have a script to run some bulk action such as `update/insert/delete`... **Reference:** <https://docs.python.org/2/library/sqlite3.html#sqlite3.Connection.isolation_level> This is the way I usually have it in my scripts: ``` def get_connection(): conn = sqlite3.connect('../db.sqlite3', isolation_level=None) cursor = conn.cursor() return conn, cursor def get_jobs(): conn, cursor = get_connection() if conn is None: # DatabaseError is assumed to be defined/imported elsewhere raise DatabaseError("Could not get connection") ``` I hope it helps you!
5,079
51,696,395
I'm trying to install gogle-assistant-sdk on Windows 10, and I'm getting a weird error which I can't understand. After installing python for all users and setting ENV variables when i run this command - ``` py -m pip install google-assistant-sdk[samples] ``` I got following error - ``` Command ""C:\Program Files (x86)\Python37-32\python.exe" -u -c "import setuptools, tokenize;__file__='C:\\Users\\ramji\\AppData\\Local\\Temp\\pip- install-p29c8ggl\\grpcio\\setup.py';f=getattr(tokenize, 'open', open) (__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\ramji\AppData\Local\Temp\pip- record-7znlf3rz\install-record.txt --single-version-externally-managed -- compile" failed with error code 1 in C:\Users\ramji\AppData\Local\Temp\pip- install-p29c8ggl\grpcio\ ``` Detailed output - ``` During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\ramji\AppData\Local\Temp\pip-install-p29c8ggl\grpcio\src\python\grpcio\commands.py", line 292, in build_extensions build_ext.build_ext.build_extensions(self) File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build_ext.py", line 448, in build_extensions self._build_extensions_serial() File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\command\build_ext.py", line 199, in build_extension _build_ext.build_extension(self, ext) File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build_ext.py", line 533, in build_extension depends=ext.depends) File "C:\Program Files (x86)\Python37-32\lib\distutils\_msvccompiler.py", line 345, in compile self.initialize() File "C:\Program Files (x86)\Python37-32\lib\distutils\_msvccompiler.py", line 238, in initialize vc_env = _get_vc_env(plat_spec) File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 185, in msvc14_get_vc_env return EnvironmentInfo(plat_spec, vc_min_ver=14.0).return_env() File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 844, in __init__ self.si = SystemInfo(self.ri, vc_ver) File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 486, in __init__ self.vc_ver = vc_ver or self._find_latest_available_vc_ver() File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 493, in _find_latest_available_vc_ver raise distutils.errors.DistutilsPlatformError(err) distutils.errors.DistutilsPlatformError: Microsoft Visual C++ 14.0 is required. 
Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\ramji\AppData\Local\Temp\pip-install-p29c8ggl\grpcio\setup.py", line 348, in <module> cmdclass=COMMAND_CLASS, File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\__init__.py", line 129, in setup return distutils.core.setup(**attrs) File "C:\Program Files (x86)\Python37-32\lib\distutils\core.py", line 148, in setup dist.run_commands() File "C:\Program Files (x86)\Python37-32\lib\distutils\dist.py", line 966, in run_commands self.run_command(cmd) File "C:\Program Files (x86)\Python37-32\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\command\install.py", line 61, in run return orig.install.run(self) File "C:\Program Files (x86)\Python37-32\lib\distutils\command\install.py", line 545, in run self.run_command('build') File "C:\Program Files (x86)\Python37-32\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\Program Files (x86)\Python37-32\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build.py", line 135, in run self.run_command(cmd_name) File "C:\Program Files (x86)\Python37-32\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\Program Files (x86)\Python37-32\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\command\build_ext.py", line 78, in run _build_ext.run(self) File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build_ext.py", line 339, in run self.build_extensions() File "C:\Users\ramji\AppData\Local\Temp\pip-install-p29c8ggl\grpcio\src\python\grpcio\commands.py", line 295, in build_extensions support.diagnose_build_ext_error(self, error, formatted_exception) File "C:\Users\ramji\AppData\Local\Temp\pip-install-p29c8ggl\grpcio\src\python\grpcio\support.py", line 109, in diagnose_build_ext_error "backtrace).\n\n{}".format(formatted)) commands.CommandError: We could not diagnose your build failure. If you are unable to proceed, please file an issue at http://www.github.com/grpc/grpc with `[Python install]` in the title; please attach the whole log (including everything that may have appeared above the Python backtrace). 
Traceback (most recent call last): File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 490, in _find_latest_available_vc_ver return self.find_available_vc_vers()[-1] IndexError: list index out of range During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\ramji\AppData\Local\Temp\pip-install-p29c8ggl\grpcio\src\python\grpcio\commands.py", line 292, in build_extensions build_ext.build_ext.build_extensions(self) File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build_ext.py", line 448, in build_extensions self._build_extensions_serial() File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\command\build_ext.py", line 199, in build_extension _build_ext.build_extension(self, ext) File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build_ext.py", line 533, in build_extension depends=ext.depends) File "C:\Program Files (x86)\Python37-32\lib\distutils\_msvccompiler.py", line 345, in compile self.initialize() File "C:\Program Files (x86)\Python37-32\lib\distutils\_msvccompiler.py", line 238, in initialize vc_env = _get_vc_env(plat_spec) File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 185, in msvc14_get_vc_env return EnvironmentInfo(plat_spec, vc_min_ver=14.0).return_env() File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 844, in __init__ self.si = SystemInfo(self.ri, vc_ver) File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 486, in __init__ self.vc_ver = vc_ver or self._find_latest_available_vc_ver() File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 493, in _find_latest_available_vc_ver raise distutils.errors.DistutilsPlatformError(err) distutils.errors.DistutilsPlatformError: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools ---------------------------------------- Command ""C:\Program Files (x86)\Python37-32\python.exe" -u -c "import setuptools, tokenize;__file__='C:\\Users\\ramji\\AppData\\Local\\Temp\\pip- install-p29c8ggl\\grpcio\\setup.py';f=getattr(tokenize, 'open', open) (__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\ramji\AppData\Local\Temp\pip- record-7znlf3rz\install-record.txt --single-version-externally-managed -- compile" failed with error code 1 in C:\Users\ramji\AppData\Local\Temp\pip- install-p29c8ggl\grpcio\ ``` I even tried executing this command on another Windows 10 machine and I got the same result. Does anyone know what is the problem and solution for it? Thanks and Regards.
2018/08/05
[ "https://Stackoverflow.com/questions/51696395", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6007248/" ]
Try this one: in `platforms/android/cordova-safe/starter-conceal.gradle`, change `compile('com.facebook.conceal:conceal:1.0.0@aar')` to `compile('com.facebook.conceal:conceal:2.0.1@aar')`. This has worked for me.
Open `platforms/android/cordova-safe/starter-conceal.gradle`, then update the version of **com.facebook.conceal:conceal** from **1.0.0** to **1.1.3**, so the code should now be ``` dependencies { compile('com.facebook.conceal:conceal:1.1.3@aar') { transitive = true } } ```
5,080
48,644,767
I'm looking at [_math.c](https://github.com/python/cpython/blob/master/Modules/_math.c) in git (line 25): ``` #if !defined(HAVE_ACOSH) || !defined(HAVE_ASINH) static const double ln2 = 6.93147180559945286227E-01; static const double two_pow_p28 = 268435456.0; /* 2**28 */ ``` and I noticed that the ln2 value is different from the value [wolframalpha](https://www.wolframalpha.com/input/?i=ln(2)) gives for ln2. (The bold part is the difference.) ln2 = 0.693147180559945**286227** (cpython) ln2 = 0.693147180559945**3094172321214581** (wolframalpha) ln2 = 0.693147180559945**309417232121458** (wikipedia) So my question is: why is there a difference? What am I missing?
2018/02/06
[ "https://Stackoverflow.com/questions/48644767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4828285/" ]
As user2357112 noted, this code came from FDLIBM. That was carefully written for IEEE-754 machines, where C doubles have 53 bits of precision. It doesn't really care what the actual log of 2 is, but cares a whole lot about the best 53-bit approximation to `log(2)`. To reproduce the intended 53-bit-precise value, [17 decimal digits would have sufficed](http://www.exploringbinary.com/number-of-digits-required-for-round-trip-conversions/). So why did they use 21 decimal digits instead? My guess: 21 decimal digits is the minimum needed to guarantee that the converted result will be correct to 64 bits of precision. Which *may* have been an issue at the time, if a compiler somehow decided to convert the literal to a Pentium's 80-bit float format (which has 64 bits of precision). So they displayed the 53-bit result with enough decimal digits so that *if* it were converted to a binary float format with 64 bits of precision, the trailing 11 bits (=64-53) would all be zeroes, thus ensuring they'd be working with the 53-bit value they intended from the start. ``` >>> import mpmath >>> x = mpmath.log(2) >>> x mpf('0.69314718055994529') >>> mpmath.mp.prec = 64 >>> y = mpmath.mpf("0.693147180559945286227") >>> x == y True >>> y mpf('0.693147180559945286227') ``` In English, `x` is the 53-bit precise value of `log(2)`, and `y` is the result of converting the decimal string in the code to a binary float format with 64 bits of precision. They're identical. In current reality, I expect all compilers now convert the literal to the native IEEE-754 double format, with 53 bits of precision. Either way, the code ensures the best 53-bit approximation to `log(2)` will be used.
Python seems wrong, although I'm not sure whether it is an oversight or has a deeper meaning. The explanation of BlackJack seems reasonable, but I don't understand why they would give additional digits that are wrong. You can check this yourself by using the formula under [More efficient series](https://en.wikipedia.org/wiki/Logarithm#Power_series). In Mathematica, you can calculate it up to 70 (35 summands) with ``` log2 = 2*Sum[1/i*(1/3)^i, {i, 1, 70, 2}] (* 79535292197135923776615186805136682215642574454974413288086/ 114745171628462663795273979107442710223059517312975273318225 *) ``` With `N[log2,30]` you get the correct digits ``` 0.693147180559945309417232121458 ``` which supports the correctness of Wikipedia and W|A. If you like, you can do the same calculation for machine-precision numbers. In Mathematica, this usually means `double`. ``` logC = Compile[{{z, _Real, 0}}, 2.0*Sum[1/i*((z - 1)/(z + 1))^i, {i, 1, 100, 2}] ] ``` Note that this code gets completely compiled to a normal iteration and does not use some error-reducing summation scheme. So there is no magical compiled `Sum` function. This gives on my machine: ``` logC[2]//FullForm (* 0.6931471805599451` *) ``` and is correct up to the digits you pointed out. This has the precision that was suggested by BlackJack ``` $MachinePrecision (* 15.9546 *) ``` Edit ---- As pointed out in comments and answers, the value you see in `_math.c` might be the 53 bit representation ``` digits = RealDigits[log2, 2, 53]; N[FromDigits[digits, 2], 21] (* 0.693147180559945286227 *) ```
5,081
73,348,659
I've recently had to implement a simple bruteforce software in python, and I was getting terrible execution times (even for a O(n^2) time complexity), topping 10 minutes of runtime for a total of 3700 \* 11125 \* 2 = 82325000 access operations on numpy arrays (intel i5 4300U). I'm talking about access operations because I initialized all the arrays beforehand (out of the bruteforce loops) and then I just recycle them by overwriting without reallocating anything. I get that bruteforce algorithms are supposed to be very slow, but 82 million accesses on contiguous-memory float arrays should not take 10 minutes even on the oldest of cpus... After some research I found this post: [Why is numpy.array so slow?](https://stackoverflow.com/questions/6559463/why-is-numpy-array-so-slow), which led me to think that maybe numpy arrays with a length of 3700 are too small to overcome some sort of overhead (which I do not understand since, as I said, I recycle and do not reallocate my arrays), therefore I tried substituting lists for arrays and... voilà, run times were down to a minute (55 seconds), a 10x decrease! But now I'm kinda baffled as to why on earth a list can be faster than an array, but more importantly, where is the threshold? The cited post said that numpy arrays are very efficient on large quantities of data, but where does "a lot of data" become "enough data" to actually get advantages? Is there a safe way to determine whether to use arrays or lists? Should I worry about writing my functions twice, once for each case? For anyone wondering, the original script was something like (list version, mockup data): ``` X = [0] * 3700 #list of 3700 elements y = [0] * 3700 combinations = [(0, 0)] * 11125 grg_results = [[0, 0, 0]] * len(combinations) rgr_results = [[0, 0, 0]] * len(combinations) grg_temp = [100] * (3700 + 1) rgr_temp = [100] * (3700 + 1) for comb in range(len(combinations)): pivot_a = combinations[comb][0] pivot_b = combinations[comb][1] for i in range(len(X)): _x = X[i][0] _y = y[i][0] if _x < pivot_a: grg_temp[i + 1] = _y * grg_temp[i] rgr_temp[i + 1] = (2 - _y) * rgr_temp[i] elif _x >= pivot_a and _x <= pivot_b: grg_temp[i + 1] = (2 - _y) * grg_temp[i] rgr_temp[i + 1] = _y * rgr_temp[i] else: grg_temp[i + 1] = _y * grg_temp[i] rgr_temp[i + 1] = (2 - _y) * rgr_temp[i] grg_results[comb][0] = pivot_a grg_results[comb][1] = pivot_b rgr_results[comb][0] = pivot_a rgr_results[comb][1] = pivot_b grg_results[comb][2] = metrics[0](grg_temp) rgr_results[comb][2] = metrics[0](rgr_temp) ```
2022/08/14
[ "https://Stackoverflow.com/questions/73348659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17474667/" ]
Let's do some simple list and array comparisons. Make a list of 0s (as you do): ``` In [108]: timeit [0]*1000 2.83 µs ± 0.399 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) ``` Make an array from that list - a lot more time: ``` In [109]: timeit np.array([0]*1000) 84.9 µs ± 103 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) ``` Make the array of 0s in the correct numpy way: ``` In [110]: timeit np.zeros(1000) 735 ns ± 0.727 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) ``` Now sum all the elements of the list: ``` In [111]: %%timeit alist = [0]*1000 ...: sum(alist) 5.74 µs ± 215 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) ``` Sum of the array: ``` In [112]: %%timeit arr = np.zeros(1000) ...: arr.sum() 6.41 µs ± 26.5 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) ``` In this case the list is a bit faster; but make a much larger list/array: ``` In [113]: %%timeit alist = [0]*100000 ...: sum(alist) 545 µs ± 17.9 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [114]: %%timeit arr = np.zeros(100000) ...: arr.sum() 56.7 µs ± 37 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) ``` The list sum scales O(n); the array scales much better (it has a higher "setup", but lower O(n) dependency). The iteration is done in c code. Add 1 to all elements of the list: ``` In [115]: %%timeit alist = [0]*100000 ...: [i+1 for i in alist] 4.64 ms ± 168 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) ``` A similar list comprehension on the array is slower: ``` In [116]: %%timeit arr = np.zeros(100000) ...: np.array([i+1 for i in arr]) 24.2 ms ± 37.6 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) ``` But doing a whole-array operation (where it iteration and addition is done in `c` code): ``` In [117]: %%timeit arr = np.zeros(100000) ...: arr+1 64.1 µs ± 85.3 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) ``` For smaller size the [115] comprehension will be faster; there's no clear threshold.
These are the 4 main advantages of an ndarray as far as I know: 1. It uses less memory per element because it stores raw homogeneous numeric values contiguously instead of pointers to full Python objects (e.g. 1 byte for an int8 element instead of an 8-byte pointer). Allowing only homogeneous numeric data types also leads to an increase in performance. 2. Slicing doesn't copy the array (which is a slow operation); it just returns a view on the same array. 3. The same goes for joining and splitting arrays (no copy) and other array operations. 4. Some operations, like adding the values of two arrays together, are written in C and are of course faster as a result. Conclusion: Manually looping over an ndarray is not fast, as there is no optimization implemented for that. A numpy array offers memory and execution efficiency, and not to forget that the array serves as an interchange format for many existing libraries. Sources: <https://pythoninformer.com/python-libraries/numpy/advantages/> , <https://numpy.org/devdocs/user/absolute_beginners.html>
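A minimal demo of point 2, showing that array slicing returns a view while list slicing copies:

```
import numpy as np

a = np.zeros(5)
v = a[1:3]   # a view into the same buffer, no copy
v[:] = 7.0
print(a)     # [0. 7. 7. 0. 0.] -- the original array changed

b = [0] * 5
s = b[1:3]   # list slicing makes a copy
s[0] = 7
print(b)     # [0, 0, 0, 0, 0] -- the original list is unchanged
```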
5,083
22,425,567
I'm using [Loggly](https://www.loggly.com/) in order to have a centralized logs aggregator for my app running on AWS (Elastic beanstalk). However I'm not able to save my application logs using the Python logging library and the django logging configuration. In my Loggly control panel I can see a lot of logs coming from the underlying OS and software stack of my EC2 instance, but specific logs from my app are not displayed and I don't understand why. I configured Loggly by configuring RSYSLOG on my EC2 instance (using the automated python script provided by Loggly itself), then I defined the following in my django settings: ``` LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'standard': { 'format': '[%(asctime)s] [%(levelname)s] [%(name)s:%(lineno)s] %(message)s', 'datefmt': '%d/%b/%Y %H:%M:%S' }, 'loggly': { 'format': 'loggly: %(message)s', }, }, 'filters': { 'require_debug_false': { '()': 'django.utils.log.RequireDebugFalse' } }, 'handlers': { 'mail': { 'level': 'ERROR', 'filters': ['require_debug_false'], 'class': 'django.utils.log.AdminEmailHandler', 'formatter': 'standard', }, 'syslog': { 'level': 'INFO', 'class': 'logging.handlers.SysLogHandler', 'facility': 'local5', 'formatter': 'loggly', }, 'cygora': { 'level': 'DEBUG', 'class': 'logging.handlers.RotatingFileHandler', 'filename': '/tmp/cygora.log', 'maxBytes': 1024 * 1024 * 5, # 5 mb 'backupCount': 10, 'formatter': 'standard', }, 'django': { 'level': 'INFO', 'class': 'logging.handlers.RotatingFileHandler', 'filename': '/tmp/django.log', 'maxBytes': 1024 * 1024 * 5, # 5 mb 'backupCount': 10, 'formatter': 'standard', }, 'celery': { 'level': 'DEBUG', 'class': 'logging.handlers.RotatingFileHandler', 'filename': '/tmp/celery.log', 'maxBytes': 1024 * 1024 * 5, # 5 mb 'backupCount': 10, 'formatter': 'standard', }, }, 'loggers': { 'loggly': { 'handlers': ['syslog'], 'propagate': True, 'format': 'loggly: %(message)s', 'level': 'DEBUG', }, 'django': { 'handlers': ['syslog', 'django'], 'level': 'WARNING', 'propagate': True, }, 'django.db.backends': { 'handlers': ['syslog', 'django'], 'level': 'INFO', 'propagate': True, }, 'django.request': { 'handlers': ['syslog', 'mail', 'django'], 'level': 'INFO', 'propagate': True, }, 'celery': { 'handlers': ['syslog', 'mail', 'celery'], 'level': 'DEBUG', 'propagate': True, }, 'com.cygora': { 'handlers': ['syslog', 'cygora'], 'level': 'INFO', 'propagate': True, }, } } ``` In my classes I use the "standard" approach of having a module-level logger: ``` import logging logger = logging.getLogger(__name__) logger.info('This message is not displayed on Loggly!! :(') ``` but it doesn't work, nor does using: ``` import logging logger = logging.getLogger('loggly') logger.info('This message is not displayed on Loggly!! :(') ``` Any idea? (Is there anyone using Django + Loggly with RSYSLOG?)
2014/03/15
[ "https://Stackoverflow.com/questions/22425567", "https://Stackoverflow.com", "https://Stackoverflow.com/users/267719/" ]
The problem is either in the local rsyslog service *receiving* the logs or in *sending* them. Your `LOGGING` setting is solid, but since you are taking control of everything (like the Django loggers) you should set `'disable_existing_loggers': True`. (Minor point: you can drop 'format' from the `loggly` loggers; the syslog handler will do that with its 'formatter'.) Following the Loggly Django [example](http://community.loggly.com/customer/portal/articles/1256304-logging-from-python#setup-django), there are two steps left. 1. Setup Syslog with these [instructions](http://community.loggly.com/customer/portal/articles/1271671-rsyslog-basic-configuration). You've done this and it sounds like it's working. 2. Make sure the service is listening by uncommenting these lines in `rsyslog.conf`:

```
# provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
```

Verify rsyslog's config with `rsyslogd -N1` and restart the service. Rsyslog [troubleshooting](http://www.rsyslog.com/doc/v7-stable/troubleshooting/troubleshoot.html) mentions looking at the rsyslog log and running the service interactively; hopefully you don't have to go to those depths. Look at Loggly's [gotchas](http://community.loggly.com/customer/portal/articles/1271671-rsyslog-basic-configuration#gotchas) section first. Start a Django shell and test it (which you're already doing—nice work!). ``` > manage.py shell import logging logger = logging.getLogger('loggly') logger.info('This message is not displayed on Loggly!! :(') ``` Also, you do not have a root logger. So it's quite likely that `logging.getLogger(__name__)` won't be caught and handled. That's a general note; your efforts are thorough and not limited by this.
Googled around and saw your post on loggly's support. Did you see their reply and did it help you? <http://community.loggly.com/customer/portal/questions/5898190-django-loggly-app-logs-not-saved>
5,084
24,148,039
I'm trying to use a shared_ptr of a fundamental type (for instance int or double) in Python, but I don't know how to export it to Python. I have the following class: ``` class Holder { public: Holder(int v) : value(new int(v)) {}; boost::shared_ptr<int> value; }; ``` The class is being exported in this way: ``` class_<Holder>("Holder", init<int>()) .def_readwrite("value", &Holder::value); ``` In the Python code, I'm trying to set a holder instance's .value using an instance that already exists: ``` h1 = mk.Holder(10) h2 = mk.Holder(20) h1.value = h2.value ``` The following has occurred: ``` TypeError: No to_python (by-value) converter found for C++ type: class boost::shared_ptr<int> ``` My question is: how can I export `boost::shared_ptr<int>` to Python?
2014/06/10
[ "https://Stackoverflow.com/questions/24148039", "https://Stackoverflow.com", "https://Stackoverflow.com/users/697884/" ]
One can use [`boost::python::class_`](http://www.boost.org/doc/libs/release/libs/python/doc/v2/class.html#class_-spec) to export `boost::shared_ptr<int>` to Python in the same manner as other types: ```cpp boost::python::class_<boost::shared_ptr<int> >(...); ``` However, be careful in the semantics introduced when exposing `shared_ptr`. For example, consider the case where assigning one `Holder.value` to another `Holder.value` simply invokes `boost::shared_ptr<int>`'s assignment operator, and `Holder.increment()` manipulates the `int` pointed to by `value` rather than having `value` point to a new `int`: ``` h1 = Holder(10) h2 = Holder(20) h1.value = h2.value # h1.value and h2.value point to the same int. h1.increment() # Change observed in h2.value. The semantics may be unexpected # by Python developers. ``` --- Here is a complete example exposing `boost::shared_ptr<int>` based on the original code: ```cpp #include <sstream> #include <string> #include <boost/python.hpp> #include <boost/shared_ptr.hpp> class Holder { public: Holder(int v) : value(new int(v)) {}; boost::shared_ptr<int> value; }; std::string holder_value_str(const boost::shared_ptr<int>& value) { std::stringstream stream; stream << value.get() << " contains " << *value; return stream.str(); } BOOST_PYTHON_MODULE(example) { namespace python = boost::python; { python::scope holder = python::class_<Holder>( "Holder", python::init<int>()) .def_readwrite("value", &Holder::value) ; // Holder.Value python::class_<boost::shared_ptr<int> >("Value", python::no_init) .def("__str__", &holder_value_str) ; } } ``` Interactive usage: ```python >>> import example >>> h1 = example.Holder(10) >>> h2 = example.Holder(20) >>> print h1.value 0x25f4bd0 contains 10 >>> print h2.value 0x253f220 contains 20 >>> h1.value = h2.value >>> print h1.value 0x253f220 contains 20 ``` --- Alternatively, if one considers `shared_ptr` as nothing more than a C++ memory management proxy, then it may be reasonable to expose `Holder.value` as an `int` in Python, even though `Holder::value` is a `boost::shared_ptr<int>`. This would provide Python developers with the expected semantics and permit statements such `h1.value = h2.value + 5`. Here is an example that uses auxiliary functions to transform `Holder::value` to and from an `int` instead of exposing the `boost::shared_ptr<int>` type: ```cpp #include <boost/python.hpp> #include <boost/shared_ptr.hpp> class Holder { public: Holder(int v) : value(new int(v)) {}; boost::shared_ptr<int> value; }; /// @brief Auxiliary function used to get Holder.value. int get_holder_value(const Holder& self) { return *(self.value); } /// @brief Auxiliary function used to set Holder.value. void set_holder_value(Holder& self, int value) { *(self.value) = value; } BOOST_PYTHON_MODULE(example) { namespace python = boost::python; python::scope holder = python::class_<Holder>( "Holder", python::init<int>()) .add_property("value", python::make_function(&get_holder_value), python::make_function(&set_holder_value)) ; } ``` Interactive usage: ```python >>> import example >>> h1 = example.Holder(10) >>> h2 = example.Holder(20) >>> assert(h1.value == 10) >>> assert(h2.value == 20) >>> h1.value = h2.value >>> assert(h1.value == 20) >>> h1.value += 22 >>> assert(h1.value == 42) >>> assert(h2.value == 20) ```
Do you need to? Python has its own reference counting mechanism, and it might be simpler just to use that. (But a lot depends on what is going on on the C++ side.) Otherwise: you probably need to define a Python object to contain the shared pointer. This is relatively straightforward: just define something like: ``` struct PythonWrapper { PyObject_HEAD boost::shared_ptr<int> value; // Constructors and destructors can go here, to manage // value. }; ``` And declare and manage it like you would any other object; just make sure you do a `new` whenever objects of the type are created (and they must be created in functions you provide to Python, if nothing else in the function in the `tp_new` field), and a `delete` in the function you put in the `tp_dealloc` field of the type object you register.
5,085
51,201,658
I am trying to learn to code using python on my own but I ran into a problem. I am using python's subprocess module to execute a .bat file, but the process seems to get stuck at the bat file. The python code currently looks like this: ``` import getpass username = getpass.getuser() from subprocess import Popen p = Popen("hidefolder.bat", cwd=r"C:\Users\%s\Desktop" % username) stdout, stderr = p.communicate() import sys sys.exit() ``` And the .bat file looks like this: ``` if exist "C:\Users\%username%\Desktop\HiddenFolder\" ( attrib -s -h "HiddenFolder" rename "HiddenFolder" "Projects" exit ) if exist "C:\Users\%username%\Desktop\Projects\" ( rename "Projects" "HiddenFolder" attrib +s +h "HiddenFolder" exit ) if not exist "C:\Users\%username%\Desktop\HiddenFolder\" ( mkdir "C:\Users\%username%\Desktop\HiddenFolder\" ) exit ``` Is there a way to kill the child process even if the python script is waiting for the child process to be terminated before continuing? Or is the problem in the child process to start with? Thank you in advance.
2018/07/06
[ "https://Stackoverflow.com/questions/51201658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10039856/" ]
You need to use `subprocess.PIPE` for `stdout` and `stderr`, or else they can't be fetched through `Popen.communicate`, which is the reason why your process is stuck. ``` from subprocess import Popen, PIPE import getpass username = getpass.getuser() p = Popen("hidefolder.bat", cwd=r"C:\Users\%s\Desktop" % username, stdout=PIPE, stderr=PIPE) stdout, stderr = p.communicate() import sys sys.exit() ```
I am a new programmer but I could solve my problem by writing the code below. ``` import subprocess subprocess.call([r'ProcurementSoftwareRun.bat']) print ('Software run successful') ``` My bat file was like: ``` @ECHO OFF cmd /c start "" "C:\Program Files (x86)\UserName\ERPModule\PROCUREMENT.exe" exit ```
5,086
45,010,682
I wanted to convert an object of type bytes to binary representation in python 3.x. For example, I want to convert the bytes object `b'\x11'` to the binary representation `00010001` in binary (or 17 in decimal). I tried this: ``` print(struct.unpack("h","\x11")) ``` But I'm getting: ``` error struct.error: unpack requires a bytes object of length 2 ```
2017/07/10
[ "https://Stackoverflow.com/questions/45010682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1987575/" ]
Starting from Python 3.2, you can use [`int.from_bytes`](https://docs.python.org/3/library/stdtypes.html#int.from_bytes). Second argument, `byteorder`, specifies [endianness](https://en.wikipedia.org/wiki/Endianness) of your bytestring. It can be either `'big'` or `'little'`. You can also use `sys.byteorder` to get your host machine's native byteorder. ```py import sys int.from_bytes(b'\x11', byteorder=sys.byteorder) # => 17 bin(int.from_bytes(b'\x11', byteorder=sys.byteorder)) # => '0b10001' ```
Iterating over a bytes object gives you 8-bit ints which you can easily format for output in binary representation: ```py >>> import numpy as np >>> my_bytes = np.random.bytes(10) >>> my_bytes b'_\xd9\xe97\xed\x06\xa82\xe7\xbf' >>> type(my_bytes) bytes >>> my_bytes[0] 95 >>> type(my_bytes[0]) int >>> for my_byte in my_bytes: >>> print(f'{my_byte:0>8b}', end=' ') 01011111 11011001 11101001 00110111 11101101 00000110 10101000 00110010 11100111 10111111 ``` A function for a hex string representation is built in: ``` >>> my_bytes.hex(sep=' ') '5f d9 e9 37 ed 06 a8 32 e7 bf' ```
5,087
72,703,006
I am trying to have this repo on Docker: <https://github.com/facebookresearch/detectron2/tree/main/docker> but when I want to docker compose it, I receive this error: ``` ERROR: Package 'detectron2' requires a different Python: 3.6.9 not in '>=3.7' ``` The default version of Python I am using is 3.10, but I don't know why Docker is trying to run it on Python 3.6.9. Is there a way for me to change it to a higher version of Python while running the following Dockerfile? ``` FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04 # use an older system (18.04) to avoid opencv incompatibility (issue#3524) ENV DEBIAN_FRONTEND noninteractive RUN apt-get update && apt-get install -y \ python3-opencv ca-certificates python3-dev git wget sudo ninja-build RUN ln -sv /usr/bin/python3 /usr/bin/python # create a non-root user ARG USER_ID=1000 RUN useradd -m --no-log-init --system --uid ${USER_ID} appuser -g sudo RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers USER appuser WORKDIR /home/appuser ENV PATH="/home/appuser/.local/bin:${PATH}" RUN wget https://bootstrap.pypa.io/pip/3.6/get-pip.py && \ python3 get-pip.py --user && \ rm get-pip.py # install dependencies # See https://pytorch.org/ for other options if you use a different version of CUDA RUN pip install --user tensorboard cmake # cmake from apt-get is too old RUN pip install --user torch==1.10 torchvision==0.11.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html RUN pip install --user 'git+https://github.com/facebookresearch/fvcore' # install detectron2 RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo # set FORCE_CUDA because during `docker build` cuda is not accessible ENV FORCE_CUDA="1" # This will by default build detectron2 for all common cuda architectures and take a lot more time, # because inside `docker build`, there is no way to tell which architecture will be used. ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing" ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}" RUN pip install --user -e detectron2_repo # Set a fixed model cache directory. ENV FVCORE_CACHE="/tmp" WORKDIR /home/appuser/detectron2_repo # run detectron2 under user "appuser": # wget http://images.cocodataset.org/val2017/000000439715.jpg -O input.jpg # python3 demo/demo.py \ #--config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ #--input input.jpg --output outputs/ \ #--opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl ```
2022/06/21
[ "https://Stackoverflow.com/questions/72703006", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13334873/" ]
This is an [open issue with facebookresearch/detectron2](https://github.com/facebookresearch/detectron2/issues/4335). The developers updated the base Python requirement from 3.6+ to 3.7+ with [commit 5934a14](https://github.com/facebookresearch/detectron2/commit/5934a1452801e669bbf9479ae222ce1a8a51f52e) last week but didn't modify the `Dockerfile`. I've created a `Dockerfile` based on Nvidia CUDA's CentOS8 image (rather than Ubuntu) that should work. ``` FROM nvidia/cuda:11.1.1-cudnn8-devel-centos8 RUN cd /etc/yum.repos.d/ && \ sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-* && \ sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-* && \ dnf check-update; dnf install -y ca-certificates python38 python38-devel git sudo which gcc-c++ mesa-libGL && \ dnf clean all RUN alternatives --set python /usr/bin/python3 && alternatives --install /usr/bin/pip pip /usr/bin/pip3 1 # create a non-root user ARG USER_ID=1000 RUN useradd -m --no-log-init --system --uid ${USER_ID} appuser -g wheel RUN echo '%wheel ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers USER appuser WORKDIR /home/appuser ENV PATH="/home/appuser/.local/bin:${PATH}" # install dependencies # See https://pytorch.org/ for other options if you use a different version of CUDA ARG CXX="g++" RUN pip install --user tensorboard ninja cmake opencv-python opencv-contrib-python # cmake from apt-get is too old RUN pip install --user torch==1.10 torchvision==0.11.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html RUN pip install --user 'git+https://github.com/facebookresearch/fvcore' # install detectron2 RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo # set FORCE_CUDA because during `docker build` cuda is not accessible ENV FORCE_CUDA="1" # This will by default build detectron2 for all common cuda architectures and take a lot more time, # because inside `docker build`, there is no way to tell which architecture will be used. ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing" ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}" RUN pip install --user -e detectron2_repo # Set a fixed model cache directory. ENV FVCORE_CACHE="/tmp" WORKDIR /home/appuser/detectron2_repo # run detectron2 under user "appuser": # curl -o input.jpg http://images.cocodataset.org/val2017/000000439715.jpg # python3 demo/demo.py \ #--config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ #--input input.jpg --output outputs/ \ #--opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl ``` --- Alternatively, this is untested as the following images don't work on my machine (because I run `arm64`) so I can't debug... In the original `Dockerfile`, changing your `FROM` line to this *might* resolve it, but I haven't verified this (and the image mentioned in the issue (`pytorch/pytorch:1.10.0-cuda11.3-cudnn8-devel`) might work as well. ``` FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu20.04 ```
You can use pyenv: <https://github.com/pyenv/pyenv> Just google `docker pyenv container`, which will give you some entries like: <https://gist.github.com/jprjr/7667947> If you follow the gist you can see how it has been updated; it is very easy to update to the latest Python that pyenv supports, anything from 2.2 to 3.11. The only drawback is that the container becomes quite large because it holds all the glibc development tools and libraries needed to compile CPython, but this often helps in case you need modules without wheels and need to compile, because the toolchain is already there. Below is a minimal pyenv Dockerfile. Just change the PYTHONVER or set a --build-arg to any Python version pyenv supports (`pyenv install -l`): ``` FROM ubuntu:22.04 ARG MYHOME=/root ENV MYHOME ${MYHOME} ARG PYTHONVER=3.10.5 ENV PYTHONVER ${PYTHONVER} ARG PYTHONNAME=base ENV PYTHONNAME ${PYTHONNAME} ARG DEBIAN_FRONTEND=noninteractive RUN apt-get update && apt-get upgrade -y && \ apt-get install -y locales wget git curl zip vim apt-transport-https tzdata language-pack-nb language-pack-nb-base manpages \ build-essential libjpeg-dev libssl-dev xvfb zlib1g-dev libbz2-dev libreadline-dev libreadline6-dev libsqlite3-dev tk-dev libffi-dev libpng-dev libfreetype6-dev \ libx11-dev libxtst-dev libfontconfig1 lzma lzma-dev RUN git clone https://github.com/pyenv/pyenv.git ${MYHOME}/.pyenv && \ git clone https://github.com/yyuu/pyenv-virtualenv.git ${MYHOME}/.pyenv/plugins/pyenv-virtualenv && \ git clone https://github.com/pyenv/pyenv-update.git ${MYHOME}/.pyenv/plugins/pyenv-update SHELL ["/bin/bash", "-c", "-l"] COPY ./.bash_profile /tmp/ RUN cat /tmp/.bash_profile >> ${MYHOME}/.bashrc && \ cat /tmp/.bash_profile >> ${MYHOME}/.bash_profile && \ rm -f /tmp/.bash_profile && \ source ${MYHOME}/.bash_profile && \ pyenv install ${PYTHONVER} && \ pyenv virtualenv ${PYTHONVER} ${PYTHONNAME} && \ pyenv global ${PYTHONNAME} ``` and the pyenv config to be saved as .bash_profile in the Dockerfile directory: ``` # profile for pyenv export PYENV_ROOT="$HOME/.pyenv" export PATH="$PYENV_ROOT/bin:$PATH" eval "$(pyenv init -)" eval "$(pyenv init --path)" eval "$(pyenv virtualenv-init -)" ``` Build with `docker build -t pyenv:3.10.5 .` This will build the image, but as said it is quite big: ``` docker images REPOSITORY TAG IMAGE ID CREATED SIZE pyenv 3.10.5 64a4b91364d4 2 minutes ago 1.04GB ``` It is very easy to test any Python version by only changing PYTHONVER: ``` docker run -ti pyenv:3.10.5 /bin/bash (base) root@968fd2178c8a:/# python --version Python 3.10.5 (base) root@968fd2178c8a:/# which python /root/.pyenv/shims/python ``` If I build with `docker build -t pyenv:3.12-dev --build-arg PYTHONVER=3.12.dev .` or change the PYTHONVER in the Dockerfile: ``` docker run -ti pyenv:3.12-dev /bin/bash (base) root@c7245ea9f52e:/# python --version Python 3.12.0a0 ```
5,088
46,830,144
There seem to be two kinds of generator-based coroutine: 1. From [a reply](https://stackoverflow.com/a/46203922/156458) by Jim Fasarakis Hilliard: > > **Generator-based coroutine**: A generator (`def` + `yield`) that is wrapped by [`types.coroutine`](https://docs.python.org/3/library/types.html#types.coroutine) . You need to wrap it in > `types.coroutine` if you need it to be considered a coroutine object. > > > 2. From Python in a Nutshell, which doesn't explicitly call it "generator-based coroutine": > > When you write Python code based on `asyncio` (ideally also using > add-on modules from asyncio.org), you’ll usually be writing > coroutine functions. Up to Python 3.4 included, such functions > are generators using the `yield from` statement covered in “yield > from (v3-only)” on page 95, decorated with `@asyncio.coroutine` , > covered in “asyncio coroutines” on page 518; > > > From <https://www.python.org/dev/peps/pep-0492/#differences-from-generators> > > generator-based coroutines (for asyncio code must be decorated with @asyncio.coroutine) > > > <http://masnun.com/2015/11/13/python-generators-coroutines-native-coroutines-and-async-await.html> also calls it "generator-based coroutine". Are the two kinds of generator-based coroutines the same concept? If not, what are their differences in purposes and usages? Thanks.
2017/10/19
[ "https://Stackoverflow.com/questions/46830144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/156458/" ]
They're the same kind of coroutine. `types.coroutine` and `asyncio.coroutine` are just two separate ways to create them. `asyncio.coroutine` is older, predating the introduction of `async` coroutines, and its functionality has shifted somewhat from its original behavior now that `async` coroutines exist. `asyncio.coroutine` and `types.coroutine` have subtly different behavior, especially if applied to anything other than a generator function, or if asyncio is in [debug mode](https://docs.python.org/3/using/cmdline.html#envvar-PYTHONASYNCIODEBUG).
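A minimal sketch of the two creation routes side by side (runs on Python 3.7–3.10; `asyncio.coroutine` was deprecated in 3.8 and removed in 3.11):

```
import asyncio
import types

@types.coroutine
def coro_a():
    yield  # a bare yield, like asyncio's internal __sleep0

@asyncio.coroutine  # the older asyncio way of marking a generator
def coro_b():
    yield from asyncio.sleep(0)

async def main():
    await coro_a()  # both kinds can be awaited from a native coroutine
    await coro_b()

asyncio.run(main())
```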
As far as I’m concerned, `async def` is the **proper** way to define a coroutine. `yield` and `yield from` have their purpose in generators, and they are also used to implement “futures”, which are the low-level mechanism that handles switching between different coroutine contexts. I did [this diagram](https://default-cube.deviantart.com/art/Van-Rossum-s-Triangle-679791228) a few months ago to summarize the relationships between them. But frankly, you can safely ignore the whole business. Event loops have the job of handling all the low-level details of managing the execution of coroutines, so use one of those, like [asyncio](https://docs.python.org/3/library/asyncio.html). There are also `asyncio`-compatible wrappers for other event loops, like my own [`glibcoro`](https://github.com/ldo/glibcoro) for GLib/GTK. In other words, stick to the `asyncio` API, and you can write “event-loop-agnostic” coroutines!
5,089
49,922,073
I just installed termcolor for Python 2.7 on Windows 8.1. When I try to print colored text, I get strange output. ``` from termcolor import colored print colored('Hello world','red') ``` Here is the result: ``` [31mHello world[0m ``` Help me to get out of this problem. Thanks in advance.
2018/04/19
[ "https://Stackoverflow.com/questions/49922073", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9467325/" ]
See this [stackOverflow](https://stackoverflow.com/questions/287871/how-to-print-colored-text-in-terminal-in-python) post. It basically says that in order to get the escape sequences working on Windows, you need to run os.system('color') first. For example: ``` import termcolor import os os.system('color') print(termcolor.colored("Stack Overflow", "green")) ```
`termcolor` or `colored` works perfectly fine under Python 2.7 and I can't replicate your error on my Mac/Linux. If you look into the source code of `colored`, it basically prints the string in the format ``` '\033[%dm%s\033[0m' % (COLORS[color], text) ``` Somehow your terminal environment does not recognise the non-printing escape sequences that are used on unix/linux systems for setting the foreground color of xterm.
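A minimal demo of that escape sequence written by hand (31 is the ANSI code for red; it only renders as color in terminals that interpret ANSI sequences):

```
RED = 31
text = 'Hello world'
print('\033[%dm%s\033[0m' % (RED, text))  # prints "Hello world" in red
```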
5,090
3,079,684
As you know, Windows has an "Add/Remove Programs" system in the Control Panel. Let's say I am preparing an installer and I want to register my program in the list of installed programs and want it to be uninstallable from "Add/Remove Programs". Which mechanism should I use? Are there any tutorials or docs about registering programs in that list? I am coding with Python and I can use WMI (Windows Management Instrumentation) or the Win32 API. IMHO, it is done with registry keys but I am not sure about it. I also want to execute an uninstaller upon uninstallation to remove installed files. Any related docs or tutorials are highly appreciated. Thanks.
2010/06/20
[ "https://Stackoverflow.com/questions/3079684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/54929/" ]
As stated on IRC: "Windows keeps its uninstall information in the registry." It's in the HKLM\Software\Microsoft\Windows\CurrentVersion\Uninstall\ keys. You need a few things from the Win32 API, but I believe there's a fair amount of Python support for the Win32 API. Basically, you create a key in ...\Uninstall\ with a unique name (like "MyApp") with a few special values stashed in there. Add/Remove Programs looks through there. It's pretty self-explanatory.
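A minimal sketch with Python 3's winreg module (`_winreg` on Python 2), assuming the installer runs with admin rights; "MyApp" and the paths are placeholders:

```
import winreg

# Register "MyApp" so it appears in Add/Remove Programs
key_path = r"Software\Microsoft\Windows\CurrentVersion\Uninstall\MyApp"
with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    winreg.SetValueEx(key, "DisplayName", 0, winreg.REG_SZ, "My App 1.0")
    # the command Windows runs when the user clicks Uninstall
    winreg.SetValueEx(key, "UninstallString", 0, winreg.REG_SZ,
                      r'"C:\Program Files\MyApp\uninstall.exe"')
```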
Inno Setup is open source so perhaps you can get some ideas from that.
5,091
53,119,083
In the [`xonsh`](https://github.com/xonsh/xonsh/) shell how can I receive from a pipe to a python expression? Example with a `find` command as pipe provider: ``` find $WORKON_HOME -name pyvenv.cfg -print | for p in <stdin>: $(ls -dl @(p)) ``` The `for p in <stdin>:` is obviously pseudo code. What do I have to replace it with? Note: In bash I would use a construct like this: ``` ... | while read p; do ... done ```
2018/11/02
[ "https://Stackoverflow.com/questions/53119083", "https://Stackoverflow.com", "https://Stackoverflow.com/users/65889/" ]
The easiest way to pipe input into a Python expression is to use a function that is a [callable alias](https://xon.sh/tutorial.html#callable-aliases), which happens to accept a stdin file-like object. For example, ``` def func(args, stdin=None): for line in stdin: ls -dl @(line.strip()) find $WORKON_HOME -name pyvenv.cfg -print | @(func) ``` Of course you could skip the `@(func)` by putting func in `aliases`, ``` aliases['myls'] = func find $WORKON_HOME -name pyvenv.cfg -print | myls ``` Or if all you wanted to do was iterate over the output of `find`, you don't even need to pipe. ``` for line in !(find $WORKON_HOME -name pyvenv.cfg -print): ls -dl @(line.strip()) ```
Drawing on the answer from [Anthony Scopatz](https://stackoverflow.com/users/2312428/anthony-scopatz) you can do this on one line with a [callable alias](https://xon.sh/tutorial.html#callable-aliases) as a lambda. The function takes the third form, `def mycmd2(args, stdin=None)`. I discarded `args` with `_` because I don't need it and shortened `stdin` to `s` for convenience. Here is a command to hash a file: ``` type image.qcow2 | @(lambda _,s: hashlib.sha256(s.read()).hexdigest()) ```
5,094
62,032,878
I am new to the eBPF & XDP topic and want to learn it. My question is how to use an eBPF filter to filter packets on a specific payload match. For example, if the data (payload) of the packet is 1234 it passes to the network stack; otherwise the packet is blocked. I managed to match on the payload length: if I match the message payload length it works fine, but when I start matching the payload characters I get an error. Here is my code:

```c
int ret_val;
unsigned long payload_offset;
unsigned long payload_size;
const char *payload = "test";

struct ethhdr *eth = data;
if ((void*)eth + sizeof(*eth) <= data_end) {
    struct iphdr *ip = data + sizeof(*eth);
    if ((void*)ip + sizeof(*ip) <= data_end) {
        if (ip->protocol == IPPROTO_UDP) {
            struct udphdr *udp = (void*)ip + sizeof(*ip);
            if ((void*)udp + sizeof(*udp) <= data_end) {
                if (udp->dest == ntohs(5005)) {
                    payload_offset = sizeof(struct udphdr);
                    payload_size = ntohs(udp->len) - sizeof(struct udphdr);
                    unsigned char *s = (unsigned char *)&payload_size;

                    if (ret_val == __builtin_memcmp(s,payload,4) == 0) {
                        return XDP_DROP;
                    }
                }
            }
        }
    }
}
```

The error has been removed, but I am unable to compare the payload... I am sending the UDP message from Python socket code. If I compare the payload length it works fine.
2020/05/26
[ "https://Stackoverflow.com/questions/62032878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13623550/" ]
What did you try? You should probably read a bit more about eBPF to understand how to process packets; the basic example you give does not sound too complicated. [Simple BPF parsing examples](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/samples/bpf/parse_simple.c/?h=v5.6) might help you understand the principles:

1. Start from the beginning of the first header (e.g. Ethernet).
2. Check that the packet is long enough to hold the header (or you would risk an out-of-bounds access when trying to access the upper layers otherwise).
3. Add the header length to get the offset of your next header (e.g. IPv4, then e.g. TCP...).
4. Rinse and repeat.

In your case you would **process all headers until you get the offset of the data payload**. Note that this is trivial if the traffic you try to match always has the same headers (e.g. always IPv4 and UDP), but you get more cases to sort out if there is a mix (IPv4 + IPv6, encapsulation, IPv4 options...).

Once you have the offset for your data, **just compare the data at this offset** to your pattern (which you may hardcode in the BPF program or get from a BPF map, depending on your use case). Note that you [do not have access to `strcmp()`](https://stackoverflow.com/a/60384186/3716552), but `__builtin_memcmp()` is available if you need to compare more than 64 bits.

(All the above applies, of course, to a C program that you would compile into an object file containing eBPF instructions with the LLVM back-end.)

If you were to search for a string at an arbitrary offset in the payload, know that eBPF now supports (bounded) loops since kernel 5.3 (if I remember correctly).
Your edit is pretty much a new question, so here is an updated answer. Please consider opening a new question instead in the future. There are a number of things that are wrong in your program. In particular:

```c
1| payload_offset = sizeof(struct udphdr);
2| payload_size = ntohs(udp->len) - sizeof(struct udphdr);
3| unsigned char *s = (unsigned char *)&payload_size;
4|
5| if (ret_val == __builtin_memcmp(s, payload, 4) == 0) {
6|     return XDP_DROP;
7| }
```

* On line 1, your `payload_offset` variable is not an offset; it just contains the length of the UDP header. You would need to add that to the start of the UDP header to get the actual payload offset.
* Line 2 is fine.
* Line 3 does not make any sense! You make `s` (which you later compare to your pattern) point towards *the size of the payload*? (a.k.a. "I told you so in the comments! :)"). Instead, it should point to... the beginning of the payload, maybe? So, basically, `data + payload_offset` (once the offset is fixed).
* Between lines 3 and 5, the check on payload length is missing. When you try to access your payload in `s` (`__builtin_memcmp(s, payload, 4)`), you try to compare four bytes of packet data; you *must* ensure that the packet is long enough to read those four bytes (just as you checked the length each time before you read from an Ethernet, IP or UDP header field).
* While at it, we can also check that the length of the payload is equal to the length of the pattern to match, and exit if they differ without having to compare the bytes.
* Line 5 has a `==` instead of `=`, as discussed in the comments. Easy to fix.

However, I had no luck with `__builtin_memcmp()` for your program; it seems LLVM does not want to inline it and turns it into a failing function call. Never mind, we can work without it. For your example, you can cast to `int` and compare the four-byte long values directly. For longer patterns, and for recent kernels (or by unrolling if the pattern size is fixed), we can use bounded loops. Here is an amended version of your program that works on my setup.

```c
#include <arpa/inet.h>
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/udp.h>

int xdp_func(struct xdp_md *ctx)
{
    void *data_end = (void *)(long)ctx->data_end;
    void *data = (void *)(long)ctx->data;
    char match_pattern[] = "test";
    unsigned int payload_size, i;
    struct ethhdr *eth = data;
    unsigned char *payload;
    struct udphdr *udp;
    struct iphdr *ip;

    if ((void *)eth + sizeof(*eth) > data_end)
        return XDP_PASS;

    ip = data + sizeof(*eth);
    if ((void *)ip + sizeof(*ip) > data_end)
        return XDP_PASS;
    if (ip->protocol != IPPROTO_UDP)
        return XDP_PASS;

    udp = (void *)ip + sizeof(*ip);
    if ((void *)udp + sizeof(*udp) > data_end)
        return XDP_PASS;
    if (udp->dest != ntohs(5005))
        return XDP_PASS;

    payload_size = ntohs(udp->len) - sizeof(*udp);
    // Here we use "size - 1" to account for the final '\0' in "test".
    // This '\0' may or may not be in your payload, adjust if necessary.
    if (payload_size != sizeof(match_pattern) - 1)
        return XDP_PASS;

    // Point to start of payload.
    payload = (unsigned char *)udp + sizeof(*udp);
    if ((void *)payload + payload_size > data_end)
        return XDP_PASS;

    // Compare each byte, exit if a difference is found.
    for (i = 0; i < payload_size; i++)
        if (payload[i] != match_pattern[i])
            return XDP_PASS;

    // Same payload, drop.
    return XDP_DROP;
}
```
5,095
54,468,348
From the cmd window I have to do this every time I run a script: ``` C:\>cd C:\Users\my name\AppData\Local\Programs\Python\Python37 C:\Users\my name\AppData\Local\Programs\Python\Python37>python "C:\\Users\\my name\\AppData\\Local\\Programs\\Python\\Python37\\scripts\\helloWorld.py" hello world ``` How can I get away from having to paste in all of the paths? I tried this and a few other things: <https://www.youtube.com/watch?v=Y2q_b4ugPWk> thanks!
2019/01/31
[ "https://Stackoverflow.com/questions/54468348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3524158/" ]
You need to pay attention to the current working directory of your Python interpreter: it is the directory you are in when you execute the interpreter, and Python relies on that path to look for the script you pass in. If you're inside the script already, you can easily check it with the `os.getcwd()` method. In your case, you could have easily done this instead:

```
C:\Users\my name\AppData\Local\Programs\Python\Python37>python "scripts\helloWorld.py"
hello world
```

Since your current working directory is `C:\Users\my name\AppData\Local\Programs\Python\Python37`, you just need to give it the relative path `scripts\helloWorld.py`. The current working directory can be easily visualized like this:

```
# cwd.py
import os
print("Current Working Directory is " + os.getcwd())
```

And then when you run the script:

```
C:\Users\MyUserName\Documents>python cwd.py
Current Working Directory is C:\Users\MyUserName\Documents

C:\Users\MyUserName\Documents\Some\Other\Path>python cwd.py
Current Working Directory is C:\Users\MyUserName\Documents\Some\Other\Path
```

Note that in any case, if `cwd.py` were not in the current working directory or in your PATH environment variable, the Python interpreter would complain that it couldn't find the script (because why should it know where your script is stored?). If you insist on adding the environment variable though, you will need to add the directory to your `PATH` or `PYTHONPATH`... though I have a feeling `\Python37` is already under there.
There is a designated directory where you can put your .py scripts if you want to invoke them without specifying the full path. Setting this up correctly will allow you to run the script simply by invoking the script name (if the .py extension is registered to the interpreter and not an editor). Windows ======= If you have a per-user python installation - which is the installer default - the directory is: ``` %LOCALAPPDATA%/python/python39/Scripts ``` Adjust version number as needed. If you have a system-wide all-user installation, the directory is: ``` %APPDATA%/python/python39/Scripts ``` ### Configuring PATH automatically The Windows python installer includes an [option to automatically add](https://docs.python.org/3/using/windows.html#finding-the-python-executable) this directory (plus the python interpreter path) to your `PATH` environment variable during installation. Select the checkbox at the bottom or use the `PrependPath=1` CLI option. [![Python installer dialog showing add-to-path checkbox](https://i.stack.imgur.com/3MBsR.png)](https://i.stack.imgur.com/3MBsR.png) If python is already installed, you can still use the installer to do this. In Control Panel, `Programs and Features`, select the python entry and choose "Uninstall/Change". Then choose "Modify" and select the "Add Python to PATH" checkbox. Alternatively, if you want to add it manually - search for the `Edit environment variables for your account` in Windows 10. Edit the `PATH` variable in that dialog to add the directory. Linux ===== ``` ~/.local/bin ```
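To see where these directories resolve on a given installation, a quick sketch using the standard library:

```python
# Quick sketch: print the scripts directory that the current
# Python installation uses, via the standard sysconfig module.
import sysconfig

print(sysconfig.get_path("scripts"))
```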
5,096
63,841,244
I have been trying to scrape data from [this site](http://www.indianbluebook.com/). I need to fill **Get the precise price of your car** form ie. the year, make, model etc.. I have written the following code till now: ``` import requests import time from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.ui import WebDriverWait, Select from selenium.common.exceptions import NoSuchElementException from selenium.common.exceptions import StaleElementReferenceException from selenium.webdriver.support import expected_conditions from bs4 import BeautifulSoup import re chrome_options = webdriver.ChromeOptions() driver = webdriver.Chrome('chromedriver_win32/chromedriver.exe', options=chrome_options) url = "http://www.indianbluebook.com/" driver.get(url) save_city = driver.find_element_by_xpath('//*[@id="cityPopup"]/div[2]/div/div[2]/form/div[2]/div/a[1]').click() #Bangalore #fill year year_dropdown = Select(driver.find_element_by_xpath('//*[@id="car_value"]/div[2]/div[1]/div[1]/div/select')) driver.implicitly_wait(50) year_dropdown.select_by_value('2020') time.sleep(5) ``` But, its giving this error: ``` ElementNotInteractableException Traceback (most recent call last) <ipython-input-25-a4eb8001e649> in <module> 8 year_dropdown = Select(driver.find_element_by_xpath('//*[@id="car_value"]/div[2]/div[1]/div[1]/div/select')) 9 driver.implicitly_wait(50) ---> 10 year_dropdown.select_by_value('2020') 11 12 time.sleep(5) ~\anaconda3\lib\site-packages\selenium\webdriver\support\select.py in select_by_value(self, value) 80 matched = False 81 for opt in opts: ---> 82 self._setSelected(opt) 83 if not self.is_multiple: 84 return ~\anaconda3\lib\site-packages\selenium\webdriver\support\select.py in _setSelected(self, option) 210 def _setSelected(self, option): 211 if not option.is_selected(): --> 212 option.click() 213 214 def _unsetSelected(self, option): ~\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py in click(self) 78 def click(self): 79 """Clicks the element.""" ---> 80 self._execute(Command.CLICK_ELEMENT) 81 82 def submit(self): ~\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py in _execute(self, command, params) 631 params = {} 632 params['id'] = self._id --> 633 return self._parent.execute(command, params) 634 635 def find_element(self, by=By.ID, value=None): ~\anaconda3\lib\site-packages\selenium\webdriver\remote\webdriver.py in execute(self, driver_command, params) 319 response = self.command_executor.execute(driver_command, params) 320 if response: --> 321 self.error_handler.check_response(response) 322 response['value'] = self._unwrap_value( 323 response.get('value', None)) ~\anaconda3\lib\site-packages\selenium\webdriver\remote\errorhandler.py in check_response(self, response) 240 alert_text = value['alert'].get('text') 241 raise exception_class(message, screen, stacktrace, alert_text) --> 242 raise exception_class(message, screen, stacktrace) 243 244 def _value_or_default(self, obj, key, default): ElementNotInteractableException: Message: element not interactable: Element is not currently visible and may not be manipulated (Session info: chrome=85.0.4183.102) ``` Note: I have tried many available solutions on internet like using Expected conditions with WebDriverWait. Sometimes I get the error, `StaleElementException`. I don't know what to do now. Please help. I'm new to this.
2020/09/11
[ "https://Stackoverflow.com/questions/63841244", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9952858/" ]
You can use the below approach to achieve the same. ``` #Set link according to data need driver.get('http://www.indianbluebook.com/') #Wait webpage to fully load necessary tables def ajaxwait(): for i in range(1, 30): x = driver.execute_script("return (window.jQuery != null) && jQuery.active") time.sleep(1) if x == 0: print("All Ajax element loaded successfully") break ajaxwait() wait = WebDriverWait(driver, 20) wait.until(EC.element_to_be_clickable((By.XPATH, "//*[@id='cityPopup']/div[2]/div/div[2]/form/div[2]/div/a[1]"))) save_city = driver.find_element_by_xpath('//*[@id="cityPopup"]/div[2]/div/div[2]/form/div[2]/div/a[1]').click() #Bangalore ajaxwait() #fill year wait.until(EC.element_to_be_clickable((By.XPATH, "//div[@class='form-group']//select[@class='form-control' and @name='manufacture_year']/following-sibling::div/a"))) #//div[@class='form-group']//select[@class='form-control' and @name='manufacture_year'] this is the only unique elemnt with reference to this we can find other element. #click on select year field then a dropdown will be open we will enter the year in the input box. Then select the item from the ul list. driver.find_element_by_xpath("//div[@class='form-group']//select[@class='form-control' and @name='manufacture_year']/following-sibling::div/a").click() driver.find_element_by_xpath("//div[@class='form-group']//select[@class='form-control' and @name='manufacture_year']/following-sibling::div//input").send_keys("2017") driver.find_element_by_xpath("//div[@class='form-group']//select[@class='form-control' and @name='manufacture_year']/following-sibling::div//em").click() ``` Similarly you can select other drop down by changing the `@name='manufacture_year'` attribute value. Note: Updated the code with Ajax wait.
To click on **BANGALORE** and then select **2020** from the dropdown, you need to induce [WebDriverWait](https://stackoverflow.com/questions/49775502/webdriverwait-not-working-as-expected/49775808#49775808) for the `element_to_be_clickable()` and you can use the following [Locator Strategies](https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890): * Using `XPATH`: ``` driver.get('http://www.indianbluebook.com/') WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.LINK_TEXT, "BANGALORE"))).click() WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//span[text()='Select Year']"))).click() WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//ul[@class='chosen-results']//li[@class='active-result' and text()='2020']"))).click() ``` * **Note**: You have to add the following imports : ``` from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC ``` * Browser Snapshot: [![indianbluebook](https://i.stack.imgur.com/l65n2.png)](https://i.stack.imgur.com/l65n2.png)
5,097
59,410,323
So I have a CSV file which is of the form -

```
No. Name Money
1 Tom Cat 100
2 Dan Man 200
3 Marie Claw300
4 Catherine K. 400
```

I need to detect if some part of my second column's data is in my third column. Is there a way in Python to do this efficiently? Also, this is a made-up example; the whole dataset contains many cases like this, and it is not a one-off incident.

Edit - Expected output:

```
No. Name Money
1 Tom Cat 100
2 Dan Man 200
3 Marie Claw 300
4 Catherine K. 400
```
2019/12/19
[ "https://Stackoverflow.com/questions/59410323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12021224/" ]
Unfortunately you cannot use Blade syntax within Vue unless you are writing the Vue code directly in the Blade template, which would not be best practice. One thing I have found helpful is to write out all my Laravel API routes in a Google Doc so they are easier to look up when referencing them in Vue. I hope that helps!
You can only use Blade syntax if you're in a `.blade` file. You have to statically set this route (or others) when calling an API. NOT RECOMMENDED: alternatively, you can define a JS variable in your "master" Blade file, which you then use in the `Register.vue` file.
5,098
55,975,930
I'm locally running a standard app engine environment through dev\_appserver and cannot get rid of the following error: > > ImportError: No module named google.auth > > > Full traceback (replaced personal details with `...`): ``` Traceback (most recent call last): File "/Users/.../google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle handler = _config_handle.add_wsgi_middleware(self._LoadHandler()) File "/Users/.../google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler handler, path, err = LoadObject(self._handler) File "/Users/.../google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 85, in LoadObject obj = __import__(path[0]) File "/Users/.../.../main.py", line 6, in <module> from services.get_campaigns import get_campaigns File "/Users/.../.../get_campaigns.py", line 3, in <module> from googleads import adwords File "/Users/.../.../lib/googleads/__init__.py", line 17, in <module> from ad_manager import AdManagerClient File "/Users/.../lib/googleads/ad_manager.py", line 28, in <module> import googleads.common File "/Users/.../lib/googleads/common.py", line 51, in <module> import googleads.oauth2 File "/Users/.../lib/googleads/oauth2.py", line 28, in <module> import google.auth.transport.requests File "/Users/.../google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/python/runtime/sandbox.py", line 1154, in load_module raise ImportError('No module named %s' % fullname) ImportError: No module named google.auth ``` I do have google.auth installed, as `pip show google.auth` shows: ``` Name: google-auth Version: 1.6.3 Summary: Google Authentication Library Home-page: https://github.com/GoogleCloudPlatform/google-auth-library-python Author: Google Cloud Platform Author-email: jonwayne+google-auth@google.com License: Apache 2.0 Location: /Users/.../Library/Python/2.7/lib/python/site-packages Requires: rsa, pyasn1-modules, cachetools, six Required-by: googleads, google-auth-oauthlib, google-auth-httplib2, google-api-python-client ``` I have already upgraded all modules that require google.auth - *googleads, google-auth-oauthlib, google-auth-httplib2, google-api-python-clien*t - but without results. I'm not quite sure what next actions to take in order to debug this issue. Anyone here can point me in the right direction?
2019/05/03
[ "https://Stackoverflow.com/questions/55975930", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3793914/" ]
Your `google.auth` is installed in the system's Python site packages, not in your app: > > Location: /Users/.../Library/Python/2.7/lib/python/site-packages > > > You need to install your app's python dependencies inside your app instead - note the `-t lib/` pip option in the [Copying a third-party library](https://cloud.google.com/appengine/docs/standard/python/tools/using-libraries-python-27#copying_a_third-party_library) procedure you should follow: > > 2. Use [pip](https://pypi.python.org/pypi/pip) (version 6 or later) with the `-t <directory>` flag to copy the libraries into the folder you created in the previous step. > For example: > > > > ``` > pip install -t lib/ <library_name> > > ``` > > >
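On the app side, a first-generation (python27) App Engine app then typically registers that `lib/` folder at startup so the vendored packages become importable. A minimal sketch of the usual `appengine_config.py`:

```python
# appengine_config.py -- minimal sketch for a python27 App Engine app:
# make packages installed with `pip install -t lib/` importable.
from google.appengine.ext import vendor

vendor.add('lib')
```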
After much trial and error, I found the bug: a Python runtime version issue. In my app.yaml file I had specified:

```
service: default
runtime: python27
api_version: 1
threadsafe: false
```

There I changed the runtime to:

```
runtime: python37
```

Thanks to @AlassaneNdiaye for pointing me in this direction in the comments.
5,100
56,317,630
I am new to Python and I am working with a CSV file with over 10000 rows. In my CSV file there are many rows with the same id, which I would like to merge into one row while also combining their information. For instance, the data.csv looks like this (id and info are the names of the columns):

```
id| info
1112| storage is full and needs extra space
1112| there is many problems with space
1113| pickup cars come and take the garbage
1113| payment requires for the garbage
```

and I want to get the output as:

```
id| info
1112| storage is full and needs extra space there is many problems with space
1113| pickup cars come and take the garbage payment requires for the garbage
```

I already looked at a few posts such as [1](https://stackoverflow.com/questions/35182749/merging-rows-with-the-same-id-variable) [2](https://stackoverflow.com/questions/39646345/pandas-merging-rows-with-the-same-value-and-same-index) [3](https://stackoverflow.com/questions/41915138/how-can-i-merge-csv-rows-that-have-the-same-value-in-the-first-cell) but none of them helped me to answer my question. It would be great if you could describe your help with Python code that I can run and learn from on my side. Thank you
2019/05/26
[ "https://Stackoverflow.com/questions/56317630", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8921989/" ]
I can think of a simpler way:

```
some_dict = {}

for idt, txt in reader: #~ unpack id and info from your csv reader.
    some_dict[idt] = some_dict.get(idt, "") + txt
```

It should create your dream structure without imports, and I hope in the most efficient way. Just to understand: `get` has a second argument, which is what it returns if the key isn't found in the dict. So it starts from an empty string and appends the text; if something was already stored, the new text is appended to it.

**@Edit:** Here is a complete example with a reader :).

```
import csv
import pandas as pd

some_dict = {}

with open('file.csv') as f:
    reader = csv.reader(f)
    for idt, info in reader:
        temp = some_dict.get(idt, "")
        some_dict[idt] = temp+" "+info if temp else info

print(some_dict)
df = pd.Series(some_dict).to_frame("Title of your column")
```

This is a full program which should work for you. But it won't work if you have more than 2 columns in the file; then you can just replace `idt, info` with `row` and use indexes for the first and second elements.

**@Next Edit:** For more than 2 columns:

```
import csv
import pandas as pd

some_dict = {}

with open('file.csv') as f:
    reader = csv.reader(f)
    for row in reader:
        temp = some_dict.get(row[0], "")
        some_dict[row[0]] = temp+" "+row[1] if temp else row[1]
        #~ There you can add something with other columns if you want.
        #~ Example: another_dict[row[2]] = another_dict.get(row[2], "") + row[3]

print(some_dict)
df = pd.Series(some_dict).to_frame("Title of your column")
```
Just make a dictionary where the ids are keys:

```
from collections import defaultdict

by_id = defaultdict(list)

for id, info in your_list:
    by_id[id].append(info)

for key, value in by_id.items():
    print(key, value)
```
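If pandas is already in the picture (the desired output is essentially a grouped table), the same merge can be written in a couple of lines with `groupby`; a sketch assuming the CSV has `id` and `info` columns as in the example:

```python
# Sketch: merge rows sharing the same id, joining their info text.
# Assumes a header row with columns named "id" and "info".
import pandas as pd

df = pd.read_csv("data.csv")
merged = df.groupby("id", as_index=False)["info"].agg(" ".join)
merged.to_csv("merged.csv", index=False)
```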
5,101
25,012,031
I've migrated a Liferay 6.2-CE-GA2 server from Liferay 6.1.1-ce-ga2. I made a few changes in custom hooks and themes to addapt to the new version. On locale I have never had a problem with memory nor with the 6.1 version, but once in production, server runs out of memory in a few hours. I tried to adjust heap parameters and increasing server memory (from 2GB to 3GB) but it seems that the heap keeps growking slowly but non-stopping, until I get an `OutOfMemory: Java heap space` or, if I grant bigger limits to the heap, system runs out of memory. I've been some days studying catalina.out, trying to minimize warnings and errors and this is the only interesting things I've seen during a shutdown-reboot process (I replaced all domain names on logs): ``` [on shutdown] WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked. WARN: Please see http://www.slf4j.org/codes.html#release for an explanation. [...] 28-xul-2014 16:53:51 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [] appears to have started a thread named [org.python.google.common.base.internal.Finalizer] but has failed to stop it. This is very likely to create a memory leak. 28-xul-2014 16:53:51 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [] appears to have started a thread named [Thread-26] but has failed to stop it. This is very likely to create a memory leak. 28-xul-2014 16:53:51 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [] appears to have started a thread named [Thread-27] but has failed to stop it. This is very likely to create a memory leak. [...] 16:55:40,517 ERROR [liferay/hot_deploy-1][JDBCExceptionReporter:82] Batch entry 0 insert into CalendarBooking (uuid_, groupId, companyId, userId, userName, createDate, modifiedDate, resourceBlockId, calendarId, calendarResourceId, parentCalendarBookingId, title, description, location, startTime, endTime, allDay, recurrence, firstReminder, firstReminderType, secondReminder, secondReminderType, status, statusByUserId, statusByUserName, statusDate, calendarBookingId) values ('985aac08-6457-484c-becb-2c4964805158', '10545', '10154', '10196', 'Carlos Ces', '2012-06-06 06:26:41.431000 +01:00:00', '2012-06-06 06:26:41.431000 +01:00:00', '0', '550617', '550571', '565201', 'Master Class de improvisación e tango contemporáneo con Jorge Retamoza', '<p>_ <a href="http://www.rrrrrr.es/cultura/-/blogs/master-class-de-improvisacion-e-tango-contemporaneo-con-jorge-retamoza">Master Class con Jorge Retamoza. </a></p>_<p>_ <a href="http://www.rrrrrr.es/cultura/-/blogs/master-class-de-improvisacion-e-tango-contemporaneo-con-jorge-retamoza">Organiza: Escola de Música de rrrrrr. Colabora: Concello de rrrrrr.</a></p>', 'Auditorio de rrrrrr', '1339246800000', '1339257600000', '0', '', '900000', 'email', '300000', 'email', '0', '10196', 'Carlos Ces', '2012-06-06 06:26:41.431000 +01:00:00', '565201') was aborted. Call getNextException to see the cause. 
[Sanitized] 16:55:40,518 ERROR [liferay/hot_deploy-1][JDBCExceptionReporter:82] ERROR: duplicate key value violates unique constraint "ix_f4c61797" 16:55:40,536 ERROR [liferay/hot_deploy-1][SerialDestination:68] Unable to process message {destinationName=liferay/hot_deploy, response=null, responseDestinationName=null, responseId=null, payload=null, values={command=deploy, companyId=0, servletContextName=calendar-portlet}} com.liferay.portal.kernel.messaging.MessageListenerException: com.liferay.portal.kernel.dao.orm.ORMException: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update at com.liferay.portal.kernel.messaging.BaseMessageListener.receive(BaseMessageListener.java:32) at com.liferay.portal.kernel.messaging.InvokerMessageListener.receive(InvokerMessageListener.java:72) at com.liferay.portal.kernel.messaging.SerialDestination$1.run(SerialDestination.java:65) at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask._runTask(ThreadPoolExecutor.java:682) at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask.run(ThreadPoolExecutor.java:593) at java.lang.Thread.run(Thread.java:636) Caused by: com.liferay.portal.kernel.dao.orm.ORMException: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update [...] ... 5 more Caused by: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:96) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) [...] ... 62 more Caused by: java.sql.BatchUpdateException: Batch entry 0 insert into CalendarBooking (uuid_, groupId, companyId, userId, userName, createDate, modifiedDate, resourceBlockId, calendarId, calendarResourceId, parentCalendarBookingId, title, description, location, startTime, endTime, allDay, recurrence, firstReminder, firstReminderType, secondReminder, secondReminderType, status, statusByUserId, statusByUserName, statusDate, calendarBookingId) values ('985aac08-6457-484c-becb-2c4964805158', '10545', '10154', '10196', 'Carlos Ces', '2012-06-06 06:26:41.431000 +01:00:00', '2012-06-06 06:26:41.431000 +01:00:00', '0', '550617', '550571', '565201', 'Master Class de improvisación e tango contemporáneo con Jorge Retamoza', '<p>_ <a href="http://www.rrrrrr.es/cultura/-/blogs/master-class-de-improvisacion-e-tango-contemporaneo-con-jorge-retamoza">Master Class con Jorge Retamoza. </a></p>_<p>_ <a href="http://www.rrrrrr.es/cultura/-/blogs/master-class-de-improvisacion-e-tango-contemporaneo-con-jorge-retamoza">Organiza: Escola de Música de rrrrrr. Colabora: Concello de rrrrrr.</a></p>', 'Auditorio de rrrrrr', '1339246800000', '1339257600000', '0', '', '900000', 'email', '300000', 'email', '0', '10196', 'Carlos Ces', '2012-06-06 06:26:41.431000 +01:00:00', '565201') was aborted. Call getNextException to see the cause. [Sanitized] at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2621) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1837) [...] ... 
68 more ``` Then server runs properly for 5 hours with some spared warnings each few minutes: ``` 23:39:09,275 WARN [ajp-apr-8009-exec-20][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.com/rrrrrr25n/notadeprensa on 56_INSTANCE_rk7ADlb9Ui2w 23:43:51,234 WARN [ajp-apr-8009-exec-19][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.org/feiron/normas on 56_INSTANCE_jh6ewEPuvvjb 23:46:59,568 WARN [ajp-apr-8009-exec-5][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.es/recursos-servizossociais on 56_INSTANCE_4eX2GzETiAQb 23:55:51,177 WARN [ajp-apr-8009-exec-5][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.es/cans on 56_INSTANCE_4eX2GzETiAQb 00:00:13,713 WARN [ajp-apr-8009-exec-24][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.es/rexistro on 56_INSTANCE_4eX2GzETiAQb 00:00:25,822 WARN [ajp-apr-8009-exec-24][SecurityPortletContainerWrapper:623] Reject process action for http://www.rrrrrr.es/plenos on 110_INSTANCE_acNEFnslrX8c ``` And then memory problems begin. I post the first errors on log: ``` Exception in thread "http-apr-8080-exec-4" java.lang.OutOfMemoryError: Java heap space 01:00:01,223 ERROR [MemoryQuartzSchedulerEngineInstance_Worker-3][SimpleThreadPool:120] Error while executing the Runnable: java.lang.OutOfMemoryError: Java heap space Exception in thread "fileinstall-/home/rrrrrr/liferay-portal-6.2-ce-ga2/data/osgi/modules" Exception in thread "ajp-apr-8009-AsyncTimeout" Exception in thread "ajp-apr-8009-exec-21" at java.util.LinkedHashMap.createEntry(LinkedHashMap.java:441) at java.util.LinkedHashMap.addEntry(LinkedHashMap.java:423) at java.util.HashMap.put(HashMap.java:402) Exception in thread "ajp-apr-8009-exec-24" at sun.util.resources.OpenListResourceBundle.loadLookup(OpenListResourceBundle.java:134) Exception in thread "MemoryQuartzSchedulerEngineInstance_QuartzSchedulerThread" Exception in thread "ContainerBackgroundProcessor[StandardEngine[Catalina]]" at sun.util.resources.OpenListResourceBundle.loadLookupTablesIfNecessary(OpenListResourceBundle.java:113) Exception in thread "ajp-apr-8009-exec-35" Exception in thread "ajp-apr-8009-exec-36" Exception in thread "ajp-apr-8009-exec-29" Exception in thread "ajp-apr-8009-exec-37" at sun.util.resources.OpenListResourceBundle.handleGetKeys(OpenListResourceBundle.java:91) at sun.util.LocaleServiceProviderPool.getLocalizedObjectImpl(LocaleServiceProviderPool.java:353) Exception in thread "ajp-apr-8009-exec-33" Exception in thread "ajp-apr-8009-exec-34" Exception in thread "ajp-apr-8009-exec-30" at sun.util.LocaleServiceProviderPool.getLocalizedObject(LocaleServiceProviderPool.java:284) Exception in thread "ajp-apr-8009-exec-28" Exception in thread "ajp-apr-8009-exec-31" Exception in thread "http-apr-8080-AsyncTimeout" Exception in thread "http-apr-8080-exec-5" Exception in thread "http-apr-8080-exec-2" Exception in thread "liferay/scheduler_dispatch-3" Exception in thread "ajp-apr-8009-exec-41" at sun.util.TimeZoneNameUtility.retrieveDisplayNames(TimeZoneNameUtility.java:111) at sun.util.TimeZoneNameUtility.retrieveDisplayNames(TimeZoneNameUtility.java:99) at java.util.TimeZone.getDisplayNames(TimeZone.java:418) at java.util.TimeZone.getDisplayName(TimeZone.java:369) at java.text.SimpleDateFormat.subFormat(SimpleDateFormat.java:1110) at java.text.SimpleDateFormat.format(SimpleDateFormat.java:899) at java.text.SimpleDateFormat.format(SimpleDateFormat.java:869) at 
org.apache.tomcat.util.http.ServerCookie.appendCookieValue(ServerCookie.java:254) at org.apache.catalina.connector.Response.generateCookieString(Response.java:1032) at org.apache.catalina.connector.Response.addCookie(Response.java:974) at org.apache.catalina.connector.ResponseFacade.addCookie(ResponseFacade.java:381) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at com.liferay.portal.kernel.servlet.HttpOnlyCookieServletResponse.addCookie(HttpOnlyCookieServletResponse.java:62) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at com.liferay.portal.kernel.servlet.MetaInfoCacheServletResponse.addCookie(MetaInfoCacheServletResponse.java:128) at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at com.liferay.portal.kernel.servlet.MetaInfoCacheServletResponse.addCookie(MetaInfoCacheServletResponse.java:128) 01:02:56,362 ERROR [PersistedQuartzSchedulerEngineInstance_QuartzSchedulerThread][ErrorLogger:120] An error occurred while scanning for the next triggers to fire. org.quartz.JobPersistenceException: Failed to obtain DB connection from data source 'ds': java.lang.OutOfMemoryError: Java heap space [See nested exception: java.lang.OutOfMemoryError: Java heap space] at org.quartz.impl.jdbcjobstore.JobStoreSupport.getConnection(JobStoreSupport.java:771) at org.quartz.impl.jdbcjobstore.JobStoreTX.getNonManagedTXConnection(JobStoreTX.java:71) at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3808) at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTriggers(JobStoreSupport.java:2751) at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:264) Caused by: java.lang.OutOfMemoryError: Java heap space at javax.servlet.http.HttpServletResponseWrapper.addCookie(HttpServletResponseWrapper.java:58) at com.liferay.portal.kernel.servlet.MetaInfoCacheServletResponse.addCookie(MetaInfoCacheServletResponse.java:128) at com.liferay.portal.kernel.util.CookieKeys.addCookie(CookieKeys.java:99) at com.liferay.portal.kernel.util.CookieKeys.addCookie(CookieKeys.java:63) at com.liferay.portal.language.LanguageImpl.updateCookie(LanguageImpl.java:751) java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space 01:03:00,917 ERROR [QuartzScheduler_PersistedQuartzSchedulerEngineInstance-NON_CLUSTERED_MisfireHandler][PortalJobStore:120] MisfireHandler: Error handling misfires: Unexpected runtime exception: Index: 0, Size: 0 org.quartz.JobPersistenceException: Unexpected runtime exception: Index: 0, Size: 0 [See nested exception: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0] at org.quartz.impl.jdbcjobstore.JobStoreSupport.doRecoverMisfires(JobStoreSupport.java:3200) at org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.manage(JobStoreSupport.java:3947) at org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.run(JobStoreSupport.java:3968) Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 at java.util.ArrayList.rangeCheck(ArrayList.java:571) at java.util.ArrayList.get(ArrayList.java:349) at 
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1689) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257) at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:512) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:388) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:273) at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.countMisfiredTriggersInState(StdJDBCDelegate.java:416) at org.quartz.impl.jdbcjobstore.JobStoreSupport.doRecoverMisfires(JobStoreSupport.java:3176) ... 2 more java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space Exception in thread "fileinstall-/home/rrrrrr/liferay-portal-6.2-ce-ga2/data/osgi/portal" java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space java.lang.OutOfMemoryError: Java heap space ``` I have some custom themes and hooks that I now post. I wonder there has to be a memory leak anywhere on them, but i cannot find it. First, I have a custom `Application Display Template`for Blogs: ``` <div class="cr-blog container-fluid"> #foreach ($entry in $entries) <div class="entry-content"> #set ($viewUrl = $currentURL.replaceFirst("\?.*$","") + "/-/blogs/" + $entry.getUrlTitle()) #set($img_ini=$entry.content.indexOf("<img")) #if ($img_ini >= 0) #set($img_end=$entry.content.indexOf(">",$img_ini) + 1) #set($first_img_tag= $entry.content.substring($img_ini, $img_end)) #set($first_img_url=$first_img_tag.replaceFirst("<img.*src=\"","")) #set($first_img_url=$first_img_url.replaceFirst("\".*","")) #end <div class="entry-extract"> #if ($img_ini >= 0) <div class="extract-thumbnail"> <a href="$viewUrl"> <img src="$escapeTool.html($first_img_url)" /> </a> </div> #end <div class="extract-title"> <a href="$viewUrl"><span>$entry.title</span></a> </div> <div class="extract-content"> <a href="$viewUrl"> <span class="extract-date">$dateFormats.getSimpleDateFormat("dd MMMMM yyyy HH:mm", $locale).format($entry.displayDate)</span> #set($plain_content = $entry.content.replaceAll("</?[^>]+/?>", "")) #set($res_length = 240) #if ($res_length > $plain_content.length()) #set($res_length = $plain_content.length()) #end <p> $plain_content.substring(0,$res_length) ... </p> </a> </div> </div> </div> #end </div> ``` Custom hook Blogshome overrides some jsps from blogs\_aggregator. 
view\_entries.jspf ``` <c:choose> <c:when test="<%= results.isEmpty() %>"> <liferay-ui:message key="there-are-no-blogs" /> <br /><br /> </c:when> <c:otherwise> <% if (displayStyle.startsWith("extract-side-events")) { List<BlogsEntry> eventsColumn = new ArrayList<BlogsEntry>(); List<BlogsEntry> mainColumn = new ArrayList<BlogsEntry>(); for (int i=0; i< results.size(); i++) { BlogsEntry entry = (BlogsEntry) results.get(i); if (entry.getDisplayDate().after(new Date())) { searchContainer.setTotal(searchContainer.getTotal() - 1); continue; } boolean isEvent = ((Boolean) entry.getExpandoBridge().getAttribute("evento")); if (isEvent) eventsColumn.add(entry); mainColumn.add(entry); /* change: add ALL to mainColumn; events are duplicated on side */ } /* reorder eventsColumn */ TreeMap<Date, BlogsEntry> next= new TreeMap<Date, BlogsEntry>(); List<BlogsEntry> toRemove = new ArrayList<BlogsEntry>(); for (BlogsEntry entry: eventsColumn) { Date ini = (Date) entry.getExpandoBridge().getAttribute("evento-inicio"); Date end = (Date) entry.getExpandoBridge().getAttribute("evento-remate"); Date now = new Date(); if (ini.before(now) && (end.after(now))) { next.put(end, entry); /* mainColumn.remove(entry); */ } else if (end.before(now)) { toRemove.add(entry); } else { next.put(ini, entry); /* mainColumn.remove(entry); */ } } eventsColumn.removeAll(toRemove); /* third rearrangement: current & next are visible; past are pushed to mainColumn */ /* current ordered by end; next ordered by ini */ ArrayList<BlogsEntry> lNext = new ArrayList<BlogsEntry>(next.values()); %> <div class="home-events"> <div class="events-showdown" id="events-showdown"> <% if (!lNext.isEmpty()) { for (BlogsEntry entry: lNext) { %> <div class="carousel-item"> <%@ include file="/html/portlet/blogs_aggregator/view_entry_extract.jspf" %> </div> <% } } %> </div> <script> YUI().use('aui-carousel', function(Y) { new Y.Carousel( {contentBox: '#events-showdown', height: 320, width: 600, intervalTime: 5 }).render(); }); </script> <%-- <div class="home-events-nav"> <button type="button" class="btn btn-default btn-large" onclick="document.getElementById('events-showdown').style.right =50"> <span class="glyphicon glyphicon-chevron-right"></span> </button> </div> --%> </div> <% %> <div class="home-blogs container-fluid" > <% for (BlogsEntry entry: mainColumn) { %> <%@ include file="/html/portlet/blogs_aggregator/view_entry_extract.jspf" %> <% } %> </div> <% /* original blogs styles */ } else { for (int i = 0; i < results.size(); i++) { BlogsEntry entry = (BlogsEntry)results.get(i); if (entry.getDisplayDate().after(new Date())) { searchContainer.setTotal(searchContainer.getTotal() - 1); continue; } %> <%@ include file="/html/portlet/blogs_aggregator/view_entry_content.jspf" %> <% } } %> </c:otherwise> </c:choose> <c:if test="<%= enableRssSubscription %>"> <% StringBundler rssURLParams = new StringBundler(); if (selectionMethod.equals("users")) { if (organizationId > 0) { rssURLParams.append("&organizationId="); rssURLParams.append(organizationId); } else { rssURLParams.append("&companyId="); rssURLParams.append(company.getCompanyId()); } } else { rssURLParams.append("&groupId="); rssURLParams.append(themeDisplay.getScopeGroupId()); } %> <span class="button"> <liferay-ui:icon image="rss" label="<%= true %>" method="get" target="_blank" url='<%= themeDisplay.getPathMain() + "/blogs_aggregator/rss?p_l_id=" + plid + rssURLParams %>' /> </span> </c:if> <c:if test="<%= !results.isEmpty() %>"> <div class="search-container"> <liferay-ui:search-paginator 
searchContainer="<%= searchContainer %>" /> </div> </c:if> ``` view\_entry\_extract.jspf ``` <c:if test="<%= BlogsEntryPermission.contains(permissionChecker, entry, ActionKeys.VIEW) %>"> <div class="entry-content"> <% PortletURL showBlogEntryURL = renderResponse.createRenderURL(); showBlogEntryURL.setParameter("struts_action", "/blogs_aggregator/view_entry"); showBlogEntryURL.setParameter("entryId", String.valueOf(entry.getEntryId())); StringBundler sb = new StringBundler(8); StringBundler ab = new StringBundler(8); ab.append(themeDisplay.getURLPortal()); ab.append(GroupLocalServiceUtil.getGroup(entry.getGroupId()).getFriendlyURL()); ab.append("/-/blogs/"); ab.append(entry.getUrlTitle()); String viewEntryURL = ab.toString(); sb.append("&showAllEntries=1"); String viewAllEntriesURL = sb.toString(); User user2 = UserLocalServiceUtil.getUserById(entry.getUserId()); %> <div class="entry-header"> <c:if test='<%= (Boolean) entry.getExpandoBridge().getAttribute("evento")%>'> <div class="event-schedule"> <% Calendar iniDate = com.liferay.portal.kernel.util.CalendarFactoryUtil.getCalendar(timeZone); Calendar endDate = com.liferay.portal.kernel.util.CalendarFactoryUtil.getCalendar(timeZone); iniDate.setTime(((Date) entry.getExpandoBridge().getAttribute("evento-inicio"))); endDate.setTime(((Date) entry.getExpandoBridge().getAttribute("evento-remate"))); boolean sameDay = false; if ((iniDate.get(Calendar.DAY_OF_YEAR) == endDate.get(Calendar.DAY_OF_YEAR)) && (iniDate.get(Calendar.YEAR) == endDate.get(Calendar.YEAR))) sameDay = true; String diaDaSemana = (new SimpleDateFormat("EEEE", locale)).format(iniDate.getTime()); String numeroDeDia = (new SimpleDateFormat("d", locale)).format(iniDate.getTime()); String mes = (new SimpleDateFormat("MMMM", locale)).format(iniDate.getTime()); // String hora = (new SimpleDateFormat("HH:mm", locale)).format(iniDate.getTime()) + "h"; String hora = (iniDate.get(Calendar.HOUR_OF_DAY) + new SimpleDateFormat(":mm", locale).format(iniDate.getTime()) + "h"); String numeroDeDiaFin = StringPool.BLANK; String mesFin = StringPool.BLANK; if (!sameDay) { numeroDeDiaFin = (new SimpleDateFormat("d", locale)).format(endDate.getTime()); mesFin = (new SimpleDateFormat("MMMM", locale)).format(endDate.getTime()); } %> <% if (sameDay) { %> <div class='event-date'> <%= numeroDeDia %> </div> <div class='event-data'> <div class="event-month"><%= mes %></div> <div class="event-day"><%= diaDaSemana %></div> <div class="event-time"><%= hora %></div> </div> <% } else { %> <div class='event-date'> <%= numeroDeDia %> </div> <div class='event-data'> <div class="event-month"><%= mes %></div> <div class="event-day"><%= diaDaSemana %></div> <div class="event-time"><liferay-ui:message key="rrrrrr.events.until" /> <%= numeroDeDiaFin %> <liferay-ui:message key="rrrrrr.events.of" /> <%= mesFin %></div> </div> <% } %> </div> </c:if> </div> <div class="entry-extract"> <% String resumeText = StringPool.BLANK; String resumeImage = StringPool.BLANK; int extLength = 240; if (entry.isSmallImage()) { if (Validator.isNotNull(entry.getSmallImageURL())) resumeImage = entry.getSmallImageURL(); else resumeImage = themeDisplay.getPathImage() + "/journal/article?img_id=" + entry.getSmallImageId() + "&t=" + WebServerServletTokenUtil.getToken(entry.getSmallImageId()) ; resumeImage.trim(); } /* if no small image, extract first */ if ((resumeImage == null) || (resumeImage.isEmpty())) { java.util.regex.Pattern p = java.util.regex.Pattern.compile("src=['\"]([^'\"]+)['\"]"); java.util.regex.Matcher m = 
p.matcher(entry.getContent()); if (m.find()) resumeImage = m.group().substring(5, m.group().length() -1); } resumeText = HtmlUtil.stripHtml(entry.getDescription()); resumeText.trim(); /* if no resume description, extract text */ if ((resumeText == null) || (resumeText.isEmpty())) { resumeText = HtmlUtil.escape(StringUtil.shorten(HtmlUtil.extractText(entry.getContent()), extLength)); } %> <div class="extract-thumbnail"> <a href="<%= viewEntryURL %>" style="background-image: url('<%= HtmlUtil.escape(resumeImage) %>')"> <img class="asset-small-image" src="<%= HtmlUtil.escape(resumeImage) %>"/> </a> </div> <div class="extract-title"> <a href="<%= viewEntryURL %>"><%= HtmlUtil.escape(entry.getTitle()) %></a> </div> <div class="extract-content"> <a href="<%= viewEntryURL %>"> <span class="extract-scope"><%= GroupLocalServiceUtil.getGroup(entry.getGroupId()).getDescriptiveName() %></span> <span class="extract-date"><%= dateFormatDateTime.format(entry.getDisplayDate()) %></span> <span><%= " " + resumeText %></span> </a> </div> </div> </div> ``` I am unable to guess any memory leak there, but there has to be! I've spent few weeks trying to find a bug (deactivating hooks to see if errors persisted) but couldn't come to a clue. Does anybody see something potentially dangerous in my code? Wich other way can I trace Java memory usage to fetch for leaks?
2014/07/29
[ "https://Stackoverflow.com/questions/25012031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3837065/" ]
As it seems the `open` method doesn't update the `position` of the `infowindow`, you'll need to do it on your own (e.g. by binding the position of the infowindow to the position of the marker):

```
infowindow.unbind('position');

if (infowindow.getPosition() != this.getPosition()) {
    infowindow.bindTo('position', this, 'position');
    infowindow.open(map, this);
} else {
    infowindow.close();
    infowindow.setPosition(null);
}
```

Demo: [**http://jsfiddle.net/aBg3N/**](http://jsfiddle.net/aBg3N/)

Another solution (but this solution relies on an undocumented property, `anchor`):

```
if (infowindow.get('anchor') != this) {
    infowindow.open(map, this);
} else {
    infowindow.close();
}
```

Demo: [**http://jsfiddle.net/AZC3z/**](http://jsfiddle.net/AZC3z/)

Both solutions will work with draggable markers and also when you use the same infowindow for multiple markers.
I am not sure about `infowindow.getPosition()`, but you can try this code if you want to check whether the infowindow is open or not. JS:

```
function check(infoWindow) {
    var map = infoWindow.getMap();
    return (map !== null && typeof map !== "undefined");
}
```

Pass `infowindow` into the function and it will return `true` or `false` based on the visibility of the `infowindow`. Demo: <http://jsfiddle.net/lotusgodkk/x8dSP/3699/>
5,102
17,066,347
I have this issue with Titanium Studio. I can't compile my project for Android. I try to Run or Debug to project, but I've got this message: ``` Titanium Command-Line Interface, CLI version 3.1.0, Titanium SDK version 3.1.0.GA Copyright (c) 2012-2013, Appcelerator, Inc. All Rights Reserved. [INFO] : Running emulator process: python "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\3.1.0.GA\android\builder.py" "emulator" "MyApp" "E:\Developpement\Mobile\SDKs\Android" "E:\Developpement\Mobile\Appcelerator\MyApp" "com.developper.myapp" "2" "WVGA854" "armeabi" [INFO] : Running build process: python "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\3.1.0.GA\android\builder.py" "simulator" "MyApp" "E:\Developpement\Mobile\SDKs\Android" "E:\Developpement\Mobile\Appcelerator\MyApp" "com.developper.myapp" "2" "WVGA854" "/127.0.0.1:49314" [INFO] logfile = E:\Developpement\Mobile\Appcelerator\MyApp\build.log [INFO] Building MyApp for Android ... one moment [INFO] Titanium SDK version: 3.1.0 (04/15/13 18:45 57634ef) [ERROR] : Emulator process exited with code 1 [INFO] : Project built successfully in 5s 421ms [INFO] : Emulator not running, exiting... ``` The emulator is not starting and no APK file is built in the bin folder. I have the Android 2.2 and 4.2.2 SDK installed. I tried everythings (clean project, even uninstall and reinstall Titanium studio). I did this project with Titanium 2.1.4. Now I'm using 3.1.0 and I got this error message. In tiapp.xml, if I choose to run the project with the Titanium 2.1.4 SDK I got these messages : ``` [INFO] logfile = E:\Developpement\Mobile\Appcelerator\MyApp\build.log [INFO] Launching Android emulator...one moment [INFO] Creating new Android Virtual Device (2 WVGA854) [ERROR] Exception occured while building Android project: [ERROR] Traceback (most recent call last): [ERROR] File "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\2.1.4.GA\android\builder.py", line 2282, in <module> [ERROR] s.run_emulator(avd_id, avd_skin, avd_name, avd_abi, add_args) [ERROR] File "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\2.1.4.GA\android\builder.py", line 523, in run_emulator [ERROR] avd_name = self.create_avd(avd_id, avd_skin, avd_abi) [ERROR] File "C:\Users\Dev\AppData\Roaming\Titanium\mobilesdk\win32\2.1.4.GA\android\builder.py", line 485, in create_avd [ERROR] inifilec = open(inifile,'r').read() [ERROR] IOError: [Errno 2] No such file or directory: 'C:\\Users\\Dev\\.android\\avd\\titanium_2_WVGA854.avd\\config.ini' ``` And then : ``` [INFO] logfile = E:\Developpement\Mobile\Appcelerator\MyApp\build.log [INFO] Building MyAppfor Android ... one moment [INFO] Titanium SDK version: 2.1.4 (11/09/12 12:46 51f2c64) [ERROR] Application Installer abnormal process termination. Process exit value was 1 [ERROR] Timed out waiting for emulator to be ready, you may need to close the emulator and try again ``` No emulators are running and no APKs are built. If anyone has an idea... I'm using Win7 64bits. Maybe I missed somthing during the configuration. Thank you for your help.
2013/06/12
[ "https://Stackoverflow.com/questions/17066347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/486286/" ]
If this happens with the Kitchen Sink demo, the fix is to go into the Android SDK Manager and install "Android 3.0 (API 11)". Make sure the app uses emulator "Google APIs (Android 2.3.3)" and "WVGA854". I assume there's a Titanium bug because you have to install a higher API level (3.0) than is actually used (2.3.3). Using exactly these settings, Kitchen Sink works as expected.
Did you read the [System Requirements](http://docs.appcelerator.com/titanium/latest/#!/guide/Quick_Start-section-29004949_QuickStart-SystemRequirements)? From the documentation: > For Windows, the 32-bit version of the Java JDK is required regardless of whether Titanium is running on a 32-bit or 64-bit system. Try to install an additional 32-bit version of Java (without removing the 64-bit one) and set the system variable. Maybe this will help you.
5,104
49,870,594
I am trying to install a few Python packages from within a Python script, and I am using `pip.main(install)` for that. Below is the code snippet:

```
try:
    import requests
except:
    import pip
    pip.main(['install', '-q', 'requests==2.0.1','PyYAML==3.11'])
    import requests
```

I have tried importing main from pip._internal and using it instead of pip.main(), but it did not help. I am on `pip version 9.0.1` and `python 2.7`.
2018/04/17
[ "https://Stackoverflow.com/questions/49870594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4943621/" ]
I had the same issue and just running the below command solved it for me: ``` easy_install pip ```
The short answer is don't do this. Use `setup.py` or a straight import statement. [Here is why this doesn't work with pip and how to get around it if necessary.](https://pip.pypa.io/en/stable/user_guide/#using-pip-from-your-program) `pip` affects the whole environment. Depending on who is running this and why, they may or may not want to install `requests` in the environment they're running your script in. It could be a nasty surprise that running your script affects their Python environment. Installing it as a package (using `python setup.py` or `pip install`) is a different matter. There are well-established ways to install other packages using `requirements.txt` and `setup.py`. It is also expected that installing a package will install its dependencies. [You can read more in the python.org packaging tutorials](https://packaging.python.org/tutorials/distributing-packages/) If your script has dependencies but people don't need to install it, you should tell them in the `README.rst` and/or `requirements.txt`, **or** simply include the import statement, and when they get the error, they'll know what to do. Let them control which environment installs which packages.
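If you really do need to install at runtime, the pip user guide linked above recommends invoking pip in a subprocess rather than importing it. A minimal sketch:

```python
# Minimal sketch of the subprocess approach recommended by pip's docs:
# run pip as a child process of the current interpreter, then import.
import subprocess
import sys

subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "requests==2.0.1"]
)
import requests
```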
5,107
3,215,455
Is it possible to use multiple languages alongside Ruby? For example, I have my application code in Ruby on Rails. I would like to calculate recommendations, and I would like to use Python for that. So essentially, the Python code would get the data from the DB, calculate all the recommendations, and update the tables. Is this possible, and what do you think about its advantages/disadvantages? Thanks
2010/07/09
[ "https://Stackoverflow.com/questions/3215455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/88898/" ]
If you are offloading work to an external process, you may want to make this a web service (AJAX, perhaps) so that you have a consistent interface. Otherwise, you could always execute the Python script in a subshell through Ruby, using stdin/stdout/argv, but this can get ugly quickly.
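For illustration, the Python side of such a subshell call might be a small script like the following; the file name, function, and JSON shape here are all hypothetical placeholders:

```
# recommend.py
import json
import sys

def recommend(user_id):
    # placeholder for the real recommendation logic and DB queries
    return [{"user": user_id, "item": "example", "score": 0.9}]

if __name__ == "__main__":
    user_id = sys.argv[1] if len(sys.argv) > 1 else "unknown"
    print(json.dumps(recommend(user_id)))  # Ruby reads this from stdout
```

Ruby could then capture the output with backticks, e.g. ``result = `python recommend.py 42` ``, and parse the JSON.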
I would use the `system` command, like so: ``` system("python myscript.py") ```
5,116
48,160,819
I want to write a Python program to process csv sheets; the total numbers of rows and columns are different each time. One of the things I want to do is to delete the columns containing a specific string. ``` import csv input = open("1.csv","rb") reader = csv.reader(input) output = open("2.csv","wb") writer = csv.writer(output) index = -1 for row in reader: for item in row: if item == str('string'): index = row.index(item) print(index) ... ``` Update: I rewrote the code thanks to [tuan-huynh](https://stackoverflow.com/users/2305843/tuan-huynh/), but this code only works for the first column containing "string".
2018/01/09
[ "https://Stackoverflow.com/questions/48160819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9190725/" ]
You can find the indices of the columns that contain the value and then drop those columns when writing the rows back out. For example: ``` import csv target = 'string' # the value that marks a column for deletion with open("SampleCSVFile_2kb.csv", "rb") as source: rows = list(csv.reader(source)) # collect every column index in which the target value appears drop = set() for r in rows: for i, item in enumerate(r): if item == target: drop.add(i) with open("result.csv", "wb") as result: wtr = csv.writer(result) for r in rows: wtr.writerow([item for i, item in enumerate(r) if i not in drop]) ```
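Alternatively, if pandas is available, dropping every column that contains the marker value takes only a few lines; a sketch assuming the CSV has no header row:

```
import pandas as pd

df = pd.read_csv('1.csv', header=None)
# keep only the columns in which no cell equals 'string'
keep = [c for c in df.columns if not (df[c].astype(str) == 'string').any()]
df[keep].to_csv('2.csv', index=False, header=False)
```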
Assume the csv looks like ``` name,color,price apple,red,10 banana,yellow,5 ``` Reading it line by line: ``` import csv # file_path is the path to your csv file with open(file_path, "r") as f: reader = csv.reader(f) for line in reader: print(line[0], line[1], line[2]) ``` The printout would be ``` name color price apple red 10 banana yellow 5 ```
5,120
48,892,348
I have a for loop in Python, and at the end of each step I want the output to be added as a new column in a csv file. The output I have is a 40x1 array. So if the for loop consists of 100 steps, I want to end up with a csv file with 100 columns and 40 rows. What I have now, at the end of each time step, is the following: ``` with open( 'File name.csv','w') as output: writer=csv.writer(output, lineterminator='\n') for val in myvector: writer.writerow([val]) ``` However, this creates a different csv file with 40 rows and 1 column at each step. How can I add them all as different columns in the same csv file? This will save me a lot of computation time, so any help would be very much appreciated.
2018/02/20
[ "https://Stackoverflow.com/questions/48892348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6756920/" ]
The insight would be that when `serverUrl` is truthy, you don't need the `switch` at all - you always return the same value that was switched upon. So don't do the test in every switch `case`, but do it once before that: ``` function checkField(str: string) : string { if (serverUrl === 'abc') return str.toLowerCase(); else switch (str.toLowerCase()) { case 'code': return 'CODE'; case 'webid': return 'Webid'; case 'pkid': return 'PkId'; case 'barcode': return 'Barcode'; case 'price': return 'Price'; case 'bestbefore': return 'BestBefore'; case 'produce': return 'Produce'; case 'sales': return 'Sales'; case 'marketid': return "MarketId"; case 'regdate': return "Regdate"; //and more fields default: return str; } } ``` Instead of the `switch` statement, you can also use an object literal or a [`Map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) as a lookup table.
Borrowing a bit from @Bergi's answer, I would create a mapping object to make it a little cleaner. E.g.: ``` function checkField(str: string) : string { //create a mapping var myMapping = { 'code' : 'CODE', 'webid' : 'Webid', 'pkid' : 'PkId', 'barcode': 'Barcode', //and more fields } if (serverUrl === 'abc') { return str.toLowerCase(); } else { return myMapping[str.toLowerCase()] || str; } } ``` This keeps the logic a little more separate from the mapping, so it feels cleaner to me. But that's a personal preference.
5,121
28,334,966
I am trying to open an Excel file (.xls) using xlrd. This is a summary of the code I am using: ``` import xlrd workbook = xlrd.open_workbook('thefile.xls') ``` This works for most files, but fails for files I get from a specific organization. The error I get when I try to open Excel files from this organization follows. ``` Traceback (most recent call last): File "<console>", line 1, in <module> File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/__init__.py", line 435, in open_workbook ragged_rows=ragged_rows, File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/book.py", line 116, in open_workbook_xls bk.parse_globals() File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/book.py", line 1180, in parse_globals self.handle_writeaccess(data) File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/book.py", line 1145, in handle_writeaccess strg = unpack_unicode(data, 0, lenlen=2) File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/biffh.py", line 303, in unpack_unicode strg = unicode(rawstrg, 'utf_16_le') File "/app/.heroku/python/lib/python2.7/encodings/utf_16_le.py", line 16, in decode return codecs.utf_16_le_decode(input, errors, True) UnicodeDecodeError: 'utf16' codec can't decode byte 0x40 in position 104: truncated data ``` This looks as if xlrd is trying to open an Excel file encoded in something other than UTF-16. How can I avoid this error? Is the file being written in a flawed way, or is there just a specific character that is causing the problem? If I open and re-save the Excel file, xlrd opens the file without a problem. I have tried opening the workbook with different encoding overrides but this doesn't work either. The file I am trying to open is available here: <https://dl.dropboxusercontent.com/u/6779408/Stackoverflow/AEPUsageHistoryDetail_RequestID_00183816.xls> Issue reported here: <https://github.com/python-excel/xlrd/issues/128>
2015/02/05
[ "https://Stackoverflow.com/questions/28334966", "https://Stackoverflow.com", "https://Stackoverflow.com/users/382374/" ]
What are they using to generate that file? They are using some Java Excel API (see below, [link here](http://jexcelapi.sourceforge.net/)), probably on an IBM mainframe or similar. From the stack trace, the writeaccess information can't be decoded into Unicode because of the @ characters. For more information on the writeaccess information in the XLS file format, see [5.112 WRITEACCESS](https://www.openoffice.org/sc/excelfileformat.pdf) or [Page 277](http://www.digitalpreservation.gov/formats/digformatspecs/Excel97-2007BinaryFileFormat%28xls%29Specification.pdf). This field contains the username of the user that has saved the file. ``` import xlrd xlrd.dump('thefile.xls') ``` Running xlrd.dump on the original file gives ``` 36: 005c WRITEACCESS len = 0070 (112) 40: d1 81 a5 81 40 c5 a7 83 85 93 40 c1 d7 c9 40 40 ????@?????@???@@ 56: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 72: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 88: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 104: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 120: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 136: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ ``` After resaving it with Excel, or in my case LibreOffice Calc, the write access information is overwritten with something like ``` 36: 005c WRITEACCESS len = 0070 (112) 40: 04 00 00 43 61 6c 63 20 20 20 20 20 20 20 20 20 ?~~Calc 56: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 72: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 88: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 104: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 120: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 136: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 ``` Based on the spaces being encoded as 40, I believe the encoding is EBCDIC, and when we convert `d1 81 a5 81 40 c5 a7 83 85 93 40 c1 d7 c9 40 40` to EBCDIC we get `Java Excel API`. So yes, the file is being written in a flawed way: in BIFF8 and higher, this field should be a Unicode string, while in BIFF3 to BIFF5 it should be a byte string in the encoding given by the CODEPAGE record, which is ``` 152: 0042 CODEPAGE len = 0002 (2) 156: 12 52 ?R ``` 1252 is Windows CP-1252 (Latin I) (BIFF4-BIFF5), which is not [EBCDIC_037](http://en.wikipedia.org/wiki/EBCDIC_037). The fact that xlrd tried to use Unicode means that it determined the file version to be BIFF8. In this case, you have two options 1. Fix the file before opening it with xlrd. You could dump to a file that isn't standard out to check, and if it is the case, overwrite the writeaccess information with xlutils.save or another library. 2. Patch [xlrd](https://github.com/python-excel/xlrd/blob/9429b4c1cd479830b1ecb08a5e7639244ef8dbcf/xlrd/book.py#L1136-L1148) to handle your special case: in `handle_writeaccess`, add a try block and set strg to an empty string on unpack_unicode failure.
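As a quick sanity check, those dumped bytes decode cleanly with Python's built-in cp037 (EBCDIC) codec; a small sketch:

```
import binascii

raw = binascii.unhexlify('d181a58140c5a783859340c1d7c94040')
print(raw.decode('cp037'))  # -> 'Java Excel API  '
```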
The following snippet ``` def handle_writeaccess(self, data): DEBUG = 0 if self.biff_version < 80: if not self.encoding: self.raw_user_name = True self.user_name = data return strg = unpack_string(data, 0, self.encoding, lenlen=1) else: try: strg = unpack_unicode(data, 0, lenlen=2) except: strg = "" if DEBUG: fprintf(self.logfile, "WRITEACCESS: %d bytes; raw=%s %r\n", len(data), self.raw_user_name, strg) strg = strg.rstrip() self.user_name = strg ``` combined with ``` workbook = xlrd.open_workbook('thefile.xls', encoding_override="cp1252") ``` seems to open the file successfully. Without the encoding override it complains `ERROR *** codepage 21010 -> encoding 'unknown_codepage_21010' -> LookupError: unknown encoding: unknown_codepage_21010`
This worked for me. ``` import xlrd my_xls = xlrd.open_workbook('//myshareddrive/something/test.xls',encoding_override="gb2312") ```
5,123
21,867,596
I'm a little new to web parsing in Python. I am using Beautiful Soup. I would like to create a list by parsing strings from a webpage. I've looked around and can't seem to find the right answer. Does anyone know how to create a list of strings from a web page? Any help is appreciated. My code is something like this: ``` from BeautifulSoup import BeautifulSoup import urllib2 url="http://www.any_url.com" page=urllib2.urlopen(url) soup = BeautifulSoup(page.read()) #The data I need is coming from HTML tag of td page_find=soup.findAll('td') for page_data in page_find: print page_data.string #I tried to create my list here page_List = [page_data.string] print page_List ```
2014/02/18
[ "https://Stackoverflow.com/questions/21867596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2278570/" ]
Having difficulty understanding what you are trying to achieve... If you want all values of `page_data.string` in `page_List`, then your code should look like this: ``` page_List = [] for page_data in page_find: page_List.append(page_data.string) ``` Or using a list comprehension: ``` page_List = [page_data.string for page_data in page_find] ``` The problem with your original code is that you create the list using the text from the last `td` element only (i.e. outside of the loop which processes each `td` element).
Here it is modified to fetch the web page as a string: ``` import requests from lxml import html # some_path is the URL of the page to fetch the_web_page_as_a_string = requests.get(some_path).content myTree = html.fromstring(the_web_page_as_a_string) td_list = [e for e in myTree.iter() if e.tag == 'td'] text_list = [] for td_e in td_list: text_list.append(td_e.text_content()) ```
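If XPath is an option, the same extraction is a one-liner on the parsed tree:

```
text_list = [td.text_content() for td in myTree.xpath('//td')]
```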
5,124
16,815,170
So this is probably a very basic question about output formatting in Python using '.format' and since I'm a beginner, I can't figure this out for the life of me. I've tried to be as detailed as possible, just to make sure that there's no confusion. Let me give you an example so that you can better understand my dilemma. Consider the following program ``` list = (['wer', 'werwe', 'werwe' ,'wer we']) # list[0], list[1], list[2], list[3] list.append(['vbcv', 'cvnc', 'bnfhn', 'mjyh']) # list[4] list.append(['yth', 'rnhn', 'mjyu', 'mujym']) # list[5] list.append(['cxzz', 'bncz', 'nhrt', 'qweq']) # list[6] first = 'bill' last = 'gates' print ('{:10} {:10} {:10} {:10}'.format(first,last,list[5], list[6])) ``` Understandably that would give the output: ``` bill gates ['yth', 'rnhn', 'mjyu', 'mujym'] ['cxzz', 'bncz', 'nhrt', 'qweq'] ``` So here's my real question. I was doing this practice problem from the book and I don't understand the answer. The program below will give you a good idea of what kind of output we are going for: ``` students = [] students.append(['DeMoines', 'Jim', 'Sophomore', 3.45]) #students[0] students.append(['Pierre', 'Sophie', 'Sophomore', 4.0]) #students[1] students.append(['Columbus', 'Maria', 'Senior', 2.5]) #students[2] students.append(['Phoenix', 'River', 'Junior', 2.45]) #students[3] students.append(['Olympis', 'Edgar', 'Junior', 3.99]) #students[4] students.append(['van','john', 'junior', 3.56]) #students[5] def Grades(students): print ('Last First Standing GPA') for students in students: print('{0:10} {1:10} {2:10} {3:8.2f}'.format(students[0],students[1],students[2],students[3])) ``` The output we're trying to get is a kind of a table that gives all the stats for all the students - ``` Last First Standing GPA DeMoines Jim Sophomore 3.45 Pierre Sophie Sophomore 4.00 Columbus Maria Senior 2.50 Phoenix River Junior 2.45 Olympis Edgar Junior 3.99 van john junior 3.56 ``` So here's what I don't understand. We are working with basically the same thing in the two examples, i.e. a list inside a list. For my first example, the print statement was: ``` print('{:10} {:10} {:10} {:10}'.format(first, last, list[5], list[6])) ``` where `list[5]` and `list[6]` are lists themselves and they are printed in their entirety, as you can see from the output. **But that doesn't happen in the book problem**. There, the print statement says ``` print('{0:10} {1:10} {2:10} {3:8.2f}'.format(students[0], students[1], students[2], students[3])) ``` As you can see from the table output, here `students[0]` refers only to **'DeMoines'**. But if you just run the statement `students[0]` in the Python interpreter, it gives the whole sublist, as it should. ``` ['DeMoines', 'Jim', 'Sophomore', 3.45] ``` So, basically, I've got two questions: why does `students[0]` have two different meanings, and why does `students[0]` not print the whole list like `list[5]` and `list[6]` did?
2013/05/29
[ "https://Stackoverflow.com/questions/16815170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2396553/" ]
Look at the *for loop*: ``` for students in students: # ^^^^^^^^ ``` So, `students` (inside the loop) does not actually refer to the **list of lists**. And `students[0]` refers to the **first element** of the current **element** of the **list of lists**, as expected. I suggest renaming one of the two: for example, replace the function argument `students` with `all_students` or something like that.
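With the rename applied, the function from the question might read:

```
def Grades(all_students):
    print('Last       First      Standing   GPA')
    for student in all_students:
        print('{0:10} {1:10} {2:10} {3:8.2f}'.format(
            student[0], student[1], student[2], student[3]))
```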
Try renaming the variable `list` into something that's not a reserved word or built-in function or type. What's confusing to beginners, and it happens to everyone sooner or later, is what happens if you redefine or use in unintended ways a reserved word or a builtin. If you do ``` list = [1, 2, 3, 4] ``` you re-bind the name `list` so it no longer points to the builtin `list` data type but to the actual list `[1, 2, 3, 4]` in the current scope. That is almost always not what you intend to do. Using a variable `dir` is a similar pitfall. Also, do not use additional parentheses `()` around the square brackets of the list assignment. Something like `words = ['wer', 'werwe', 'werwe' ,'wer we']` suffices. Generally, consider which names you choose for a variable. `students` is descriptive, helpful commentary; `list` is not. Also, if `list` currently holds a list, your algorithm might later be changed so that the variable holds a `set` or some other container type. Then a type-based variable name would even be misleading.
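A short demonstration of the pitfall, runnable as-is:

```
list = [1, 2, 3]        # rebinds the name 'list' in this scope
try:
    list((4, 5))        # the built-in type is no longer reachable here
except TypeError as e:
    print(e)            # 'list' object is not callable
del list                # remove the shadowing binding
print(list((4, 5)))     # [4, 5] -- the built-in works again
```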
5,127
15,433,372
How to perform **stepwise regression** in **python**? There are methods for OLS in SciPy, but I am not able to do stepwise. Any help in this regard would be appreciated. Thanks. Edit: I am trying to build a linear regression model. I have 5 independent variables and, using forward stepwise regression, I aim to select variables such that my model has the lowest p-value. The following link explains the objective: <https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CEAQFjAD&url=http%3A%2F%2Fbusiness.fullerton.edu%2Fisds%2Fjlawrence%2FStat-On-Line%2FExcel%2520Notes%2FExcel%2520Notes%2520-%2520STEPWISE%2520REGRESSION.doc&ei=YjKsUZzXHoPwrQfGs4GQCg&usg=AFQjCNGDaQ7qRhyBaQCmLeO4OD2RVkUhzw&bvm=bv.47244034,d.bmk> Thanks again.
2013/03/15
[ "https://Stackoverflow.com/questions/15433372", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2174063/" ]
You may try mlxtend, which has various selection methods. ``` from mlxtend.feature_selection import SequentialFeatureSelector as sfs from sklearn.linear_model import LinearRegression clf = LinearRegression() # Build step forward feature selection sfs1 = sfs(clf, k_features=10, forward=True, floating=False, scoring='r2', cv=5) # Perform SFFS sfs1 = sfs1.fit(X_train, y_train) ```
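After fitting, the chosen subset can be inspected; assuming the attribute names from the mlxtend documentation:

```
print(sfs1.k_feature_idx_)  # indices of the selected features
print(sfs1.k_score_)        # cross-validated r2 of that subset
```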
You can make forward-backward selection based on the `statsmodels.api.OLS` model, as shown [in this answer](https://datascience.stackexchange.com/a/24447/24162). However, [this answer](https://stats.stackexchange.com/questions/20836/algorithms-for-automatic-model-selection/20856#20856) describes why you should not use stepwise selection for econometric models in the first place.
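To give a flavor of the linked approach, here is a bare-bones forward-selection loop on p-values with statsmodels. This is a sketch assuming pandas inputs (`X` a DataFrame of candidate predictors, `y` the response), not a drop-in library routine, and all the statistical caveats from the second link apply:

```
import statsmodels.api as sm

def forward_select(X, y, alpha=0.05):
    chosen = []
    remaining = list(X.columns)
    while remaining:
        # p-value of each candidate when added to the current model
        pvals = {}
        for col in remaining:
            model = sm.OLS(y, sm.add_constant(X[chosen + [col]])).fit()
            pvals[col] = model.pvalues[col]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break  # no remaining candidate is significant
        chosen.append(best)
        remaining.remove(best)
    return chosen
```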
5,128
17,332,350
For some reason I can't log into the same account on my home computer as my work computer. I was able to get Bo10's code to work, but not abernert's, and I would really like to understand why. Here is my update to abernert's code: ``` import csv import sys import json import urllib2 j = urllib2.urlopen('https://citibikenyc.com/stations/json') js = json.load(j) citi = js['stationBeanList'] columns = ('stationName', 'totalDocks', 'availableDocks', 'latitude', 'longitude', 'availableBikes') stations = (operator.itemgetter(columns)(station) for station in citi) with open('output.csv', 'w') as csv_file: csv_writer = csv.writer(csv_file) csv_file.writerows(stations) ``` I thought adding the line `csv_writer = csv.writer(csv_file)` would fix the "object has no attribute" error, but I am still getting it. This is the actual error: ``` Andrews-MacBook:coding Andrew$ python citibike1.py Traceback (most recent call last): File "citibike1.py", line 17, in <module> csv_file.writerows(stations) AttributeError: 'file' object has no attribute 'writerows' ``` --- So now I have changed the code to the following, and the output is just repeating the names of the columns 322 times. I changed it on line 14 because I was getting this error: ``` Traceback (most recent call last): File "citibike1.py", line 17, in <module> csv_writer.writerows(stations) File "citibike1.py", line 13, in <genexpr> stations = (operator.itemgetter(columns)(station) for station in citi) NameError: global name 'operator' is not defined ``` ``` import csv import sys import json import urllib2 import operator j = urllib2.urlopen('https://citibikenyc.com/stations/json') js = json.load(j) citi = js['stationBeanList'] columns = ('stationName', 'totalDocks', 'availableDocks', 'latitude', 'longitude', 'availableBikes') stations = (operator.itemgetter(0,1,2,3,4,5)(columns) for station in citi) with open('output.csv', 'w') as csv_file: csv_writer = csv.writer(csv_file) csv_writer.writerows(stations) ```
2013/06/26
[ "https://Stackoverflow.com/questions/17332350", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1887261/" ]
The problem is that you're not using the `csv` module, you're using the `pickle` module, and this is what `pickle` output looks like. To fix it: ``` csvfile = open('output.csv', 'w') csv.writer(csvfile).writerows(stationList) csvfile.close() ``` --- Note that you're going out of your way to build a transposed table, with 6 lists of 322 items each, not 322 lists of 6 items. So, you're going to get 6 rows of 322 columns each. If you want the opposite, just don't do that: ``` stationList = [] for f in citi: stationList.append((f['stationName'], f['totalDocks'], f['availableDocks'], f['latitude'], f['longitude'], f['availableBikes'])) ``` Or, more briefly: ``` stationlist = map(operator.itemgetter('stationName', 'totalDocks', 'availableDocks', 'latitude', 'longitude', 'availableBikes'), citi) ``` --- However, instead of building up a huge list, you may want to consider writing the rows one at a time. You can do that by putting `writer.writerow` calls into the middle of the for loop. But you can also do that just by using `itertools.imap` or a generator expression instead of `map` or a list comprehension. That will make `stationlist` into an iterable that creates new values as needed, instead of creating them all at once. --- Putting that all together, here's how I'd write your program: ``` import csv import operator import sys import json import urllib2 j = urllib2.urlopen('https://citibikenyc.com/stations/json') js = json.load(j) citi = js['stationBeanList'] columns = ('stationName', 'totalDocks', 'availableDocks', 'latitude', 'longitude', 'availableBikes') stations = (operator.itemgetter(*columns)(station) for station in citi) with open('output.csv', 'w') as csv_file: csv.writer(csv_file).writerows(stations) ```
As abarnert mentions, you're not actually using the `csv` module that you've imported. Also, your logic for storing the columns might actually be transposed. I think you might want to do this instead (*edited to fix the tuple/list confusion*): ``` import csv import json import urllib2 j = urllib2.urlopen('https://citibikenyc.com/stations/json') js = json.load(j) citi = js['stationBeanList'] columns = ["stationName", "totalDocks", "availableDocks", "latitude", "longitude", "availableBikes"] station_list = [[f[s] for s in columns] for f in citi] with open("output.csv", 'w') as outfile: csv_writer = csv.writer(outfile) csv_writer.writerows(station_list) ```
5,138
42,418,713
I need to perform an integration with Python, but with one of the limits being a variable and not a number (from 0 to z). I tried the following: ``` import numpy as np import matplotlib.pyplot as plt from scipy.integrate import quad def I(z,y,a): #function I want to integrate I = (a*(y*(1+z)**3+(1-y))**(0.5))**(-1) return I def dl(z,y,a): #Integration of I dl = quad(I, 0, z, args =(z,y,a)) return dl ``` The problem I have is that `dl(z,y,a)` gives me an array, so whenever I want to plot or evaluate it, I obtain the following: > > ValueError: The truth value of an array with more than one element is ambiguous. > > > I don't know if there is any solution for that. Thanks in advance
2017/02/23
[ "https://Stackoverflow.com/questions/42418713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7569812/" ]
**Edit:** In your code you should set your `args` argument as `args=(y, a)`; `z` should not be included. Then you can access the result of the integration by indexing the first element of the returned tuple. `quad` actually returns a tuple, and the first element in the tuple is the result you want. Since I could not get your code to run without problems, I wrote some short code instead. I am not sure if this is what you want: ``` from scipy.integrate import quad def I(a): return lambda z, y: (a*(y*(1+z)**3+(1-y))**(0.5))**(-1) def dl(z, y, a): return quad(I(a), 0, z, args=(y)) print(dl(1,2,3)[0]) ``` Results: ``` 0.15826362868629346 ```
I don't think `quad` accepts vector-valued integration boundaries. So in this case you'll actually have to either loop over `z` or use `np.vectorize`.
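Putting the two fixes together (drop `z` from `args`, take element `[0]` of `quad`'s return tuple) and wrapping with `np.vectorize`, a sketch using the integrand from the question:

```
import numpy as np
from scipy.integrate import quad

def I(z, y, a):
    return 1.0 / (a * (y * (1 + z)**3 + (1 - y))**0.5)

def dl(z, y, a):
    return quad(I, 0, z, args=(y, a))[0]  # quad returns (value, abserr)

dl_vec = np.vectorize(dl)
z = np.linspace(0.0, 2.0, 50)
print(dl_vec(z, 0.3, 1.0))  # one integral per entry of z
```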
5,139
62,908,688
Recently I went to clean up my Python code. I found it tiresome to remove all the print statements in the code one by one. Is there any shortcut in an editor, or a regex, for removing or commenting out print statements in a Python program in one go?
2020/07/15
[ "https://Stackoverflow.com/questions/62908688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8721742/" ]
Find / Replace -------------- * Find Replace `print(` with `# print(` will comment them out * Probably works in most editors Using [Notepad++](https://notepad-plus-plus.org/downloads/) with regex ---------------------------------------------------------------------- * Free to download * Recognizes many programming languages * Search expression `(print).*` + `^(print).*` if you only want print statements from the beginning of the line + [![enter image description here](https://i.stack.imgur.com/nac5J.png)](https://i.stack.imgur.com/nac5J.png) [![enter image description here](https://i.stack.imgur.com/Bz3cA.png)](https://i.stack.imgur.com/Bz3cA.png) Write a script -------------- * Use [`pathlib`](https://docs.python.org/3/library/pathlib.html) to find files + [How to replace characters and rename multiple files?](https://stackoverflow.com/questions/62668490) + [Python 3's pathlib Module: Taming the File System](https://realpython.com/python-pathlib/) * Use [`re.sub`](https://docs.python.org/3/library/re.html#re.sub) to find and replace expression ```py import re from pathlib import Path p = Path(r'c:\...\path_to_python_files') # path to directory with files files = list(p.rglob('*.py')) # find all python files including subdirectories of p for file in files: with file.open('r') as f: rows = [re.sub('(print).*', '', row) for row in f.readlines()] new_file_name = file.parent / f'{file.stem}_no_print{file.suffix}' with new_file_name.open('w') as f: # you could overwrite the original file, but that might be scary f.writelines(rows) ```
You should avoid working with print statements. Use the Python logging module instead: ``` import logging logging.debug('debug message') ``` Once you have finished your development and don't need the debugging information any more, you can increase the log level: ``` logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.WARNING) ``` This suppresses all logging messages lower than WARNING. See the [DOCS](https://docs.python.org/3/howto/logging.html) for more information.
5,144