Dataset schema (min/max of value or length per column):

| column | dtype | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | string (length) | 54 | 37.8k |
| date | string (length) | 10 | 10 |
| metadata | list (length) | 3 | 3 |
| response_j | string (length) | 29 | 22k |
| response_k | string (length) | 26 | 13.4k |
| __index_level_0__ | int64 | 0 | 17.8k |
48,266,643
I have the 2D list `mainlist`: ``` mainlist = [['John','Doe',True],['Mary','Jane',False],['James','Smith',False]] slist1 = ['John', 'Doe'] slist2 = ['John', 'Smith'] slist3 = ['Doe', 'John'] slist4 = ['John', True] ``` How do I determine whether a sublist exists in a 2D list, such that slist1 tested against mainlist returns True while slist2 returns False? I am thinking of something like this (code adapted from [here](https://stackoverflow.com/questions/22673770/simplest-way-to-check-if-multiple-items-are-or-are-not-in-a-list "here")): ``` for sublist in mainlist: if all(i in sublist for i in slist1): return True ``` Is there a more "pythonic" way to do this? Thanks. Edit: 1. slist1 tested against mainlist would return True 2. slist2 would return False 3. slist3 would return False 4. slist4 would return False So basically, I am just testing whether slist matches the first 2 indexes of mainlist[x]
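Since the test is against the leading elements of each row, comparing slices directly is one pythonic option. A hedged sketch (the helper name is illustrative, not from the question):

```python
mainlist = [['John', 'Doe', True], ['Mary', 'Jane', False], ['James', 'Smith', False]]

def has_prefix(rows, candidate):
    """Return True if candidate equals the leading elements of any row."""
    return any(candidate == row[:len(candidate)] for row in rows)

print(has_prefix(mainlist, ['John', 'Doe']))    # slist1 -> True
print(has_prefix(mainlist, ['John', 'Smith']))  # slist2 -> False
print(has_prefix(mainlist, ['Doe', 'John']))    # slist3 -> False
print(has_prefix(mainlist, ['John', True]))     # slist4 -> False
```

Comparing slices keeps order and position significant, which is what rules out slist3 and slist4 (an `all(i in sublist ...)` membership test would not).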
2018/01/15
[ "https://Stackoverflow.com/questions/48266643", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8573372/" ]
You could take an array with references to the wanted arrays, and use the remainder of the actual index divided by the length of the temporary array as the index for pushing. ```js var array = ['fruit', 'vegetables', 'sugars', 'bread', 'fruit', 'vegetables', 'sugars', 'bread'], fruits = [], // final arrays vegetables = [], // sugars = [], // breads = [], // temp = [fruits, vegetables, sugars, breads], len = temp.length; array.forEach((v, i) => temp[i % len].push(v)); console.log(fruits); console.log(vegetables); console.log(sugars); console.log(breads); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ```
Not super elegant but it will do the job.. ```js var a = ['bread_1','fruit_1','vegetable_1','sugars_1', 'bread_2','fruit_2','vegetable_2','sugars_2', 'bread_3','fruit_3','vegetable_3','sugars_3']; var i=0; a = a.reduce(function(ac, va, id, ar){ if(i==ac.length) i=0; ac[i].push(va); i++; return ac; }, [[],[],[],[]]); console.log(a); ```
12,466
34,898,525
I want to generate a python list containing all months occurring between two dates, with the input and output formatted as follows: ``` date1 = "2014-10-10" # input start date date2 = "2016-01-07" # input end date month_list = ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] # output ```
2016/01/20
[ "https://Stackoverflow.com/questions/34898525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3480116/" ]
With pandas, you can have a one-liner like this: ``` import pandas as pd date1 = "2014-10-10" # input start date date2 = "2016-01-07" # input end date month_list = [i.strftime("%b-%y") for i in pd.date_range(start=date1, end=date2, freq='MS')] ```
Here is my solution with a simple list comprehension which uses `xrange` to know where months must start and end (Python 2) ``` from datetime import datetime as dt sd = dt.strptime('2014-10-10', "%Y-%m-%d") ed = dt.strptime('2016-01-07', "%Y-%m-%d") lst = [dt.strptime('%2.2d-%2.2d' % (y, m), '%Y-%m').strftime('%b-%y') \ for y in xrange(sd.year, ed.year+1) \ for m in xrange(sd.month if y==sd.year else 1, ed.month+1 if y == ed.year else 13)] print lst ``` produces ``` ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] ```
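For Python 3 (where `xrange` and the `print` statement are gone), a hedged stdlib-only sketch of the same idea (the function name is illustrative):

```python
from datetime import datetime

def month_range(date1, date2):
    """Return 'Mon-yy' labels for every month between two 'YYYY-MM-DD' dates."""
    start = datetime.strptime(date1, "%Y-%m-%d")
    end = datetime.strptime(date2, "%Y-%m-%d")
    months = []
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        months.append(datetime(y, m, 1).strftime("%b-%y"))
        # advance one month, rolling over at December
        y, m = (y + 1, 1) if m == 12 else (y, m + 1)
    return months

print(month_range("2014-10-10", "2016-01-07"))
```

Comparing `(year, month)` tuples avoids any day-of-month edge cases entirely.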
12,468
58,872,437
I launched `Jupyter Notebook`, created a new notebook in `python`, imported the necessary `libraries`, and tried to access a `.xlsx` file on the desktop with this `code`: `haber = pd.read_csv('filename.xlsx')`, but an error keeps popping up. I want a reliable way of accessing this file on my desktop without incurring any error response
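One likely cause, sketched under the assumption that the file really is an Excel workbook: `.xlsx` files are not CSVs, so `pd.read_excel` (which needs an Excel engine such as `openpyxl` installed) is the usual call, and an absolute path avoids working-directory surprises. The path below is illustrative:

```python
from pathlib import Path

# Build an absolute path instead of relying on the notebook's working directory.
xlsx_path = Path.home() / "Desktop" / "filename.xlsx"

# For an .xlsx file, use read_excel rather than read_csv (pandas assumed installed):
# import pandas as pd
# haber = pd.read_excel(xlsx_path)
print(xlsx_path.name)
```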
2019/11/15
[ "https://Stackoverflow.com/questions/58872437", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11003573/" ]
If you open the developer console, you'll see there's an error in one of the templates (DishDetailComponent.html@75:9): [![Error in template DishDetailComponent.html@75:9](https://i.stack.imgur.com/iH8LI.png)](https://i.stack.imgur.com/iH8LI.png) As you can see, it complains that there is no `dividerColor` property in the `mat-form-field` component. Perhaps it's a deprecated property, because I don't see it in the API: <https://material.angular.io/components/form-field/api>
I found a changelog here: <https://www.reddit.com/r/Angular2/comments/86ta8k/angular_material_600beta5_changelog/> and replaced all `dividerColor`s with `color` in my project and it worked! Thanks to @Fel for the help.
12,478
49,488,989
I'm looking into the Twitter Search API, and apparently, it has a count parameter that determines "The number of tweets to return per page, up to a maximum of 100." What does "per page" mean, if I'm for example running a python script like this: ``` import twitter #python-twitter package api = twitter.Api(consumer_key="mykey", consumer_secret="mysecret", access_token_key="myaccess", access_token_secret="myaccesssecret") results = api.GetSearch(raw_query="q=%23myHashtag&geocode=59.347937,18.072433,5km") print(len(results)) ``` This will only give me 15 tweets in results. I want more, preferably all tweets, if possible. So what should I do? Is there a "next page" option? Can't I just specify the search query in a way that gives me all tweets at once? Or if the number of tweets is too large, some maximum number of tweets?
2018/03/26
[ "https://Stackoverflow.com/questions/49488989", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3128156/" ]
Tweepy has a `Cursor` object that works like this: ``` for tweet in tweepy.Cursor(api.search, q="#myHashtag&geocode=59.347937,18.072433,5km", lang='en', tweet_mode='extended').items(): # handle tweets here ``` You can find more info in the [Tweepy Cursor docs](http://tweepy.readthedocs.io/en/v3.5.0/cursor_tutorial.html#introduction).
With [TwitterAPI](https://github.com/geduldig/TwitterAPI) you would access pages this way: ``` pager = TwitterPager(api, 'search/tweets', {'q':'#myHashtag', 'geocode':'59.347937,18.072433,5km'}) for item in pager.get_iterator(): print(item['text'] if 'text' in item else item) ``` A complete example is here: <https://github.com/geduldig/TwitterAPI/blob/master/examples/page_tweets.py>
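Both answers rely on the same cursoring pattern: request a page, then ask for tweets strictly older than the oldest id seen, until a page comes back empty. A hedged, library-agnostic sketch of that loop (`fake_fetch` is a stand-in for a real API call such as `GetSearch`; all names are illustrative):

```python
def paginate(fetch, page_size=100):
    """Collect all results by repeatedly asking for items older than the last page."""
    results, max_id = [], None
    while True:
        page = fetch(max_id=max_id, count=page_size)
        if not page:
            break
        results.extend(page)
        max_id = min(t["id"] for t in page) - 1  # strictly older than anything seen
    return results

# Stand-in for an API call; real searches return newest tweets first.
_data = [{"id": i} for i in range(250, 0, -1)]

def fake_fetch(max_id=None, count=100):
    eligible = [t for t in _data if max_id is None or t["id"] <= max_id]
    return eligible[:count]

print(len(paginate(fake_fetch)))  # 250
```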
12,479
50,195,029
Today I encountered a strange problem where the Python IDE would not scale the font correctly on my 1920\*1080 screen. So I fixed it. Kinda. I knew that there was an option in Windows where one could toggle "Override high DPI scaling behavior". The problem is that this tab is only available for applications, e.g. ".exe" files. Windows is a strange beast. By default the Python IDE has font size 9-10. On a high-res display that font just gets scaled by Win 10. My solution is to manually enable DPI awareness and then set the correct font size in the class called run.py. As this is not a real question I will post the code and mark it as answered. It may not be compatible with displays that have a higher resolution than 1920\*1080, but hey, it works :D
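A minimal sketch of the DPI-awareness part, assuming Windows 8.1+ with the `shcore` API available (the function name is illustrative; on other OSes it is a no-op):

```python
import ctypes
import sys

def enable_dpi_awareness():
    """Opt this process out of Windows DPI virtualization; no-op on other OSes."""
    if sys.platform != "win32":
        return False
    try:
        # 1 = PROCESS_SYSTEM_DPI_AWARE
        ctypes.windll.shcore.SetProcessDpiAwareness(1)
        return True
    except (AttributeError, OSError):
        # shcore is missing on very old Windows versions
        return False

print(enable_dpi_awareness())
```

After this call, the font size chosen in the IDE is used as-is rather than being bitmap-scaled by the OS.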
2018/05/05
[ "https://Stackoverflow.com/questions/50195029", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4102180/" ]
This is what I was looking for: `sort -t ';' -k 2,2 < some-csv.log` Big thanks to @dmadic.
If your input.txt is something like: ``` Any ANA Bill BOB Ana ``` and you want your output to be: ``` Ana Any Bill ANA BOB ``` then maybe you could try something like: ``` grep -E "[a-z]+" input.txt | sort > lower.txt grep -wE "[A-Z]+" input.txt | sort > upper.txt cat lower.txt upper.txt ```
12,480
4,011,705
I've tried lots of solutions posted on the net, but they don't work. ``` >>> import _imaging >>> _imaging.__file__ 'C:\\python26\\lib\\site-packages\\PIL\\_imaging.pyd' >>> ``` So the system can find \_imaging but still can't use a truetype font ``` from PIL import Image, ImageDraw, ImageFilter, ImageFont im = Image.new('RGB', (300,300), 'white') draw = ImageDraw.Draw(im) font = ImageFont.truetype('arial.ttf', 14) draw.text((100,100), 'test text', font = font) ``` Raises this error: ``` ImportError: The _imagingft C module is not installed File "D:\Python26\Lib\site-packages\PIL\ImageFont.py", line 34, in __getattr__ raise ImportError("The _imagingft C module is not installed") ```
2010/10/25
[ "https://Stackoverflow.com/questions/4011705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/483144/" ]
The following worked for me on Ubuntu 14.04.1 64 bit: ``` sudo apt-get install libfreetype6-dev ``` Then, in the virtualenv: ``` pip uninstall pillow pip install --no-cache-dir pillow ```
Worked for Ubuntu 12.10: ``` sudo pip uninstall PIL sudo apt-get install libfreetype6-dev sudo apt-get install python-imaging ```
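After reinstalling, a hedged way to check whether FreeType support actually made it into the build (this relies on the `PIL.features` module that ships with Pillow; the helper returns None when PIL isn't importable at all):

```python
def has_freetype():
    """Report whether PIL/Pillow was built with FreeType (None if not installed)."""
    try:
        from PIL import features
    except ImportError:
        return None
    return bool(features.check("freetype2"))

print(has_freetype())
```

If this prints False, `ImageFont.truetype` will keep raising the `_imagingft` ImportError and the freetype dev headers need to be present at build time, as the answers above describe.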
12,482
10,059,497
Code is much more precise than English; here's what I'd like to do: ``` import sys fileName = sys.argv[1] className = sys.argv[2] # open py file here and import the class # ??? # Instantiate a new object of type "className" a = eval(className + "()") # I don't know if this is the way to do that. # I "know" that className will have this method: a.writeByte(0x0) ``` *EDIT:* Per the request of the answers, here's what I'm trying to do: I'm writing a virtual processor adhering to the SIC/XE instruction set. It's an educational theoretical processor used to teach the fundamentals of assembly language and systems software to computer science students. There is a notion of a "device" that I'm trying to abstract from the programming of the "processor." Essentially, I want the user of my program to be able to write their own device plugin (limited to "read\_byte" and "write\_byte" functionality) and then I want them to be able to "hook up" their devices to the processor at command-line time, so that they can write something like: `python3 sicsim -d1 dev1module Dev1Class -d2 ...` They would also supply the memory image, which would know how to interact with their device. I basically want both of us to be able to write our code without it interfering with each other.
2012/04/08
[ "https://Stackoverflow.com/questions/10059497", "https://Stackoverflow.com", "https://Stackoverflow.com/users/569302/" ]
Use [`importlib.import_module`](https://docs.python.org/3/library/importlib.html#importlib.import_module) and the built in function [`getattr`](https://docs.python.org/3/library/functions.html#getattr). No need for `eval`. ``` import sys import importlib module_name = sys.argv[1] class_name = sys.argv[2] module = importlib.import_module(module_name) cls = getattr(module, class_name) obj = cls() obj.writeByte(0x0) ``` This will require that the file lives somewhere on your python path. Most of the time, the current directory is on said path. If this is not sufficient, you'll have to parse the directory out of it and `append` it to the `sys.path`. I'll be glad to help with that. Just give me a sample input for the first commandline argument. Valid input for this version would be something like: ``` python3 myscript.py mypackage.mymodule MyClass ```
As aaronasterling mentions, you can take advantage of the import machinery if the file in question happens to be on the python path (somewhere under the directories listed in `sys.path`), but if that's not the case, use the built in [`exec()`](http://docs.python.org/dev/library/functions.html#exec) function: ``` fileVars = {} exec(open(fileName).read(), fileVars) ``` Then, to get an instance of the class, you can skip the `eval()`: ``` a = fileVars[className]() ```
12,492
6,095,818
Just curious to know: is there any document utility available in PHP which can do something like docutils in Python? A library which is very user friendly in terms of converting reStructuredText into HTML?
2011/05/23
[ "https://Stackoverflow.com/questions/6095818", "https://Stackoverflow.com", "https://Stackoverflow.com/users/239670/" ]
phpDocumentor is quite outdated. Have a look at [DocBlox (Github Repository)](https://github.com/mvriel/Docblox) or [DocBlox-project.org](http://www.docblox-project.org/) Edit: DocBlox merged with phpDocumentor and they now maintain phpDocumentor 2. Links that take you directly to the project: [phpdoc.org](http://www.phpdoc.org/) [github repo](https://github.com/phpDocumentor/phpDocumentor2)
Try [phpDocumentor](http://www.phpdoc.org/).
12,493
9,966,250
I am trying to understand eval(), but am not having much luck. I am writing my own math library and am trying to include integration in the library. I need help getting Python to recognize the function as a series of variables, constants, and operators. I was told that eval would do the trick, but how would I go about it? ``` fofx = input("Write your function of x here >") def integrate(fofx): # integration algorithm here # input fofx and recognize it as f(x) to be integrated. ``` I have tried the documentation, but it is limited and I have no clue how I could apply it to my function to be evaluated.
2012/04/01
[ "https://Stackoverflow.com/questions/9966250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1044726/" ]
The documentation for [`eval()`](http://docs.python.org/library/functions.html#eval) is pretty clear in my view and gives a reasonable example of what you need. Basically you want to hold an expression to be evaluated in a string: ``` >>> f = 'x**2 + 2*x' ``` Then you can define a value for `x`: ``` >>> x = 3 ``` And finally call evaluate: ``` >>> eval(f) 15 ``` Or if you want to make the call to eval a little more controlled, as opposed to creating a local variable named `x`, then you can pass in the evaluation environment in the parameters to `eval()`: ``` >>> f = 'x**2 + 2*x' >>> eval(f, {}, {'x': 5}) 35 ``` The reason you want to control the evaluation environment is to avoid any variables defined in your program inadvertently being used in the evaluation.
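Tying this back to the integration goal of the question, a hedged sketch that plugs `eval` into a simple midpoint-rule integrator (purely numeric, not symbolic; the function name and accuracy parameters are illustrative):

```python
def integrate(fofx, a, b, n=10000):
    """Approximate the integral of fofx (a string in x) over [a, b], midpoint rule."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        # empty globals keep the evaluation environment controlled, as above
        total += eval(fofx, {"__builtins__": {}}, {"x": x})
    return total * h

print(integrate("x**2 + 2*x", 0, 3))  # close to 18.0
```

The exact integral of x² + 2x over [0, 3] is 9 + 9 = 18, so the approximation gives a quick sanity check.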
Perhaps you might be thinking of the 'eval' mode of the abstract syntax tree module which allows you to construct a syntax tree for a single expression. For example the code below will take an expression in a string and modify it such that 'x\*\*2+3\*x\*\*4+2' changes to 'x\*\*3+3\*x\*\*5+2'. (Note that this is not the integral of the expression, that code would be much longer!) ``` import ast class IncreasePower(ast.NodeTransformer): def visit_BinOp(self,node): node=self.generic_visit(node) if isinstance( node.op , ast.Pow) and isinstance(node.right, ast.Num): node.right.n+=1 return node x=4 s='x**2+3*x**4+2' print eval(s) A = ast.parse(s,'source','eval') B = IncreasePower().visit(A) E = compile(B,'increased','eval') print eval(E) ``` You may also find it helpful to look at the symbolic maths library sympy which uses a different approach to building up expressions. In sympy you start with x=sympy.Symbol("x") before constructing your expressions. The "sympy.integrate" function does symbolic integration.
12,494
46,207,299
On Windows, when I execute: c:\python35\scripts\tensorboard --logdir=C:\Users\Kevin\Documents\dev\Deadpool\Tensorflow-SegNet\logs and I browse to <http://localhost:6006> the first time, I am redirected to <http://localhost:6006/[[_traceDataUrl]]> and I get these command prompt messages: ``` W0913 14:32:25.401402 Reloader tf_logging.py:86] Found more than one graph event per run, or there was a metagraph containing a graph_def, as well as one or more graph events. Overwriting the graph with the newest event. W0913 14:32:25.417002 Reloader tf_logging.py:86] Found more than one metagraph event per run. Overwriting the metagraph with the newest event. W0913 14:32:36.446222 Thread-2 application.py:241] path /[[_traceDataUrl]] not found, sending 404 ``` When I try <http://localhost:6006> again, TensorBoard takes a long time, presents the 404 message again, but this time displays a blank web page. Logs directory: ``` checkpoint events.out.tfevents.1504911606.LTIIP82 events.out.tfevents.1504912739.LTIIP82 model.ckpt-194000.data-00000-of-00001 model.ckpt-194000.index model.ckpt-194000.meta ``` Why am I getting redirected and 404ed?
2017/09/13
[ "https://Stackoverflow.com/questions/46207299", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1637126/" ]
I'm having the exact same error. Maybe it is because of [this](https://github.com/tensorflow/tensorflow/issues/7856) issue. So try changing the argument to `--logdir=foo:C:\Users\Kevin\Documents\dev\Deadpool\Tensorflow-SegNet\logs`. Hope it helps.
Could it be that you are trying to access the webpage with IE? Apparently IE is not supported by TensorBoard yet (<https://github.com/tensorflow/tensorflow/issues/9372>). Maybe use another browser.
12,495
7,093,121
Recently, reading Python ["Functional Programming HOWTO"](http://docs.python.org/howto/functional.html), I came across a mention of the standard `test_generators.py` module, where I found the following generator: ``` # conjoin is a simple backtracking generator, named in honor of Icon's # "conjunction" control structure. Pass a list of no-argument functions # that return iterable objects. Easiest to explain by example: assume the # function list [x, y, z] is passed. Then conjoin acts like: # # def g(): # values = [None] * 3 # for values[0] in x(): # for values[1] in y(): # for values[2] in z(): # yield values # # So some 3-lists of values *may* be generated, each time we successfully # get into the innermost loop. If an iterator fails (is exhausted) before # then, it "backtracks" to get the next value from the nearest enclosing # iterator (the one "to the left"), and starts all over again at the next # slot (pumps a fresh iterator). Of course this is most useful when the # iterators have side-effects, so that which values *can* be generated at # each slot depend on the values iterated at previous slots. def simple_conjoin(gs): values = [None] * len(gs) def gen(i): if i >= len(gs): yield values else: for values[i] in gs[i](): for x in gen(i+1): yield x for x in gen(0): yield x ``` It took me a while to understand how it works. It uses a mutable list `values` to store the yielded results of the iterators, and the (N+1)-th generator returns `values`, which passes back through the whole chain of iterators. As I stumbled into this code while reading about functional programming, I started wondering whether it was possible to rewrite this conjoin generator in a functional style (using functions from the [`itertools` module](http://docs.python.org/library/itertools.html)). There are a lot of routines written in functional style (just glance at the end of [this](http://docs.python.org/library/itertools.html#recipes) article in the Recipes section).
But, unfortunately, I haven't found any solution. So, is it possible to write this conjoin generator in a functional style, using just the [`itertools` module](http://docs.python.org/library/itertools.html)? Thanks
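For reference, a quick Python 3 run of `simple_conjoin` (the inner loops collapse to `yield from`; note every yield hands out the same mutable `values` list, so results must be copied if they are stored):

```python
def simple_conjoin(gs):
    values = [None] * len(gs)
    def gen(i):
        if i >= len(gs):
            yield values
        else:
            # assignment target is a list slot: each level fills its own slot
            for values[i] in gs[i]():
                yield from gen(i + 1)
    yield from gen(0)

# copy each yielded list into a tuple, since `values` is reused in place
pairs = [tuple(v) for v in simple_conjoin([lambda: range(2), lambda: range(2)])]
print(pairs)  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```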
2011/08/17
[ "https://Stackoverflow.com/questions/7093121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/862380/" ]
This seems to work, and it's still lazy: ``` def conjoin(gs): return [()] if not gs else ( (val,) + suffix for val in gs[0]() for suffix in conjoin(gs[1:]) ) def range3(): return range(3) print list(conjoin([range3, range3])) ``` Output: ``` [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)] ``` Example usage to show mutable state: ``` x = "" def mutablerange(): global x x += "x" return [x + str(i) for i in range(3)] print list(conjoin([range3, mutablerange])) ``` Output: (watch the increasing number of 'x's) ``` [(0, 'x0'), (0, 'x1'), (0, 'x2'), (1, 'xx0'), (1, 'xx1'), (1, 'xx2'), (2, 'xxx0'), (2, 'xxx1'), (2, 'xxx2')] ``` And if we use `itertools.product`: ``` x = "" print list(itertools.product(range3(), mutablerange())) ``` the result is the following: ``` [(0, 'x0'), (0, 'x1'), (0, 'x2'), (1, 'x0'), (1, 'x1'), (1, 'x2'), (2, 'x0'), (2, 'x1'), (2, 'x2')] ``` So one can clearly see that `itertools.product` caches the values returned by the iterator.
`simple_conjoin` uses the same basic building blocks -- loops, conditions, and `yield` -- as the building blocks of the `itertools` recipes. It also treats functions as data, a hallmark of functional programming. > > Of course this is most useful when the > iterators have side-effects, so that which values *can* be generated at > each slot depend on the values iterated at previous slots. > > > This, however, is contrary to the way functional programming works. In functional programming, each function takes input and produces output, and reacts with the rest of the program in no other way. In `simple_conjoin`, the functions take no input, and have side effects. This is central to its use. So while you can certainly *write* it in functional style, it won't be useful in a simple translation. You'd need to figure out a way to write it so it operated without side effects before you could produce a truly "functional" implementation. Note: @recursive's answer is good, but if `range3` had side effects it wouldn't be truly functional.
12,497
11,915,432
Why is a UnicodeDecodeError raised? I'm trying to deploy my Django app using Apache. To collect the static files, I typed ``` $python manage.py collectstatic ``` and I got the error message below. ``` You have requested to collect static files at the destination location as specified in your settings. This will overwrite existing files! Are you sure you want to do this? Type 'yes' to continue, or 'no' to cancel: yes Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 443, in execute_from_command_line utility.execute() File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 382, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 196, in run_from_argv self.execute(*args, **options.__dict__) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 232, in execute output = self.handle(*args, **options) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 371, in handle return self.handle_noargs(**options) File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 163, in handle_noargs collected = self.collect() File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 104, in collect for path, storage in finder.list(self.ignore_patterns): File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/finders.py", line 137, in list for path in utils.get_files(storage, ignore_patterns): File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/utils.py", line 37, in get_files for fn in get_files(storage, ignore_patterns, dir): File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/utils.py", line 37, in get_files
for fn in get_files(storage, ignore_patterns, dir): File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/utils.py", line 25, in get_files directories, files = storage.listdir(location) File "/usr/local/lib/python2.7/dist-packages/django/core/files/storage.py", line 236, in listdir if os.path.isdir(os.path.join(path, entry)): File "/usr/lib/python2.7/posixpath.py", line 71, in join path += '/' + b UnicodeDecodeError: 'ascii' codec can't decode byte 0xba in position 1: ordinal not in range(128) ``` What's wrong with my static files? my settings.py ``` import os PROJECT_ROOT = os.path.dirname(__file__) STATIC_ROOT = os.path.join(PROJECT_ROOT, 'static/') # URL prefix for static files. # Example: "http://media.lawrence.com/static/" STATIC_URL = '/static/' STATICFILES_FINDERS = ( 'django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', # 'django.contrib.staticfiles.finders.DefaultStorageFinder', ) ``` and apache host conf ``` ServerName www.abcd.org DocumentRoot /srv/www/yyy <Directory /srv/www/yyy> Order allow,deny Allow from all </Directory> WSGIDaemonProcess yyy.djangoserver processes=2 threads=15 display-name=%{GROUP} WSGIProcessGroup iii.djangoserver WSGIScriptAlias / /srv/www/yyy/apache/django.wsgi ```
2012/08/11
[ "https://Stackoverflow.com/questions/11915432", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1559347/" ]
Looks like one or more paths to your static files that are going to be copied contain non-ASCII characters. It has nothing to do with the path to the destination directory. **One way to find out would be** to put ``` try: print path except: pass try: print entry except: pass ``` just before line 236 in /usr/local/lib/python2.7/dist-packages/django/core/files/storage.py for a moment and then run manage.py again. Then you should see where the problem occurs (you won't see the very culprit but the file just before it and probably the directory of the problematic file). **Or, alternatively, you can use pdb**: ``` python -m pdb manage.py collectstatic ``` and check which file is causing the problem in the debugger.
I had the same error when I used **django-pipeline** inside a Docker container. It turned out that for some reason the system used the POSIX locale. I used the solution proposed here and exported the locale settings in the system shell: ``` export LC_ALL=en_US.UTF-8 export LANG=en_US.UTF-8 ``` Afterwards you can check that your locale looks like: ``` vagrant@vagrant-ubuntu-trusty-64:/project$ locale LANG=en_US.UTF-8 LANGUAGE= LC_CTYPE="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_PAPER="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_ADDRESS="en_US.UTF-8" LC_TELEPHONE="en_US.UTF-8" LC_MEASUREMENT="en_US.UTF-8" LC_IDENTIFICATION="en_US.UTF-8" LC_ALL=en_US.UTF-8 ``` It worked well. Also, note I did that both inside the Docker container and on the host machine.
12,498
46,564,730
I am trying to read a table from a Google spanner database, and write it to a text file to do a backup, using google dataflow with the python sdk. I have written the following script: ``` from __future__ import absolute_import import argparse import itertools import logging import re import time import datetime as dt import logging import apache_beam as beam from apache_beam.io import iobase from apache_beam.io import WriteToText from apache_beam.io.range_trackers import OffsetRangeTracker, UnsplittableRangeTracker from apache_beam.metrics import Metrics from apache_beam.options.pipeline_options import PipelineOptions from apache_beam.options.pipeline_options import StandardOptions, SetupOptions from apache_beam.options.pipeline_options import GoogleCloudOptions from google.cloud.spanner.client import Client from google.cloud.spanner.keyset import KeySet BUCKET_URL = 'gs://my_bucket' OUTPUT = '%s/output/' % BUCKET_URL PROJECT_ID = 'my_project' INSTANCE_ID = 'my_instance' DATABASE_ID = 'my_db' JOB_NAME = 'spanner-backup' TABLE = 'my_table' class SpannerSource(iobase.BoundedSource): def __init__(self): logging.info('Enter __init__') self.spannerOptions = { "id": PROJECT_ID, "instance": INSTANCE_ID, "database": DATABASE_ID } self.SpannerClient = Client def estimate_size(self): logging.info('Enter estimate_size') return 1 def get_range_tracker(self, start_position=None, stop_position=None): logging.info('Enter get_range_tracker') if start_position is None: start_position = 0 if stop_position is None: stop_position = OffsetRangeTracker.OFFSET_INFINITY range_tracker = OffsetRangeTracker(start_position, stop_position) return UnsplittableRangeTracker(range_tracker) def read(self, range_tracker): # This is not called when using the dataflowRunner ! 
logging.info('Enter read') # instantiate spanner client spanner_client = self.SpannerClient(self.spannerOptions["id"]) instance = spanner_client.instance(self.spannerOptions["instance"]) database = instance.database(self.spannerOptions["database"]) # read from table table_fields = database.execute_sql("SELECT t.column_name FROM information_schema.columns AS t WHERE t.table_name = '%s'" % TABLE) table_fields.consume_all() self.columns = [x[0] for x in table_fields] keyset = KeySet(all_=True) results = database.read(table=TABLE, columns=self.columns, keyset=keyset) # iterator over rows results.consume_all() for row in results: JSON_row = { self.columns[i]: row[i] for i in range(len(self.columns)) } yield JSON_row def split(self, start_position=None, stop_position=None): # this should not be called since the source is unsplittable logging.info('Enter split') if start_position is None: start_position = 0 if stop_position is None: stop_position = 1 # Because the source is unsplittable (for now), only a single source is returned yield iobase.SourceBundle( weight=1, source=self, start_position=start_position, stop_position=stop_position) def run(argv=None): """Main entry point""" pipeline_options = PipelineOptions() google_cloud_options = pipeline_options.view_as(GoogleCloudOptions) google_cloud_options.project = PROJECT_ID google_cloud_options.job_name = JOB_NAME google_cloud_options.staging_location = '%s/staging' % BUCKET_URL google_cloud_options.temp_location = '%s/tmp' % BUCKET_URL #pipeline_options.view_as(StandardOptions).runner = 'DirectRunner' pipeline_options.view_as(StandardOptions).runner = 'DataflowRunner' p = beam.Pipeline(options=pipeline_options) output = p | 'Get Rows from Spanner' >> beam.io.Read(SpannerSource()) iso_datetime = dt.datetime.now().replace(microsecond=0).isoformat() output | 'Store in GCS' >> WriteToText(file_path_prefix=OUTPUT + iso_datetime + '-' + TABLE, file_name_suffix='') # if this line is commented, the job completes but does not do anything result = p.run() result.wait_until_finish() if __name__ == '__main__': logging.getLogger().setLevel(logging.INFO) run() ``` However, this script runs correctly only on the DirectRunner: when I let it run on the DataflowRunner, it runs for a while without any output, before exiting with an error: > > "Executing failure step failure14 [...] Workflow failed. Causes: [...] The worker lost contact with the service." > > > Sometimes, it just goes on forever, without creating any output. Moreover, if I comment the line 'output = ...', the job completes, but without actually reading the data. It also appears that the DataflowRunner calls the function 'estimate\_size' of the source, but not the functions 'read' or 'get\_range\_tracker'. Does anyone have any ideas about what may cause this? I know there is a (more complete) Java SDK with an experimental Spanner source/sink available, but if possible I'd rather stick with Python. Thanks
2017/10/04
[ "https://Stackoverflow.com/questions/46564730", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6837292/" ]
Google recently added support for backing up Spanner with Dataflow; you can choose the related template when creating a Dataflow job. For more: <https://cloud.google.com/blog/products/gcp/cloud-spanner-adds-import-export-functionality-to-ease-data-movement>
I have reworked my code following the suggestion to simply use a ParDo, instead of using the BoundedSource class. As a reference, here is my solution; I am sure there are many ways to improve on it, and I would be happy to hear opinions. In particular I am surprised that I have to create a dummy PColl when starting the pipeline (if I don't, I get an error > > AttributeError: 'PBegin' object has no attribute 'windowing' > > > that I could not work around). The dummy PColl feels a bit like a hack. ``` from __future__ import absolute_import import datetime as dt import logging import apache_beam as beam from apache_beam.io import WriteToText from apache_beam.options.pipeline_options import PipelineOptions from apache_beam.options.pipeline_options import StandardOptions, SetupOptions from apache_beam.options.pipeline_options import GoogleCloudOptions from google.cloud.spanner.client import Client from google.cloud.spanner.keyset import KeySet BUCKET_URL = 'gs://my_bucket' OUTPUT = '%s/some_folder/' % BUCKET_URL PROJECT_ID = 'my_project' INSTANCE_ID = 'my_instance' DATABASE_ID = 'my_database' JOB_NAME = 'my_jobname' class ReadTables(beam.DoFn): def __init__(self, project, instance, database): super(ReadTables, self).__init__() self._project = project self._instance = instance self._database = database def process(self, element): # get list of tables in the database table_names_row = Client(self._project).instance(self._instance).database(self._database).execute_sql('SELECT t.table_name FROM information_schema.tables AS t') for row in table_names_row: if row[0] in [u'COLUMNS', u'INDEXES', u'INDEX_COLUMNS', u'SCHEMATA', u'TABLES']: # skip these continue yield row[0] class ReadSpannerTable(beam.DoFn): def __init__(self, project, instance, database): super(ReadSpannerTable, self).__init__() self._project = project self._instance = instance self._database = database def process(self, element): # first read the columns present in the table table_fields = Client(self._project).instance(self._instance).database(self._database).execute_sql("SELECT t.column_name FROM information_schema.columns AS t WHERE t.table_name = '%s'" % element) columns = [x[0] for x in table_fields] # next, read the actual data in the table keyset = KeySet(all_=True) results_streamed_set = Client(self._project).instance(self._instance).database(self._database).read(table=element, columns=columns, keyset=keyset) for row in results_streamed_set: JSON_row = { columns[i]: row[i] for i in xrange(len(columns)) } yield (element, JSON_row) # output pairs of (table_name, data) def run(argv=None): """Main entry point""" pipeline_options = PipelineOptions() pipeline_options.view_as(SetupOptions).save_main_session = True pipeline_options.view_as(SetupOptions).requirements_file = "requirements.txt" google_cloud_options = pipeline_options.view_as(GoogleCloudOptions) google_cloud_options.project = PROJECT google_cloud_options.job_name = JOB_NAME google_cloud_options.staging_location = '%s/staging' % BUCKET_URL google_cloud_options.temp_location = '%s/tmp' % BUCKET_URL pipeline_options.view_as(StandardOptions).runner = 'DataflowRunner' p = beam.Pipeline(options=pipeline_options) init = p | 'Begin pipeline' >> beam.Create(["test"]) # have to create a dummy transform to initialize the pipeline, surely there is a better way ?
tables = init | 'Get tables from Spanner' >> beam.ParDo(ReadTables(PROJECT, INSTANCE_ID, DATABASE_ID)) # read the tables in the db rows = (tables | 'Get rows from Spanner table' >> beam.ParDo(ReadSpannerTable(PROJECT, INSTANCE_ID, DATABASE_ID)) # for each table, read the entries | 'Group by table' >> beam.GroupByKey() | 'Formatting' >> beam.Map(lambda (table_name, rows): (table_name, list(rows)))) # have to force to list here (dataflowRunner produces _Unwindowedvalues) iso_datetime = dt.datetime.now().replace(microsecond=0).isoformat() rows | 'Store in GCS' >> WriteToText(file_path_prefix=OUTPUT + iso_datetime, file_name_suffix='') result = p.run() result.wait_until_finish() if __name__ == '__main__': logging.getLogger().setLevel(logging.INFO) run() ```
12,499
63,415,954
Why are the results of the C++ and Python bitwise shift operators different? Python: ``` >>> 1<<20 1048576 ``` C++: ``` cout <<1<<20; 120 ```
2020/08/14
[ "https://Stackoverflow.com/questions/63415954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13966865/" ]
The result differs because of the operator associativity in C++. ``` std::cout << 1 << 20; ``` is the same as ``` (std::cout << 1) << 20; ``` because `operator <<` is left-associative. What you intend to do is ``` std::cout << (1 << 20); ```
cout overloads the '<<' operator to print the values. So when you are doing ``` cout <<1<<20; ``` it actually prints 1 and 20 and doesn't do any shifting. ``` int shifted = 1 << 20; cout << shifted; ``` This should return the same output as Python's. A simpler way is to do ``` cout << (1 <<20); ```
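A quick runnable check of the value Python (and a correctly parenthesized C++ program) should produce:

```python
# Left-shifting 1 by 20 bits multiplies it by 2**20.
shifted = 1 << 20
print(shifted)  # 1048576
```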
12,500
69,592,525
I refer to [Python : Using the map function](https://stackoverflow.com/questions/18087544/python-using-the-map-function) It says "map returns a specific type of generator in Python 3 that is not a list (but rather a 'map object', as you can see). " That is my understanding too. A generator object does not contain the values but is able to give you the values when you call next() on it. So my question is: where are those values stored? I tried the following experiment. 1. create 2 tuples and check their size 2. create 2 map objects from the tuples 3. do a next() on the map objects to use up some of the values 4. delete one of the tuples 5. continue to do next() I would assume that when I delete the tuple, there would be no more values to do a next() but that's not the case. So my question is: where are those values coming from? Where are they stored after I delete the tuple? Code: ``` t1 = tuple(range(1000)) t2 = tuple(range(10000)) print(f'{t1[:10]} len = {len(t1):5d} size = {getsizeof(t1):5d}') print(f'{t2[:10]} len = {len(t2):5d} size = {getsizeof(t2):5d}') ``` Output: ``` (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) len = 1000 size = 8040 (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) len = 10000 size = 80040 ``` Code: ``` m1 = map(lambda y: print(y), t1) m2 = map(lambda y: print(y), t2) print(f'size of m1 = {getsizeof(m1)}') print(f'size of m2 = {getsizeof(m2)}') ``` Output: ``` size of m1 = 48 size of m2 = 48 ``` Do the following a number of times: ``` next(m1) next(m2) ``` Output: ``` 23 23 ``` Delete the tuple: ``` import gc del t1 gc.collect() t1 ``` Output: ``` 168 --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-336-df561a5cc277> in <module> 2 del t1 3 gc.collect() ----> 4 t1 NameError: name 't1' is not defined ``` Continue next() ``` next(m1) next(m2) ``` Output: ``` 31 31 ``` I'm still able to get values from map after deleting the tuple.
2021/10/16
[ "https://Stackoverflow.com/questions/69592525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15670527/" ]
`del` doesn't remove the tuple from memory, it just removes the variable. The `map` object has its own reference to the tuple -- it's an instance variable of the map object. Garbage collection doesn't remove the tuple from memory until all references to it are destroyed. This will happen when the generator reaches the end (it should delete its own reference to avoid a memory leak) or if you delete the reference to the generator.
With `del t1` you delete the *variable*, not the object it references. Before `del t1`: [![before](https://i.stack.imgur.com/7EsXt.png)](https://i.stack.imgur.com/7EsXt.png) After `del t1`: [![after](https://i.stack.imgur.com/0xwMq.png)](https://i.stack.imgur.com/0xwMq.png) So that's still all alive and well and functional. You just don't have the separate `t1` variable referencing the tuple anymore.
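The claim that the `map` object keeps its own reference to the tuple can be checked with `sys.getrefcount` (a minimal sketch; exact counts are a CPython implementation detail):

```python
import sys

t = tuple(range(5))
base = sys.getrefcount(t)    # refcount before the map exists

m = map(str, t)              # the map's iterator keeps the tuple alive
with_map = sys.getrefcount(t)

del t                        # deletes the name, not the object
result = list(m)             # values are still reachable via the map
print(result)
```

After `del t` the tuple is still alive because `with_map > base`: the map's internal iterator holds a strong reference until the map is exhausted or discarded.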
12,501
69,141,448
I have an error that I cannot resolve. here is the error I get when I authenticate with postman: **TypeError: Object of type ObjectId is not JSON serializable // Werkzeug Debugger** **File "C:\Users\Amoungui\AppData\Local\Programs\Python\Python39\Lib\json\encoder.py", line 179, in default raise TypeError(f'Object of type {o.**class**.**name**} ' TypeError: Object of type ObjectId is not JSON serializable** Here is my code, I followed the official documentation, but at this does not work I do not understand. here is the link of the documentation: <https://pythonhosted.org/Flask-JWT/> customer.py ``` from flask import jsonify, make_response from config.mongoose import db import bson class Customer(db.Document): _id = db.ObjectIdField(default=bson.ObjectId, primary_key=True) #bson.ObjectId tel = db.StringField() password = db.StringField() def to_json(self): return { "_id": self._id, "tel": self.tel, "password": self.password, } def findAll(self): users = [] for user in self.objects: users.append(user) return users ``` service.py ``` from Models.Customer import Customer from werkzeug.security import safe_str_cmp find_by_username = {u.tel:u for u in Customer.objects} find_by_id = {u._id: u for u in Customer.objects} def auth(username, password): user = find_by_username.get(username, None) if user and safe_str_cmp(user.password.encode('utf-8'), password.encode('utf-8')): return user def identity(payload): _id = payload['identity'] return find_by_id.get(_id) ``` thank's for your help
2021/09/11
[ "https://Stackoverflow.com/questions/69141448", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12665256/" ]
Try to use `timestamps` in your schema after defining your fields. ``` const itemSchema = mongoose.Schema({ person_name: String, person_position: String, person_level: String, },{timestamps:true}); var RecordItem = mongoose.model("recorditem", itemSchema); ```
There are several ways to save createdAt: 1. `timestamps: true` in the schema options 2. `createdAt: { type: Date, default: Date.now },` in the field definition 3. a pre-save hook: `itemSchema.pre('save', function(next) { if (!this.createdAt) { this.createdAt = new Date(); } next(); });`
12,502
68,168,293
I am trying to retrieve data from a SQL Server database using Python, but the system crashes and displays the below error: > > ProgrammingError: ('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near the keyword 'where'. (156) (SQLExecDirectW); [42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Statement(s) could not be prepared. (8180)") > > > Code: ``` import pandas as pd import streamlit as st search_term=st.text_input("Enter Search Term") cursor.execute("select * from testDB.dbo.t1 where ID = ? OR where first =?",search_term,search_term) dd = cursor.fetchall() print(dd) ```
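The immediate cause is the repeated `where` keyword: a `SELECT` statement takes a single `WHERE` clause, with additional conditions joined by `OR`/`AND`. A sketch of the corrected query string only (executing it needs the live connection and cursor from the question; `search_term` here is a stand-in for the Streamlit input):

```python
search_term = "example"  # stand-in for st.text_input(...)

# One WHERE clause; both '?' placeholders are bound positionally.
query = "select * from testDB.dbo.t1 where ID = ? OR first = ?"
params = (search_term, search_term)
print(query)
```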
2021/06/28
[ "https://Stackoverflow.com/questions/68168293", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5980666/" ]
You have not declared the `user` variable. Either declare it as follows: ``` const user = firebase.auth().currentUser ``` Or pass it directly as a param if you don't need the user object anywhere else: ``` .doc(firebase.auth().currentUser.uid) ```
You should initialize the user before using it. ``` // Your web app's Firebase configuration var firebaseConfig = { apiKey: "####", authDomain: "###.firebaseapp.com", projectId: "#", storageBucket: "#.appspot.com", messagingSenderId: "#", appId: "1:####" }; // Initialize Firebase firebase.initializeApp(firebaseConfig); const auth = firebase.auth() const db = firebase.firestore() db.settings({ timestampsInSnapshots: true }) </script> <script> function savetodb() { var user = firebase.auth().currentUser; db.collection('users').doc(user.uid).get().then(doc => { const saveditems1 = doc.data().saveditems const ob = { Name:'Test', Price:'Test', Link:'https://www.test.com' } saveditems1.push(ob) }); ```
12,503
33,050,100
I am dealing with a simple csv file that contains three columns and three rows containing numeric data. The csv data file looks like the following: ``` Col1,Col2,Col3 1,2,3 2,2,3 3,2,3 4,2,3 ``` I have hard time figuring out how to let my python program subtracts the average value of the first column "Col1" from each value in the same column. For illustration the output should give the following values for 'Col1': ``` 1 - 2.5 = -1.5 2 - 2.5 = -0.5 3 - 2.5 = 0.5 4 - 2.5 = 1.5 ``` Here is my attempt that gives me (TypeError: unsupported operand type(s) for -: 'str' and 'float' ) at the last print statement which containing the comprehension. ``` import csv # Opening the csv file file1 = csv.DictReader(open('columns.csv')) file2 = csv.DictReader(open('columns.csv')) # Do some calculations NumOfSamples = open('columns.csv').read().count('\n') SumData = sum(float(row['Col1']) for row in file1) Aver = SumData/(NumOfSamples - 1) # compute the average of the data in 'Col1' # Subtracting the average from each value in 'Col1' data = [] for row in file2: data.append(row['Col1']) # Print the results print Aver print [e-Aver for e in data] # trying to use comprehension to subtract the average from each value in the list 'data' ``` I do not know how to solve this problem! Any idea how to make the comprehension working to give what is supposed to do?
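The `TypeError` in the question comes from mixing types: `csv` readers always yield strings, so `e - Aver` tries to subtract a float from a str. Converting the values once makes the comprehension work; a sketch on an in-memory list standing in for the `Col1` values read from the file:

```python
# Values exactly as csv.DictReader would return them: strings.
raw = ['1', '2', '3', '4']

data = [float(e) for e in raw]        # convert once, up front
aver = sum(data) / len(data)          # 2.5 for this column
centered = [e - aver for e in data]   # now float - float works
print(centered)  # [-1.5, -0.5, 0.5, 1.5]
```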
2015/10/10
[ "https://Stackoverflow.com/questions/33050100", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1974919/" ]
Please show the HTML page from which you are sending the POST data. I think you should make an array of the $\_POST variables; then you can get all the records on the PHP side and insert all three records into the table. Please check the link below, where you can find a solution: [Inserting Multiple Rows with PHP & MySQL](https://stackoverflow.com/questions/8235250/inserting-multiple-rows-with-php-mysql)
Create Model that holds all table columns. ``` class OrderModel extends BaseModel{ public $id; // Fill here all columns public function __construct($data) { foreach ($data as $key => $value) { $this->$key = $value; } } public function get_table_name() { return "ordering"; } } ``` Create Base model class and put insert method in this class. ``` public function insert() { $dbh = DB::connect(); $object_keys = array_keys(get_object_vars($this)); $query = "INSERT INTO " . $this->get_table_name() . " (" . implode(',', $object_keys) . ") VALUES(:" . implode(',:', $object_keys) . ");"; $sth = $dbh->prepare($query); foreach ($object_keys as $key) { $sth->bindParam(':' . $key, $this->$key); } $sth->execute(); $this->id = $dbh->lastInsertId(); return TRUE; } $filtered_post = filter_input_array(INPUT_POST); $order = New OrderModel($filtered_post); $order->insert(); ``` * Use PDO instead mysql\_ functions * Use Filter functions to filter POST Array
12,504
41,841,828
I would like to know if there is an else statement, like in python, that when attached to a **try-catch** structure, makes the block of code within it only executable if no exceptions were thrown/caught. For instance: ``` try { //code here } catch(...) { //exception handling here } ELSE { //this should execute only if no exceptions occurred } ```
2017/01/25
[ "https://Stackoverflow.com/questions/41841828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7408143/" ]
The concept of an `else` for a `try` block doesn't exist in c++. It can be emulated with the use of a flag: ``` { bool exception_caught = true; try { // Try block, without the else code: do_stuff_that_might_throw_an_exception(); exception_caught = false; // This needs to be the last statement in the try block } catch (Exception& a) { // Handle the exception or rethrow, but do not touch exception_caught. } // Other catches elided. if (! exception_caught) { // The equivalent of the python else block goes here. do_stuff_only_if_try_block_succeeded(); } } ``` The `do_stuff_only_if_try_block_succeeded()` code is executed only if the try block executes without throwing an exception. Note that in the case that `do_stuff_only_if_try_block_succeeded()` does throw an exception, that exception will not be caught. These two concepts mimic the intent of the python `try ... catch ... else` concept.
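For reference, the Python `try`/`except`/`else` semantics being emulated above: the `else` suite runs only when the `try` block raised nothing, and exceptions raised inside the `else` suite are not caught by the preceding handlers:

```python
def run(x):
    events = []
    try:
        result = 10 / x
    except ZeroDivisionError:
        events.append('caught')
    else:
        events.append('no exception')  # runs only when try succeeded
    return events

print(run(2))  # ['no exception']
print(run(0))  # ['caught']
```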
Why not just put it at the end of the try block?
12,505
59,694,929
I am creating a project where React is not rendering anything on the Django localhost. index.html ``` <!DOCTYPE html> <html lang="en"> <head></head> <body> <div id="App"> <!---all will be defined in App.js--> <h1>Index.html </h1> </div> </body> {% load static%} <script src="{% static "frontend/main.js" %}"></script> </html> ``` app.js ``` import React, { Component } from 'react'; import ReactDOM from 'react-dom'; import Header from './layout/header'; class App extends Component { render() { return ( <h1>App.JS</h1> ) } } ReactDOM.render(<App />, document.getElementById('app')); ``` This is my project structure: [![Project structure](https://i.stack.imgur.com/3GRVF.png)](https://i.stack.imgur.com/3GRVF.png) After running `npm run dev` and `python manage.py runserver`, this is the status; everything is fine till here: [![the status](https://i.stack.imgur.com/W4LNL.png)](https://i.stack.imgur.com/W4LNL.png)
2020/01/11
[ "https://Stackoverflow.com/questions/59694929", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11631248/" ]
Change this source code: ``` document.getElementById('app') ``` ... to this: ``` document.getElementById('App') ```
It's because your element has an id of "App" but you are trying to mount the React app on element 'app'. It's case-sensitive.
12,506
39,760,629
UnitTests has a feature to capture `KeyboardInterrupt`, finishes a test and then report the results. > > **-c, --catch** > > > *Control-C* during the test run waits for the current test to end and then reports all the results so far. A second *Control-C* > raises the normal KeyboardInterrupt exception. > > > See Signal Handling for the functions that provide this functionality. > > > c.f. <https://docs.python.org/2/library/unittest.html#command-line-options> > > > In PyTest, `Ctrl`+`C` will just stop the session. Is there a way to do the same as UniTests: * Capture `KeyboardInterrupt` * Finishes to execute the on-going test [optional] * Skip other tests * Display a result, and potentially print a report (e.g. usage of `pytest-html`) Thanks **[Edit 11 November 2016]** I tried to put the hook in my `conftest.py` file but it does not seem to work and capture. In particular, the following does not write anything in `toto.txt`. ``` def pytest_keyboard_interrupt(excinfo): with open('toto.txt', 'w') as f: f.write("Hello") pytestmark = pytest.mark.skip('Interrupted Test Session') ``` Does anybody has a new suggestion?
2016/09/29
[ "https://Stackoverflow.com/questions/39760629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1603480/" ]
Your issue may lie in the execution ordering of your hook, such that pytest exits prior to your hook being executed. This could happen if an unhandled exception occurs in preexisting handling of the keyboard interrupt. To ensure your hook executes sooner, use `tryfirst` or `hookwrapper` as described [here](https://docs.pytest.org/en/latest/writing_plugins.html#hook-function-ordering-call-example). The following shall be written in **conftest.py** file: ``` import pytest @pytest.hookimpl(tryfirst=True) def pytest_keyboard_interrupt(excinfo): with open('toto.txt', 'w') as f: f.write("Hello") pytestmark = pytest.mark.skip('Interrupted Test Session') ```
Take a look at pytest's [hookspec](http://doc.pytest.org/en/latest/_modules/_pytest/hookspec.html). They have a hook for keyword interrupt. ``` def pytest_keyboard_interrupt(excinfo): """ called for keyboard interrupt. """ ```
12,508
20,748,202
It is widely known that using `eval()` is a potential security risk, so the use of [`ast.literal_eval(node_or_string)`](http://docs.python.org/2/library/ast.html#ast.literal_eval) is promoted. However, in Python 2.7 it returns `ValueError: malformed string` when running this example: ``` >>> ast.literal_eval("4 + 9") ``` Whereas in Python 3.3 this example works as expected: ``` >>> ast.literal_eval('4+9') 13 ``` Why does it run on Python 3 and not Python 2? How can I fix it in Python 2.7 without using the risky `eval()` function?
2013/12/23
[ "https://Stackoverflow.com/questions/20748202", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2425215/" ]
The reason this doesn’t work on Python 2 lies in its implementation of `literal_eval`. The original implementation only performed number evaluation for additions and subtractions when the right-hand operand was a complex number. This is syntactically necessary for complex numbers to be expressed as a literal. This [was changed](https://hg.python.org/cpython/rev/884c71cd8dc6/) in Python 3 so that any kind of valid number expression is supported on either side of the addition or subtraction. However, the use of `literal_eval` is still restricted to additions and subtractions. This is mostly because `literal_eval` is supposed to be a function that turns a single *constant* literal (expressed as a string) into a Python object. Kind of like a backwards `repr` for simple built-in types. Actual expression evaluation is not included, and the fact that this works with Python 3 is just a nice-to-have side effect from its implementation. In order to evaluate actual expressions, without having to use `eval` (which we don’t want to), we can write our own expression evaluation algorithm that operates on the AST. This is pretty simple, especially for simple arithmetic operations on numbers (for example to build your own calculator etc.). We simply parse the string into an AST and then evaluate the resulting tree by looking at the different node types and applying the correct operation. 
Something like this: ``` import ast, operator binOps = { ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul, ast.Div: operator.div, ast.Mod: operator.mod } def arithmeticEval (s): node = ast.parse(s, mode='eval') def _eval(node): if isinstance(node, ast.Expression): return _eval(node.body) elif isinstance(node, ast.Str): return node.s elif isinstance(node, ast.Num): return node.n elif isinstance(node, ast.BinOp): return binOps[type(node.op)](_eval(node.left), _eval(node.right)) else: raise Exception('Unsupported type {}'.format(node)) return _eval(node.body) ``` As you can see, this implementation is pretty straightforward. Of course it does not support more complex stuff like exponentiation and some unary nodes yet, but it’s not too difficult to add that. And it works just fine: ``` >>> arithmeticEval('4+2') 6 >>> arithmeticEval('4*1+2*6/3') 8 ``` You could even introduce more complex things later (for example function calls for things like `sin()`).
Use the source, luke! [`http://hg.python.org/cpython/file/2.7/Lib/ast.py#l40`](http://hg.python.org/cpython/file/2.7/Lib/ast.py#l40) [`http://hg.python.org/cpython/file/3.2/Lib/ast.py#l39`](http://hg.python.org/cpython/file/3.2/Lib/ast.py#l39) You will find your answer in there. Specifically, the 2.7 version has the weird restriction on [line 70](http://hg.python.org/cpython/file/2.7/Lib/ast.py#l70) that the right node of the BinOp is complex. ``` >>> sys.version '2.7.3 (default, Sep 26 2013, 20:03:06) \n[GCC 4.6.3]' >>> ast.literal_eval('9 + 0j') (9 + 0j) >>> ast.literal_eval('0j + 9') ValueError: malformed string ``` I'm guessing that the intention of 2.7 was to allow `literal_eval` of complex literals for example numbers like `9 + 0j`, and it was never intended to do simple integer additions. Then in python 3 they beefed up the `literal_eval` to handle these cases.
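For completeness, a hedged Python 3 port of the same AST-walking idea (assumptions: `operator.div` is replaced by `operator.truediv`, and on Python 3.8+ number literals parse as `ast.Constant` rather than `ast.Num`):

```python
import ast
import operator

BIN_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Mod: operator.mod,
}

def arithmetic_eval(s):
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant):   # numbers (and strings) on 3.8+
            return node.value
        if isinstance(node, ast.BinOp):
            return BIN_OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError('unsupported node: %r' % node)
    return _eval(ast.parse(s, mode='eval'))

print(arithmetic_eval('4+2'))        # 6
print(arithmetic_eval('4*1+2*6/3'))  # 8.0
```

Note that `/` now performs true division, so mixed expressions yield floats.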
12,509
61,385,841
I have a specific question. I have a Lego EV3 and I installed MicroPython. I want to import turtle, tkinter, and other modules, but they aren't in MicroPython. The time module works, though. Does someone know which modules are in EV3 MicroPython? Thanks for the answer.
2020/04/23
[ "https://Stackoverflow.com/questions/61385841", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13045504/" ]
To add bearer token in retrofit, you have to create a class that implements `Interceptor` ``` public class TokenInterceptor implements Interceptor{ @Override public Response intercept(Chain chain) throws IOException { //rewrite the request to add bearer token Request newRequest=chain.request().newBuilder() .header("Authorization","Bearer "+ yourtokenvalue) .build(); return chain.proceed(newRequest); } } ``` Now add your Interceptor class in OKHttpClient object and add that obejct in Retrofit object: ``` TokenInterceptor interceptor=new TokenInterceptor(); OkHttpClient client = new OkHttpClient.Builder() .addInterceptor(interceptor). .build(); Retrofit retrofit = new Retrofit.Builder() .client(client) .baseUrl("add your url here") .addConverterFactory(JacksonConverterFactory.create()) .build(); ```
These three classes will be your final setup for all types of calls. > > For the first call (login) you do not need to pass a token. After login, pass the JWT as a bearer token to authenticate; after authentication you do not need to pass it again. > > > ``` public class ApiUtils { private static final String BASE_URL="https://abcd.abcd.com/"; public ApiUtils() { } public static API getApiService(String token){ return RetrofitClient.getClient(BASE_URL,token).create(API.class); }} ``` 2. Using ApiUtils.getApiService you can get the client; pass the JWT as the bearer token. ``` public class RetrofitClient { public static Retrofit retrofit=null; public static Retrofit getClient(String baseUrl, String token){ HttpLoggingInterceptor interceptor = new HttpLoggingInterceptor(); interceptor.setLevel(HttpLoggingInterceptor.Level.BODY); OkHttpClient client = new OkHttpClient.Builder() .readTimeout(60,TimeUnit.SECONDS) .connectTimeout(60,TimeUnit.SECONDS) .addInterceptor(interceptor) .addInterceptor(new Interceptor() { @NotNull @Override public Response intercept(@NotNull Chain chain) throws IOException { Request request=chain.request().newBuilder() .addHeader("Authorization", "Bearer " + token) .build(); return chain.proceed(request); } }).build(); if(retrofit==null||token!=null){ retrofit= new Retrofit.Builder() .baseUrl(baseUrl) .client(client) .addConverterFactory(GsonConverterFactory.create()) .build(); } return retrofit; }} ``` 3. In this interface you can create methods for GET or POST requests. ``` public interface API { @POST("/Api/Authentication/Login") Call<JsonObject> login(@Body Model userdata); @POST("/api/Authentication/ValidateSession") Call<JsonObject> validateSession(@Body MyToken myToken); @POST("/api/master/abcd") Call<JsonObject> phoneDir(@Body JsonObject jsonObject); @Multipart @POST("/api/dash/UploadProfilePic") Call<JsonObject> uploadProfile(@Part MultipartBody.Part part); @FormUrlEncoded @POST("/api/dashboard/RulesAndPolicies") Call<JsonObject> rulesAndProcess(@Field("ct") int city); 
@FormUrlEncoded @POST("/api/dashboard/RulesAndPolicies") Call<JsonObject> rulesAndProcess( @Field("city") int city, @Field("department") String department, @Field("ctype") String ctype ); ```
12,518
38,775,586
The following python code: ``` # user profile information args = { 'access_token':access_token, 'fields':'id,name', } print 'ACCESSED', urllib.urlopen('https://graph.facebook.com/me', urllib.urlencode(args)).read() ``` Prints the following: *ACCESSED {"success":true}* The token is valid, no error, the fields are valid. Why is it not returning the fields I asked for?
2016/08/04
[ "https://Stackoverflow.com/questions/38775586", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1507649/" ]
Turns out `urllib.urlopen` sends the data as a POST when the data parameter is provided, while the Facebook Graph API expects GET, not POST. Change the call so the function is given only a URL (no data): ``` print 'ACCESSED', urllib.urlopen('https://graph.facebook.com/me/?' + urllib.urlencode(args)).read() ``` And everything works! Sigh, I can see why urllib is being altered in python 3.0...
You have to add a `/` to the URL to get `https://graph.facebook.com/me/` instead of `https://graph.facebook.com/me`. ``` # user profile information args = { 'access_token':access_token, 'fields':'id,name' } print 'ACCESSED', urllib.urlopen('https://graph.facebook.com/me/', urllib.urlencode(args)).read() ``` PS : Try using the `requests` which is much more efficient than `urllib`, here is the doc : <http://docs.python-requests.org/en/master/>
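The same distinction survives in Python 3, where `urlencode` lives in `urllib.parse`: appending the encoded query to the URL keeps the request a GET, while passing it as `data` to `urlopen` turns it into a POST. The query-string part can be seen without any network call (a dummy value stands in for the real token):

```python
from urllib.parse import urlencode

args = {'access_token': 'TOKEN', 'fields': 'id,name'}  # dummy token
query = urlencode(args)
url = 'https://graph.facebook.com/me/?' + query
print(url)
# urlopen(url) would issue a GET; urlopen(url, data=...) would POST.
```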
12,519
73,009,209
I have a pandas dataframe with a text column and was wondering how I can count the number of line breaks. This is how it's done in Excel, and I would like to know how I can achieve this in Python: [How To Count Number Of Lines (Line Breaks) In A Cell In Excel?](https://www.extendoffice.com/documents/excel/4785-excel-count-newlines.html#:%7E:text=This%20formula%20%3DLEN(A2)%2D,0%20for%20a%20blank%20cell.)
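The Excel formula counts the `CHAR(10)` newline characters per cell, which has a direct pandas analogue via `Series.str.count` (a sketch assuming the text column is named `text`):

```python
import pandas as pd

df = pd.DataFrame({'text': ['one line', 'two\nlines', 'a\nb\nc']})
df['line_breaks'] = df['text'].str.count('\n')  # newlines per cell
df['lines'] = df['line_breaks'] + 1             # lines = breaks + 1
print(df)
```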
2022/07/17
[ "https://Stackoverflow.com/questions/73009209", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7297511/" ]
Your approach is slow because you loop over the rows and use intermediate copies. You should be able to use boolean indexing for direct swapping: ``` mask = final['HomeAway'].eq(0) final.loc[mask, 4:124], final.loc[mask, 124:] = final.loc[mask, 124:], final.loc[mask, 4:124] ```
The Data on which you are working is unknown and I have tried to replicate your problem with duplicate data. Change the variables and the indexing values while using it in your project **CODE** ``` import pandas as pd import numpy as np data = pd.DataFrame({"HomeAway": [1, 1, 0, 0, 1], "Value1": [14, 16, 29, 22, 21], "Value2": [8, 14, 24, 14, 19], "Value3": [6, 2, 5, 8, 2], "Value4": [3, 3, 2, 2, 0]}) print("BEFORE") print(data) left = np.asanyarray(data[data["HomeAway"] == 0].iloc[:, 1:3]) right = np.asanyarray(data[data["HomeAway"] == 0].iloc[:, 3:5]) data.iloc[data["HomeAway"] == 0, 1:3] = right data.iloc[data["HomeAway"] == 0, 3:5] = left print("AFTER") print(data) ``` **OUTPUT** ``` BEFORE HomeAway Value1 Value2 Value3 Value4 0 1 14 8 6 3 1 1 16 14 2 3 2 0 29 24 5 2 3 0 22 14 8 2 4 1 21 19 2 0 AFTER HomeAway Value1 Value2 Value3 Value4 0 1 14 8 6 3 1 1 16 14 2 3 2 0 5 2 29 24 3 0 8 2 22 14 4 1 21 19 2 0 ```
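A small, runnable version of the masked swap on a toy frame with named columns (hypothetical column names; `.to_numpy()` is used because assigning one `.loc` selection to another aligns on column labels and would otherwise prevent the swap):

```python
import pandas as pd

df = pd.DataFrame({'HomeAway': [1, 0, 1, 0],
                   'home': [10, 20, 30, 40],
                   'away': [1, 2, 3, 4]})

mask = df['HomeAway'].eq(0)
# Swap the two column blocks only in rows where HomeAway == 0;
# .to_numpy() drops the labels so the assignment is positional.
df.loc[mask, ['home', 'away']] = df.loc[mask, ['away', 'home']].to_numpy()
print(df)
```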
12,520
7,733,200
I'm trying to include an additional urls.py inside my main urls - however it doesn't seem to be working. I've done a bunch of searching and I can't seem to figure it out main urls.py file - the admin works fine ``` from django.conf.urls.defaults import patterns, include, url from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', (r'^pnasser/',include('pnasser.urls')), (r'^admin/',include(admin.site.urls)), (r'^',include('pnasser.urls')), ) ``` I then have a folder pnasser, with the file urls.py with the following: ``` from django.conf.urls.defaults import patterns, include, url urlpatterns = patterns('pnasser.views', (r'^$','index'), (r'^login/$','login'), (r'^signup/$','signup'), (r'^insertaccount/$','insertaccount'), (r'^home/$','home'), (r'^update/(?P<accid>\d+)','update'), (r'^history/(?P<accid>\d+)','account_history'), (r'^logout/(?P<accid>\d+)','logout'), ) ``` I'm not sure if I'm maybe missing something else in the configuration. if I visit mysite.com/admin it loads the admin correctly, if I goto mysite or any other url in the views I get 404 page not found: > > Using the URLconf defined in mysite.urls, Django tried these URL > patterns, in this order: > 1. ^pnasser/ > 2. ^admin/ > > > The current URL, , didn't match any of these. > > > **edit** settings.py installed apps: ``` INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', #'django.contrib.sites', 'django.contrib.messages', 'django.contrib.staticfiles', # Uncomment the next line to enable the admin: 'django.contrib.admin', # Uncomment the next line to enable admin documentation: # 'django.contrib.admindocs', 'pnasser', ) ``` **Update 2** So, I also tried running my site via the dev server: `python manage.py runserver 0.0.0.0:8000` this works. I'm assuming somewhere in my integration with apache using mod\_wsgi is the problem. However, I'm not sure where the problem would be
2011/10/11
[ "https://Stackoverflow.com/questions/7733200", "https://Stackoverflow.com", "https://Stackoverflow.com/users/225600/" ]
``` from django.conf.urls.defaults import patterns, include, url from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', (r'^pnasser/',include('pnasser.urls')), (r'^admin/',include(admin.site.urls)), (r'^',include('pnasser.urls')), ) ``` maybe you missed "include" in the first import line
``` Using the URLconf defined in mysite.urls, Django tried these URL patterns, in this order: ``` This error message should list all possible URLs, including the 'expanded' urls from your pnasser app. Since you're only getting the URLs from your main urls.py, it suggests you haven't properly enabled the `pnasser` app in settings.py's `INSTALLED_APPS`.
12,521
56,093,339
I'm currently trying to write a script that does a specific action on a certain day. So for example, if today is the 6/30/2019 and in my dataframe there is a 6/30/2019 entry, xyz proceeds to happen. However, I am having troubles comparing the date from a dataframe to a DateTime date. Here's how I created the dataframe ``` now = datetime.datetime.now() Test1 = pd.read_excel(r"some path") ``` Heres what the output looks like when I print the dataframe. ``` symbol ... phase 0 MDCO ... Phase 2 1 FTSV ... Phase 1/2 2 NVO ... Phase 2 3 PFE ... PDUFA priority review 4 ATRA ... Phase 1 ``` Heres' how the 'event\_date' column prints out ``` 0 05/18/2019 1 06/30/2019 2 06/30/2019 3 06/11/2019 4 06/29/2019 ``` So I've tried a few things I've seen in other threads. I've tried: ``` if (Test1['event_date'] == now): print('True') ``` That returns ``` ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). ``` I've tried reformatting my the data with: ``` column_A = datetime.strptime(Test1['event_date'], '%m %d %y').date() ``` which returns this ``` TypeError: strptime() argument 1 must be str, not Series ``` I've tried this ``` if any(Test1.loc(Test1['event_date'])) == now: ``` and that returned this ``` TypeError: 'Series' objects are mutable, thus they cannot be hashed ``` I don't know why python is telling me the str is a dataframe, I'm assuming it has something to do with how python exports data from an excel sheet. I'm not sure how to fix this. I simply want python to check if any of the rows have the same `"event_date"` value as the current date and return the index or return a boolean to be used in a loop.
2019/05/11
[ "https://Stackoverflow.com/questions/56093339", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11486279/" ]
``` import datetime import pandas as pd da = str(datetime.datetime.now().date()) # converting the column to datetime, you can check the dtype of the column by doing # df['event_date'].dtypes df['event_date'] = pd.to_datetime(df['event_date']) # generate a df with rows where there is a match df_co = df.loc[df['event_date'] == da] ``` I would suggest doing xy or what is required in a column based on the match in the same column. i.e. `df.loc[df['event_date'] == da,'column name'] = df['x'] + df['y']` Easier than looping.
``` import pandas as pd import time # keep only y,m,d, and throw out the rest: now = (time.strftime("%Y/%m/%d")) # the column in the dataframe needs to be converted to datetime first. df['event_date'] = pd.to_datetime(df['event_date']) # to return indices df[df['event_date']==now].index.values # if you want to loop, you can do: for ix, row in df.iterrows(): if row['event_date'] == now: print(ix) ```
12,531
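A stdlib-only sketch of the `strptime` point raised in the record above: the format string must match the stored `mm/dd/YYYY` text, and parsing has to happen per element rather than on the whole column at once. The sample dates and the fixed "today" value below are illustrative:

```python
from datetime import datetime

# Sample strings in the same mm/dd/YYYY shape as the 'event_date' column.
dates = ["05/18/2019", "06/30/2019", "06/30/2019", "06/11/2019", "06/29/2019"]

# Parse each string individually; strptime() works on one str at a time,
# which is why passing a whole Series to it raises a TypeError.
parsed = [datetime.strptime(d, "%m/%d/%Y").date() for d in dates]

# Fixed "today" for the demo instead of datetime.now().
today = datetime(2019, 6, 30).date()

matching_rows = [i for i, d in enumerate(parsed) if d == today]
print(matching_rows)  # [1, 2]
```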
44,089,727
I have a weekly report that I need to produce. I chose to create it with the openpyxl Python module and send it via mail. When I open the received mail (Outlook), the cells with formulas appear as empty, but when I download the file and open it, the data appears. OS: Fedora 20. Parts of the code: ``` # imported modules from openpyxl ... wb = Workbook() ws = wb.active counter = 3 ws.append(row) for day in data : row = ['']*(len(hosts)*2 +5) row[0] = day.dayDate row[1] ='=SUM(F'+str(counter)+':'+get_column_letter(len(hosts)+5)+str(counter)+\ ')/(COUNT(F'+str(counter)+':'+get_column_letter(len(hosts)+5)+str(counter)+'))' row[2] = '=SUM('+get_column_letter(len(hosts)+6)+str(counter)+':'+\ get_column_letter(len(hosts)*2+5)+str(counter)+')/COUNT('+\ get_column_letter(len(hosts)+6)+str(counter)+':'+\ get_column_letter(len(hosts)*2+5)+str(counter)+')' row[3] = '=MAX('+get_column_letter(len(hosts)+6)+str(counter)+':'+\ get_column_letter(len(hosts)*2+5)+str(counter)+')' row[4] = '=_xlfn.STDEV.P('+get_column_letter(len(hosts)+6)+str(counter)\ +':'+get_column_letter(len(hosts)*2+5)+str(counter)+')' counter += 1 ``` Then I create some charts from the data, etc., and save, then send via mail: ``` wb.save(pathToFile+fileName+'.xlsx') os.system('echo -e "'+msg+'" | mail -s "'+fileName+'" -a '+\ pathToFile+fileName+'.xlsx -r '+myUsr+' '+ppl2send2) ``` Those are parts of the actual code. Does anyone have an idea why the email doesn't show the results of the formulas in the cells? Thanks in advance :) [![enter image description here](https://i.stack.imgur.com/iZ7IQ.png)](https://i.stack.imgur.com/iZ7IQ.png)
2017/05/20
[ "https://Stackoverflow.com/questions/44089727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6424190/" ]
Unfortunately not. There is no language support for what you want. Let me be specific about what you want just so that you understand what I answered. Your question is basically this: Given that I have *two* instances of an object, and I have properties in this object that have a private setter, is there any language support for ensuring that instance #1 cannot change this private information of instance #2? And the answer is no. This will compile and "work": ``` public class Test { public void TestIt(Test t) { t.Value = 42; } public int Value { get; private set; } } ... var t1 = new Test(); var t2 = new Test(); t1.TestIt(t2); // will "happily" change t2.Value ``` Basically, the onus is on *you* to make sure this doesn't happen if you don't want it to happen. There is no language or runtime support to prevent this. The access modifiers you can use are: * `public`: Anyone can access this * `private`: Only the type can access this * `protected`: Only the type, or a descendant of the type, can access this * `internal`: Any type *in the same assembly* can access this * `internal protected`: Any type *in the same assembly* **or** a descendant, can access this Other than this, you have no other options. So "only the same **instance** can access this" does not exist as an access modifier.
That won't work. If `Pos` is a property with a private setter (as it is) the only way they could change it would be by calling a public method from within `otherPlayer`. Something like `otherPlayer.SetPos(new Vector2(34,151))`, where `SetPos()` is: ``` public void SetPos(Vector2 NewPos) { Pos = NewPos; } ```
12,532
32,496,664
**What is the pythonic way to set a maximum length parameter?** Let's say I want to restrict a list of strings to a certain maximum size: ``` >>> x = ['foo', 'bar', 'a', 'rushmoreorless', 'kilimangogo'] >>> maxlen = 3 >>> [i for i in x if len(i) <= maxlen] ['foo', 'bar', 'a'] ``` And I want to functionalize it and allow a different maxlen, but if no maxlen is given, it should return the full list: ``` def func1(alist, maxlen): return [i for i in alist if len(i) <= maxlen] ``` And I want to set the maxlen to the max length of the elements in alist, so I tried, but I am torn between using `None`, `0` or `-1`. If I use `None`, it would be hard to cast the type later on: ``` def func1(alist, maxlen=None): if maxlen == None: maxlen = max(alist, key=len) return [i for i in alist if len(i) <= maxlen] ``` And I might get this as the function builds on: ``` >>> maxlen = None >>> int(None) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: int() argument must be a string or a number, not 'NoneType' >>> type(maxlen) <type 'NoneType'> ``` If I use -1, it sort of resolves the `int(maxlen)` issues, but it's also sort of weird since length should never be negative. ``` def func1(alist, maxlen=-1): if maxlen < 0: maxlen = max(alist, key=len) return [i for i in alist if len(i) <= maxlen] ``` If I use 0, I face the problem when I really need to return an empty list and set `maxlen=0`, but then again, if I really need an empty list then I don't even need a function to create one. ``` def func1(alist, maxlen=0): if maxlen == 0: maxlen = max(alist, key=len) return [i for i in alist if len(i) <= maxlen] ``` (Note: the ultimate task is NOT to filter a list, i.e. `(filter(lambda k: len(k) <= maxlen, lst))`, but it's an example to ask about the pythonic way to set an integer variable that sets a maximum)
2015/09/10
[ "https://Stackoverflow.com/questions/32496664", "https://Stackoverflow.com", "https://Stackoverflow.com/users/610569/" ]
> > Let's say I want to restrict a list of strings to a certain maximum > size: > > > And I want to functionalize it and allow different maxlen but if no > maxlen is given, it should return the full list: > > > And I want to set the maxlen to the max length of element in alist > > > To address all these requests, the best answer I can think of is something like this... ``` def func(lst, ln=None): if not ln: return lst return [i for i in lst if len(i) <= ln] ``` --- **edit:** If it is important to handle negative maximum length (who knows why) or 0 length, then a function like this can be used. (Though I am against duck-typing) ``` def func(lst, ln=None): if ln is None: return lst elif ln < 1: # handles 0, and negative values. return [] else: return [i for i in lst if len(i) <= ln] ```
How about the following approach, which avoids the need to use `max`: ``` def filter_length(a_list, max_length=None): if max_length == 0: return [] elif max_length: return [i for i in a_list if len(i) <= max_length] else: return a_list x = ['foo', 'bar', 'a', 'rushmoreorless', 'kilimangogo'] print filter_length(x, 3) print filter_length(x) print filter_length(x, 0) ``` Giving you the output: ``` ['foo', 'bar', 'a'] ['foo', 'bar', 'a', 'rushmoreorless', 'kilimangogo'] [] ```
12,535
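A minimal, stdlib-only restatement of the `None`-sentinel pattern both answers above converge on (the function name is illustrative): `None` unambiguously means "no limit", while `0` stays a legitimate value that filters everything out:

```python
def filter_by_length(items, maxlen=None):
    """Return items no longer than maxlen; maxlen=None means no limit."""
    if maxlen is None:
        return list(items)
    # Comparing with `is None` above means maxlen=0 still reaches this branch.
    return [i for i in items if len(i) <= maxlen]

x = ['foo', 'bar', 'a', 'rushmoreorless', 'kilimangogo']
print(filter_by_length(x, 3))   # ['foo', 'bar', 'a']
print(filter_by_length(x))      # the full list
print(filter_by_length(x, 0))   # []
```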
28,894,756
I have installed python 2.7, numpy 1.9.0, scipy 0.15.1 and scikit-learn 0.15.2. Now when I do the following in python: ``` train_set = ("The sky is blue.", "The sun is bright.") test_set = ("The sun in the sky is bright.", "We can see the shining sun, the bright sun.") from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer() print vectorizer CountVectorizer(analyzer=u'word', binary=False, charset=None, charset_error=None, decode_error=u'strict', dtype=<type 'numpy.int64'>, encoding=u'utf-8', input=u'content', lowercase=True, max_df=1.0, max_features=None, min_df=1, ngram_range=(1, 1), preprocessor=None, stop_words=None, strip_accents=None, token_pattern=u'(?u)\\b\\w\\w+\\b', tokenizer=None, vocabulary=None) vectorizer.fit_transform(train_set) print vectorizer.vocabulary None. ``` Actually it should have printed the following: ``` CountVectorizer(analyzer__min_n=1, analyzer__stop_words=set(['all', 'six', 'less', 'being', 'indeed', 'over', 'move', 'anyway', 'four', 'not', 'own', 'through', 'yourselves', (...) ---> For count vectorizer {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3} ---> for vocabulary ``` The above code are from the blog: <http://blog.christianperone.com/?p=1589> Could you please help me as to why I get such an error. Since the vocabulary is not indexed properly I am not able to move ahead in understanding the concept of TF-IDF. I am a newbie for python so any help would be appreciated. Arc.
2015/03/06
[ "https://Stackoverflow.com/questions/28894756", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4193051/" ]
You are missing an underscore, try this way: ``` from sklearn.feature_extraction.text import CountVectorizer train_set = ("The sky is blue.", "The sun is bright.") test_set = ("The sun in the sky is bright.", "We can see the shining sun, the bright sun.") vectorizer = CountVectorizer(stop_words='english') document_term_matrix = vectorizer.fit_transform(train_set) print vectorizer.vocabulary_ # {u'blue': 0, u'sun': 3, u'bright': 1, u'sky': 2} ``` If you use the ipython shell, you can use tab completion, and you can find easier the methods and attributes of objects.
Try using the `vectorizer.get_feature_names()` method. It gives the column names in the order it appears in the `document_term_matrix`. ``` from sklearn.feature_extraction.text import CountVectorizer train_set = ("The sky is blue.", "The sun is bright.") test_set = ("The sun in the sky is bright.", "We can see the shining sun, the bright sun.") vectorizer = CountVectorizer(stop_words='english') document_term_matrix = vectorizer.fit_transform(train_set) vectorizer.get_feature_names() #> ['blue', 'bright', 'sky', 'sun'] ```
12,538
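For intuition about what `vectorizer.vocabulary_` holds in the record above, here is a rough stdlib-only sketch (not sklearn's actual implementation): tokens are extracted with the same default token pattern, a toy subset of stop words is dropped, and the surviving tokens are mapped to column indices in sorted order:

```python
import re

def build_vocabulary(docs, stop_words=("the", "is", "in", "we", "can")):
    """Toy approximation of CountVectorizer's vocabulary_ mapping.

    The stop_words default here is only a small illustrative subset of
    sklearn's built-in English stop-word list.
    """
    tokens = set()
    for doc in docs:
        # Same default token pattern as CountVectorizer: words of 2+ chars.
        for tok in re.findall(r"(?u)\b\w\w+\b", doc.lower()):
            if tok not in stop_words:
                tokens.add(tok)
    return {tok: idx for idx, tok in enumerate(sorted(tokens))}

train_set = ("The sky is blue.", "The sun is bright.")
print(build_vocabulary(train_set))
# {'blue': 0, 'bright': 1, 'sky': 2, 'sun': 3}
```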
48,671,331
I am implementing a sign-up using Python & MySQL. I am getting the error "no module named flask.ext.mysql", and research implies that I should install Flask first. They say it's very simple: you simply type `pip install flask-mysql`. But where do I type this? In the MySQL command line for my database, or in the Python app?
2018/02/07
[ "https://Stackoverflow.com/questions/48671331", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8857901/" ]
Pip is used from the command line. If you are on a Linux/Mac machine, type it from the Terminal. Make sure you actually have Pip. If you don't, use this command (on the Terminal) on Linux: ``` sudo apt-get install python-pip ``` If you are on a Mac, use (in the Terminal): ``` /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" ``` then: ``` brew install python ``` (Homebrew's Python comes with pip bundled.) After doing that on Mac/Linux, you can go ahead and execute the command: ``` pip install flask-mysql ``` If you are on a Windows machine, read this instead: [How do I install pip on Windows?](https://stackoverflow.com/questions/4750806/how-do-i-install-pip-on-windows) Hope this helped!
You should be able to type it in the command line for your operating system (ie. CMD/bash/terminal) as long as you have pip installed and the executable location is in your PATH.
12,539
41,708,881
I used pip today for the first time in a while and I got the helpful message > > You are using pip version 8.1.1, however version 9.0.1 is available. > You should consider upgrading via the 'pip install --upgrade pip' command. > > > So, I went ahead and ``` pip install --upgrade pip ``` but things did not go according to plan... ``` Collecting pip Downloading pip-9.0.1-py2.py3-none-any.whl (1.3MB) 100% |████████████████████████████████| 1.3MB 510kB/s Installing collected packages: pip Found existing installation: pip 8.1.1 Uninstalling pip-8.1.1: Exception: Traceback (most recent call last): File "//anaconda/lib/python2.7/site-packages/pip/basecommand.py", line 209, in main status = self.run(options, args) File "//anaconda/lib/python2.7/site-packages/pip/commands/install.py", line 317, in run prefix=options.prefix_path, File "//anaconda/lib/python2.7/site-packages/pip/req/req_set.py", line 726, in install requirement.uninstall(auto_confirm=True) File "//anaconda/lib/python2.7/site-packages/pip/req/req_install.py", line 746, in uninstall paths_to_remove.remove(auto_confirm) File "//anaconda/lib/python2.7/site-packages/pip/req/req_uninstall.py", line 115, in remove renames(path, new_path) File "//anaconda/lib/python2.7/site-packages/pip/utils/__init__.py", line 267, in renames shutil.move(old, new) File "//anaconda/lib/python2.7/shutil.py", line 303, in move os.unlink(src) OSError: [Errno 13] Permission denied: '/anaconda/lib/python2.7/site-packages/pip-8.1.1.dist-info/DESCRIPTION.rst' You are using pip version 8.1.1, however version 9.0.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. ``` And now it seems that pip is completely gone from my computer: ``` $ pip -bash: //anaconda/bin/pip: No such file or directory ``` Is pip really gone, that is, did it really uninstall and then fail to reinstall, or did something just get unlinked? How can I avoid this issue in the future? 
Because I can imagine I will need to upgrade pip again at some point...
2017/01/17
[ "https://Stackoverflow.com/questions/41708881", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3704831/" ]
You can reinstall `pip` with `conda`: ``` conda install pip ``` Looks like you need to have root rights: ``` sudo conda install pip ```
You can use curl to reinstall pip via the Python Packaging Authority website: ``` curl https://bootstrap.pypa.io/get-pip.py | python ```
12,540
66,963,342
How can I change the output of the `models.ForeignKey` field in my below custom field? Custom field: ```py class BetterForeignKey(models.ForeignKey): def to_python(self, value): print('to_python', value) return { 'id': value.id, 'name_fa': value.name_fa, 'name_en': value.name_en, } def get_db_prep_value(self, value, connection, prepared=False): print('get_db_prep_value') return super().get_db_prep_value(value, connection, prepared) def get_prep_value(self, value): print('get_prep_value') return super().get_prep_value(value) ``` And used in the below model: ```py class A(models.Model): ... job_title = BetterForeignKey(JobTitle, on_delete=models.CASCADE) ``` I want to change the output of the below `print(a.job_title)` statement: ```py >>> a = A.objects.filter(job_title__isnull=False).last() get_db_prep_value get_prep_value >>> print(a.job_title) Developer ```
2021/04/06
[ "https://Stackoverflow.com/questions/66963342", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7431943/" ]
None of the above worked for me, but it turns out the solution was quite simple... All I was doing wrong was not explicitly including "null" as the parameter in the useRef initialization (it expects null, not undefined). Also, you CANNOT use "HTMLElement" as your ref type; you have to be more specific, so for me it was "HTMLDivElement", for example. So working code for me was something like this: ``` const ref = useRef<HTMLDivElement>(null); return <div ref={ref}> Some Content... </div> ```
The same stands for the `<svg>` elements: ``` const ref = useRef<SVGSVGElement>(null) ... <svg ref={ref} /> ```
12,543
14,160,686
I'm writing a python (3.2+) plugin library and I want to create a function which will create some variables automatically handled from config files. The use case is as follows (class variable): ``` class X: option(y=0) def __init__(self): pass ``` (instance variable): ``` class Y: def __init__(self): option(y=0) ``` The option draft code is as follows: ``` def option(**kwargs): frame = inspect.stack()[1][0] locals_ = frame.f_locals if locals_ == frame.f_globals: raise SyntaxError('option() can only be used in a class definition') if '__module__' in locals_: # TODO else: for name, value in kwargs.items(): if not name in locals_["self"].__class__.__dict__: setattr(locals_["self"].__class__, name, VirtualOption('_'+name, static=False)) setattr(locals_["self"], '_'+name,value) ``` I have a problem with the first case, when option is declared as a class variable. Is it possible to somehow get a reference to the class in which this function was used (in the example, to class X)?
2013/01/04
[ "https://Stackoverflow.com/questions/14160686", "https://Stackoverflow.com", "https://Stackoverflow.com/users/889902/" ]
You cannot get a reference to the class, because the class has yet to be created. Your parent frame points a temporary function, whose `locals()` when it completes will be used as the class body. As such, all you need to do is add your variables to the parent frame locals, and these will be added to the class when class construction is finished. Short demo: ``` >>> def foo(): ... import sys ... flocals = sys._getframe(1).f_locals ... flocals['ham'] = 'eggs' ... >>> class Bar: ... foo() ... >>> dir(Bar) ['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__le__', '__locals__', '__lt__', '__module__', '__ne__', '__new__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'ham'] >>> Bar.ham 'eggs' ```
It seems to me that a metaclass would be suitable here: **python2.x syntax** ``` def class_maker(name,bases,dict_): dict_['y']=0 return type(name,bases,dict_) class X(object): __metaclass__ = class_maker def __init__(self): pass print X.y foo = X() print foo.y ``` **python3.x syntax** It seems that python3 uses a `metaclass` keyword in the class definition: ``` def class_maker(name,bases,dict_): dict_['y']=0 return type(name,bases,dict_) class X(metaclass=class_maker): def __init__(self): pass print( X.y ) foo = X() print( foo.y ) print( type(foo) ) ``` Or, more along the lines of what you have in your question: ``` def class_maker(name,bases,dict_,**kwargs): dict_.update(kwargs) return type(name,bases,dict_) class X(metaclass=lambda *args: class_maker(*args,y=0)): def __init__(self): pass print( X.y ) foo = X() print( foo.y ) print( type(foo) ) ```
12,553
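A runnable condensation of the technique from the first answer above: because the class body's namespace exists as a plain mapping before the class object is created, writing into the caller's frame locals adds attributes to the class under construction. This relies on CPython implementation details (`sys._getframe`, a mutable class-body namespace), so treat it as a sketch rather than portable code:

```python
import sys

def option(**kwargs):
    # Write the keyword arguments into the namespace of the class body
    # currently being executed (the caller's frame locals).
    sys._getframe(1).f_locals.update(kwargs)

class X:
    option(y=0)

print(X.y)  # 0
```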
8,301,962
I am trying to rewrite the code described [here](http://opencv.itseez.com/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography) using the Python API for OpenCV. Step 3 of the code has these lines: ``` FlannBasedMatcher matcher; std::vector< DMatch > matches; matcher.match( descriptors_object, descriptors_scene, matches ); ``` I have looked over and over in [the OpenCV reference](http://opencv.itseez.com/index.html) but found nothing related to a FlannBasedMatcher in Python or some other object which can do the work. Any ideas? NOTE: I am using OpenCV 2.3.1 and Python 2.6
2011/11/28
[ "https://Stackoverflow.com/questions/8301962", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1053925/" ]
Looking in the examples provided by OpenCV 2.3.1 under the python2 folder, I found an implementation of a FLANN-based match function which doesn't rely on the FlannBasedMatcher object. Here is the code: ``` FLANN_INDEX_KDTREE = 1 # bug: flann enums are missing flann_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 4) def match_flann(desc1, desc2, r_threshold = 0.6): flann = cv2.flann_Index(desc2, flann_params) idx2, dist = flann.knnSearch(desc1, 2, params = {}) # bug: need to provide empty dict mask = dist[:,0] / dist[:,1] < r_threshold idx1 = np.arange(len(desc1)) pairs = np.int32( zip(idx1, idx2[:,0]) ) return pairs[mask] ```
Pythonic FlannBasedMatcher is already available in OpenCV trunk, but if I remember correctly, it was added after 2.3.1 release. Here is OpenCV sample using FlannBasedMatcher: <http://code.opencv.org/projects/opencv/repository/revisions/master/entry/samples/python2/feature_homography.py>
12,554
31,483,448
I have a python script which I want to start using a rc(8) script in FreeBSD. The python script uses the `#!/usr/bin/env python2` pattern for portability purposes. (Different \*nix's put interpreter binaries in different locations on the filesystem). The FreeBSD rc scripts will not work with this. Here is a script that sets up a test scenario that demonstrates this: ``` #!/bin/sh # Create dummy python script which uses env for shebang. cat << EOF > /usr/local/bin/foo #!/usr/bin/env python2.7 print("Hello foo") EOF # turn on executable bit chmod +x /usr/local/bin/foo # create FreeBSD rc script with command_interpreter specified. cat << EOF > /usr/local/etc/rc.d/foo #!/bin/sh # # PROVIDE: foo . /etc/rc.subr name="foo" rcvar=foo_enable command_interpreter="/usr/local/bin/python2.7" command="/usr/local/bin/foo" load_rc_config \$name run_rc_command \$1 EOF # turn on executable bit chmod +x /usr/local/etc/rc.d/foo # enable foo echo "foo_enable=\"YES\"" >> /etc/rc.conf ``` Here follows a console log demonstrating the behaviour when executing the rc script directly. Note this works, but emits a warning. ``` # /usr/local/etc/rc.d/foo start /usr/local/etc/rc.d/foo: WARNING: $command_interpreter /usr/local/bin/python2 != python2 Starting foo. Hello foo # ``` Here follows a console log demonstrating the behaviour when executing the rc script using the service(8) command. This fails completely. ``` # service foo start /usr/local/etc/rc.d/foo: WARNING: $command_interpreter /usr/local/bin/python2 != python2 Starting foo. env: python2: No such file or directory /usr/local/etc/rc.d/foo: WARNING: failed to start foo # ``` Why does the `service foo start` fail? Why does rc warn about the interpreter? Why does it not use the interpreter as specified in the `command_interpreter` variable?
2015/07/17
[ "https://Stackoverflow.com/questions/31483448", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1183499/" ]
*Self answered my question, but I'm hoping someone else will give a better answer for posterity* The reason why env(1) does not work is because it expects an environment in the first place, but rc scripts run before the environment is set up. Hence it fails. It seems that the popular env shebang pattern is actually an anti-pattern. I do not have a cogent answer for the `command_interpreter` warning.
The command interpreter warning is generated by the `_find_processes()` function in `/usr/src/etc/rc.subr`. The reason that it does that is because a service written in an interpreted language is found in `ps` output by the *name of the interpreter*.
12,557
6,648,394
I have a project and I want to use Python, but the server only runs Windows Server 2000. Can Python run on this system?
2011/07/11
[ "https://Stackoverflow.com/questions/6648394", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1347816/" ]
If you are using windows 2000 then it's possible that python 3.2 is not your best alternative. A couple of months ago there was an interesting thread in the python-dev mailing list[1] about dropping win2k support (there are some annoying bugs for this platform). [1] <http://mail.python.org/pipermail/python-dev/2010-March/098074.html>
You can use [Micro Python](https://micropython.org/) for DOS. It has Python 3.4 syntax.
12,558
5,189,483
If my question is unclear, there is a great explanation of what I'm attempting to do under the section "Method 2: The British Method": <http://www.gradeamathhelp.com/how-to-factor-polynomials.html> My current program simply inputs all 3 variables A, B, and C, and then assigns A\*C to D. I then take the negative of the absolute value of D and assign it to X and Y. I then simply use if/then statements to test if X+Y=B and X\*Y=D, and if not, add .5 to X until it is equal to or greater than D, at which point I put X back to its original value and add .5 to Y. This resulted in a memory error. Disregarding the awful, AWFUL habit I made of using if/then statements, does anyone have a better idea of how I can solve this? (And cut me some slack, I only dabble around in Java and Python and sometimes TI-BASIC, and I'm only a sophomore in high school!) Note: This code won't run because I'm recreating it; it's not the actual code, just a recreation. Syntax is all bugged up. (i.e., -> is an arrow, not a negative equal sign.) I just wrote this, so I might have forgotten something. ``` :Prompt A :Prompt B :Prompt C :A*C→D :-abs(D)→X :-abs(D)→Y :Lbl A :If X+Y=B and X*Y=D :Then :Disp X,Y :Pause :Else :X+.5→X :Goto B :Lbl B :If X>D :-abs(D)→X :Y+.5→Y :Goto A ```
2011/03/04
[ "https://Stackoverflow.com/questions/5189483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Your code isn't working because you need "END" commands every time you use a "THEN" command. END is also used to close off "REPEAT", "FOR", and "WHILE" loops. Why does it need "END" for an "IF,THEN" type command? Because "IF,THEN" equates to: ``` If X+Y=B and X*Y=D ( ``` In typical scripting, the "END" is like a closing parenthesis, and is needed because the "THEN" is like an opening parenthesis. If you only have one command that needs to be executed, do something like ``` If X+Y=B and X*Y=D do this ``` Which is like ``` If X+Y=B and X*Y=D do this ``` I'm not sure on this because I'm not sure exactly what you want to do, but I think the code is: ``` :Prompt A :Prompt B :Prompt C :A*C -> D :-abs(D) -> X :-abs(D) -> Y :lbl A :if X+Y=B and X*Y=D :then :disp X,Y :pause :else :X+.5 -> X :goto B :end :lbl B :if X>D :-abs(D) -> X :Y+.5->Y :goto A ``` Where if X+Y=B and X\*Y=D, it will display the correct answer, but if not it will add 0.5 to X, and proceed to lbl B. Within lbl B, it will check if X>D; if X is greater than D then the negative absolute value of D will be stored as X, Y will be increased by 0.5, and it will recheck the values again in lbl A. Also, a note on lbl B: ``` :if X>D :-abs(D) -> X :Y+.5->Y :goto A ``` will store the negative absolute value of D as X only if X>D. Y will be increased by 0.5 whether X is greater than or less than D. Not sure if that's what you intended, but just in case I figured I'd bring it to your attention.
Download available at: <http://www.ticalc.org/pub/83/basic/math/algebra/>
12,559
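The integer search that the TI-BASIC loop in the record above attempts (find X and Y with X+Y=B and X\*Y=A\*C, per the "British method") can be sketched in Python. The function name and the divisor-based search are illustrative, not a translation of the calculator code:

```python
def ac_factor_pair(a, b, c):
    """Return (x, y) with x*y == a*c and x + y == b, or None if no integer pair exists."""
    d = a * c
    for x in range(-abs(d), abs(d) + 1):
        if x == 0:
            continue
        if d % x == 0:  # only divisors of d can multiply back to d
            y = d // x
            if x + y == b:
                return x, y
    return None

print(ac_factor_pair(2, 7, 3))  # (1, 6): splits the middle term of 2x^2 + 7x + 3
print(ac_factor_pair(1, 5, 6))  # (2, 3): splits the middle term of x^2 + 5x + 6
```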
16,847,597
I am new to python and programming, so apologies in advance. I know of remove(), append(), len(), and rand.rang (or whatever it is), and I believe I would need those tools, but it's not clear to me *how* to code it. What I would like to do is, while looping or otherwise accessing List\_A, randomly select an index within List\_A, remove the selected\_index from List\_A, and then append() the selected\_index to List\_B. I would like to randomly remove only up to a *certain percentage* (or real number if this is impossible) of items from List A. Any ideas?? Is what I'm describing possible?
2013/05/30
[ "https://Stackoverflow.com/questions/16847597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2058922/" ]
If you don't care about the order of the input list, I'd shuffle it, then remove `n` items from that list, adding those to the other list: ``` from random import shuffle def remove_percentage(list_a, percentage): shuffle(list_a) count = int(len(list_a) * percentage) if not count: return [] # edge case, no elements removed list_a[-count:], list_b = [], list_a[-count:] return list_b ``` where `percentage` is a float value between `0.0` and `1.0`. Demo: ``` >>> list_a = range(100) >>> list_b = remove_percentage(list_a, 0.25) >>> len(list_a), len(list_b) (75, 25) >>> list_b [1, 94, 13, 81, 23, 84, 41, 92, 74, 82, 42, 28, 75, 33, 35, 62, 2, 58, 90, 52, 96, 68, 72, 73, 47] ```
If you can find a random index `i` of some element in `listA`, then you can easily move it from A to B using: ``` listB.append(listA.pop(i)) ```
12,561
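A sketch combining the ideas from both answers above into the original move-by-percentage phrasing (the function name is illustrative): pick a random index, `pop` it from list A, and `append` it to list B, repeating for the requested fraction of A's original size:

```python
import random

def move_random_fraction(list_a, list_b, fraction):
    """Move roughly `fraction` (0.0-1.0) of list_a's items into list_b, chosen at random."""
    count = int(len(list_a) * fraction)
    for _ in range(count):
        i = random.randrange(len(list_a))   # random valid index into the shrinking list
        list_b.append(list_a.pop(i))        # remove from A, append to B
    return list_b

a = list(range(100))
b = []
move_random_fraction(a, b, 0.25)
print(len(a), len(b))  # 75 25
```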
62,999,056
This python 3 code does exactly what I want ```py from pathlib import Path def minify(src_dir:Path, dest_dir:Path, n: int): """Write first n lines of each file f in src_dir to dest_dir/f""" dest_dir.mkdir(exist_ok=True) for path in src_dir.iterdir(): new = [x.rstrip() for x in list(path.open().readlines())][:n] dest_path = dest_dir.joinpath(path.name) dest_path.open('w').write('\n'.join(new)) ``` Is there a way to do the same in bash, maybe with `xargs`? ``` ls src_dist/* | xargs head -10 ``` displays what I need, but I don't know how to route that output to the proper file.
2020/07/20
[ "https://Stackoverflow.com/questions/62999056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4381942/" ]
Looks like you are using Bootstrap. Currently, your left nav is initially set to `position: fixed`, I recommend using `position: relative` to your left nav initially so that positioning your `nav` elements can be **relative to the height of the background image**. Using Bootstrap, this solution wraps the left nav & the content in a flex container so that the content can be positioned relative to this container easily, since later on the navs are going to get `position: fixed`. Basically on the script, just detect if the top of the scroll bar's Y position is already past the `height` of the background image element. If it is, assign the `fixed` position to the nav elements and adjust the content's position as needed relative to the container wrapping the left nav & the content. Check the CSS properties involving `.navs-are-fixed` to see how the navs are assigned the `fixed` position. ```js $(window).scroll(function() { // innerHeight is used to include any padding var bgImgHeight = $('.some-bg-img').innerHeight(); // if the the scroll reaches the end of the background image, this is when you start to assign 'fixed' to your nav elements if ($(window).scrollTop() >= bgImgHeight) { $('body').addClass("navs-are-fixed"); } else { $('body').removeClass("navs-are-fixed"); } }); ``` ```css #menu_izq { height: 100%; width: 200px; position: relative; left: 0; top: 0; z-index: 3; background-color: #503522; overflow-x: hidden; padding-top: 20px; } #menu_arriba { background-color: #503522; width: 100%; z-index: 1; position: relative; top: 0; } body { background-color: #ffc75a !important; height: 300vh; /* sample arbitrary value to force body scrolling */ } .some-bg-img { background: url(https://via.placeholder.com/1920x200.png?text=Sample%20Background%20Image); height: 200px; background-repeat: no-repeat; background-size: cover; background-position: bottom; } .navs-are-fixed #menu_arriba { position: fixed; } .navs-are-fixed #menu_izq { position: fixed; top: 72px; /* the height 
of your top nav */ } .navs-are-fixed .some-sample-content { position: absolute; top: 72px; /* the height of your top nav */ left: 200px; /* the width of your left nav */ } ``` ```html <script src="https://code.jquery.com/jquery-3.3.1.min.js" integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=" crossorigin="anonymous"></script> <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/css/bootstrap.min.css" integrity="sha384-9aIt2nRpC12Uk9gS9baDl411NQApFmC26EwAOH8WgZl5MYYxFfc+NcPb1dKGj7Sk" crossorigin="anonymous"> <body> <div class="some-bg-img"></div> <nav class="navbar navbar-expand-lg navbar-light bg-light" id="menu_arriba"> <div class="navbar-brand"></div> <a href="user.php" class="navbar-brand"> <p>Inicio</p> </a> <a href="instrucciones.php" class="navbar-brand"> <p>Instrucciones</p> </a> <a href="contacto.php" class="navbar-brand"> <p>Contacto</p> </a> <a href="faq.php" class="navbar-brand"> <p>FaQ</p> </a> <a href="ajax/logout.php" class="navbar-brand"> <p><i class="fas fa-sign-out-alt"></i> Salir</p> </a> </nav> <div class="d-flex h-100 w-100 position-absolute"> <nav id="menu_izq" class="sidenav"> <div></div> <a href="nueva_cata.php"> <p>Nueva Cata</p> </a> <a href="nueva_cerveza.php"> <p>Nueva Cerveza</p> </a> <a href="cata.php"> <p>Mis catas</p> </a> <a href="mis_cervezas.php"> <p>Mis cervezas</p> </a> <a href="mis_amigos.php"> <p>Mis amigos</p> </a> </nav> <div class="some-sample-content"> <p>Lorem Ipsum</p> <p>Lorem Ipsum</p> <p>Lorem Ipsum</p> <p>Lorem Ipsum</p> <p>Lorem Ipsum</p> <p>Lorem Ipsum</p> <p>Lorem Ipsum</p> <p>Lorem Ipsum</p> <p>Lorem Ipsum</p> <p>Lorem Ipsum</p> <p>Lorem Ipsum</p> <p>Lorem Ipsum</p> <p>Lorem Ipsum</p> </div> </div> </body> ``` If you really must keep your `fixed` positioning on your left nav, then you are going to have to compute it's `top` value based on on the `height` of both the banner & the top nav, such that if the `top` of scroll bar's Y position is past the `height` of the banner, the 
`top` value of the left nav will be equal to the height of the top nav - so you push it down so that they don't overlap. If the `top` of the scroll bar's Y position is not past the height of the banner, the top value of the left nav is going to be equal to the difference of the `height` of the banner & the top nav minus the top of the scroll bar's Y position. ```js $(window).scroll(function() { // innerHeight is used to include any padding var bgImgHeight = $('.some-bg-img').innerHeight(); var topNavHeight = $('#menu_arriba').innerHeight(); var leftNavInitialCssTop = bgImgHeight + topNavHeight; // if the the scroll reaches the end of the background image, this is when you start to assign 'fixed' to the top nav if ($(window).scrollTop() >= bgImgHeight) { $('body').addClass("navs-are-fixed"); $("#menu_izq").css("top", topNavHeight); } else { $('body').removeClass("navs-are-fixed"); $("#menu_izq").css("top", leftNavInitialCssTop - $(window).scrollTop()) } }); ``` ```css #menu_izq { height: 100%; width: 200px; position: fixed; left: 0; top: 252px; z-index: 3; background-color: #503522; overflow-x: hidden; padding-top: 20px; } #menu_arriba { background-color: #503522; width: 100%; z-index: 1; top: 0; } body { background-color: #ffc75a !important; height: 400vh; /* sample arbitrary value to force body scrolling */ } .some-bg-img { background: url(https://via.placeholder.com/1920x200.png?text=Sample%20Background%20Image); height: 179px; background-repeat: no-repeat; background-size: cover; background-position: bottom; } .navs-are-fixed #menu_arriba { position: fixed; } ``` ```html <script src="https://code.jquery.com/jquery-3.3.1.min.js" integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=" crossorigin="anonymous"></script> <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/css/bootstrap.min.css" integrity="sha384-9aIt2nRpC12Uk9gS9baDl411NQApFmC26EwAOH8WgZl5MYYxFfc+NcPb1dKGj7Sk" crossorigin="anonymous"> <div 
class="some-bg-img"></div> <nav class="navbar navbar-expand-lg navbar-light bg-light" id="menu_arriba"> <div class="navbar-brand"></div> <a href="user.php" class="navbar-brand"> <p>Inicio</p> </a> <a href="instrucciones.php" class="navbar-brand"> <p>Instrucciones</p> </a> <a href="contacto.php" class="navbar-brand"> <p>Contacto</p> </a> <a href="faq.php" class="navbar-brand"> <p>FaQ</p> </a> <a href="ajax/logout.php" class="navbar-brand"> <p><i class="fas fa-sign-out-alt"></i> Salir</p> </a> </nav> <nav id="menu_izq" class="sidenav"> <div></div> <a href="nueva_cata.php"> <p>Nueva Cata</p> </a> <a href="nueva_cerveza.php"> <p>Nueva Cerveza</p> </a> <a href="cata.php"> <p>Mis catas</p> </a> <a href="mis_cervezas.php"> <p>Mis cervezas</p> </a> <a href="mis_amigos.php"> <p>Mis amigos</p> </a> </nav> ```
You should be able to get that working with just CSS and no JavaScript using the `position: sticky` value. Make both elements `position: sticky`; the top nav should have a `top: 0` property and the side nav a `top: x` property, where `x` is the height of the top nav. That should be enough, and you should be able to remove the JS code. Read more about `sticky` positioning here: <https://developer.mozilla.org/en-US/docs/Web/CSS/position>
12,564
58,525,753
I'm trying to use Ansible with ssh for interact with Windows machines i have successfully install OpenSSH on a Windows machine that mean i can connect from linux to windows with: ``` ssh username@ipAdresse ``` i've tried using a lot of version of ansible (2.6, 2.7.12, 2.7.14, 2.8.5 and 2.8.6) and i always test if i can ping an other Linux machine with this line(it work): ``` ansible linux -m ping ``` There is my hosts file ``` [windows] 192.***.***.*** [linux] 192.***.***.*** [all:vars] ansible_connection=ssh ansible_user=root [windows:vars] ansible_ssh_pass=******* remote_tmp=C:\Users\root\AppData\Local\Temp\ become_method=runas ``` there is the error with verbose: ``` [root@oel76-template ~]# ansible windows -m win_ping -vvv ansible 2.8.6 config file = /etc/ansible/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Aug 7 2019, 08:19:52) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39.0.1)] Using /etc/ansible/ansible.cfg as config file host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method script declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method auto declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method Parsed /etc/ansible/hosts inventory source with ini plugin META: ran handlers <192.***.***.***> ESTABLISH SSH CONNECTION FOR USER: root <192.***.***.***> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/91df1ca379 192.168.46.99 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo C:/Users/root/AppData/Local/Temp/ansible-tmp-1571839448.66-279092717123794 `" && echo ansible-tmp-1571839448.66-279092717123794="` echo 
C:/Users/root/AppData/Local/Temp/ansible-tmp-1571839448.66-279092717123794 `" ) && sleep 0'"'"'' <192.***.***.***> (1, '', 'The system cannot find the path specified.\r\n') <192.***.***.***> Failed to connect to the host via ssh: The system cannot find the path specified. 192.***.***.*** | UNREACHABLE! => { "changed": false, "msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo C:/Users/root/AppData/Local/Temp/ansible-tmp-1571839448.66-279092717123794 `\" && echo ansible-tmp-1571839448.66-279092717123794=\"` echo C:/Users/root/AppData/Local/Temp/ansible-tmp-1571839448.66-279092717123794 `\" ), exited with result 1", "unreachable": true } ``` I don't know what i'm doing wrong, i also try to change the remote\_tmp in ansible.cfg but nothing more. Actual value for remote\_tmp=C:/Users/root/AppData/Local/Temp Any idea ?
2019/10/23
[ "https://Stackoverflow.com/questions/58525753", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11431519/" ]
OK, solved. The problem was ``` ansible_ssh_pass=***** ``` The correct syntax is ``` ansible_password=***** ```
To use SSH as the connection to a Windows host (starting from Ansible 2.8), set the following variables in the inventory: * ansible\_connection=ssh * **ansible\_shell\_type**=cmd/powershell (set either cmd or powershell, not both) Finally, the inventory file: ``` [windows] 192.***.***.*** [all:vars] ansible_connection=ssh ansible_user=root [windows:vars] ansible_password='*******' ansible_shell_type=cmd ``` Note on the **ansible\_password** variable: use single quotes for passwords containing special characters.
12,569
68,590,820
I want to use the Python `face_recognition` module in a project, but when I try to install it using the command "pip install face\_recognition" or "pip install face-recognition", it shows an error and does not install. This is the screenshot of the error: [![enter image description here](https://i.stack.imgur.com/DP5RK.png)](https://i.stack.imgur.com/DP5RK.png) How do I fix this error and install the module? Thanks in advance!
2021/07/30
[ "https://Stackoverflow.com/questions/68590820", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16017288/" ]
Your base template should provide defaults for most or all pages, but make it possible to override the default in cases where it's less than ideal. Here, put the base template message code into a named block. ``` {% block default_messages %} {% if messages %} {% for message in messages %} <div class="alert alert-{{ message.tags }} alert-dismissible fade show" role="alert" id="message"> {{ message }} <button type="button" class="close" data-dismiss="alert" aria-label="Close"> <span aria-hidden="true">&times;</span> </button> </div> {% endfor %} {% endif %} {% endblock default_messages %} ``` Now in any page where you find that the default is undesirable, you can override it. In `page_template.html`: ``` {% block default_messages %} <!-- delete the default, i.e. omit { { block.super } } --> {% endblock default_messages %} ``` and handle `messages` explicitly, elsewhere in `page_template.html`, perhaps with an `{% include %}` if the block of message code is going to be useful in multiple pages.
``` {% block content %} {% endblock %} ``` Put these inside a div tag.
12,570
61,969,924
my problem is that i am trying to use locust for the first time and i copied the basic code from their website <https://docs.locust.io/en/stable/quickstart.html> this is the code that they have given ``` from locust import HttpUser, task, between import random class WebsiteUser(HttpUser): wait_time = between(5, 9) @task(2) def index(self): self.client.get("/") self.client.get("/ajax-notifications/") @task(1) def view_post(self): post_id = random.randint(1, 10000) self.client.get("/post?id=%i" % post_id, name="/post?id=[post-id]") def on_start(self): """ on_start is called when a User starts before any task is scheduled """ self.login() def login(self): self.client.post("/login", {"username":"ellen_key", "password":"education"}) ``` This is the path to my locustfile.py E:\work\wipro\work\locust\_training\locustfile.py To run the locustfile.py i type locust in terminal ``` E:\work\wipro\work\locust_training>locust ``` The error that it throws is ``` [2020-05-23 14:44:25,916] DESKTOP-LQ261OQ/INFO/locust.main: Starting web monitor at http://:8089 Traceback (most recent call last): File "e:\setup\python\lib\runpy.py", line 193, in _run_module_as_main return _run_code(code, main_globals, None, File "e:\setup\python\lib\runpy.py", line 86, in _run_code exec(code, run_globals) File "E:\setup\python\Scripts\locust.exe\__main__.py", line 9, in <module> File "e:\setup\python\lib\site-packages\locust\main.py", line 236, in main web_ui = environment.create_web_ui( File "e:\setup\python\lib\site-packages\locust\env.py", line 144, in create_web_ui self.web_ui = WebUI(self, host, port, auth_credentials=auth_credentials, tls_cert=tls_cert, tls_key=tls_key) File "e:\setup\python\lib\site-packages\locust\web.py", line 79, in __init__ app = Flask(__name__) File "e:\setup\python\lib\site-packages\flask\app.py", line 558, in __init__ self.add_url_rule( File "e:\setup\python\lib\site-packages\flask\app.py", line 66, in wrapper_func return f(self, *args, **kwargs) File 
"e:\setup\python\lib\site-packages\flask\app.py", line 1216, in add_url_rule self.url_map.add(rule) File "e:\setup\python\lib\site-packages\werkzeug\routing.py", line 1562, in add rule.bind(self) File "e:\setup\python\lib\site-packages\werkzeug\routing.py", line 711, in bind self.compile() File "e:\setup\python\lib\site-packages\werkzeug\routing.py", line 767, in compile self._build = self._compile_builder(False) File "e:\setup\python\lib\site-packages\werkzeug\routing.py", line 1128, in _compile_builder return self.BuilderCompiler(self).compile(append_unknown) File "e:\setup\python\lib\site-packages\werkzeug\routing.py", line 1119, in compile co = types.CodeType(*code_args) TypeError: code() takes at least 14 arguments (13 given) ``` I searched for the same but was not able to find any specific solution I even tried to copy any locust code i was able to find but that was also not helpful as in one way or other either this error came or some other Can anyone help with this What should I do next Any help will be appreciated And Thanks in Advance
2020/05/23
[ "https://Stackoverflow.com/questions/61969924", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5825106/" ]
**If it crash on iOS :** Check if you have updated your Info.plist file. You must have the key «Privacy - Contacts Usage Description» with a sentence in value. Follow the documentation : <https://github.com/morenoh149/react-native-contacts#ios-2> **If it crash on Android :** Check if you have updated your AndroidManifest.xml file. You must request permission, like iOS. Follow the documentation : <https://github.com/morenoh149/react-native-contacts#permissions>
```
// Try it: request both contact permissions first, then read the
// contacts once the permission request has resolved.
const addContact = () => {
  PermissionsAndroid.requestMultiple([
    PermissionsAndroid.PERMISSIONS.WRITE_CONTACTS,
    PermissionsAndroid.PERMISSIONS.READ_CONTACTS,
  ]).then(() => {
    Contacts.getAll().then(contacts => {
      setMblContacts(contacts);
    });
  });
};
```
12,571
30,897,442
I had a working project with django 1.7, and now I moved it to django 1.8. I can do `syncdb` and run the app with sqlite, but when I switch to postgres, it fails to do **syncdb**: ``` Creating tables... Creating table x Creating table y Running deferred SQL... Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "~/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line utility.execute() File "~/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 330, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "~/venv/lib/python2.7/site-packages/django/core/management/base.py", line 390, in run_from_argv self.execute(*args, **cmd_options) File "~/venv/lib/python2.7/site-packages/django/core/management/base.py", line 441, in execute output = self.handle(*args, **options) File "~/venv/lib/python2.7/site-packages/django/core/management/commands/syncdb.py", line 25, in handle call_command("migrate", **options) File "~/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 120, in call_command return command.execute(*args, **defaults) File "~/venv/lib/python2.7/site-packages/django/core/management/base.py", line 441, in execute output = self.handle(*args, **options) File "~/venv/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 179, in handle created_models = self.sync_apps(connection, executor.loader.unmigrated_apps) File "~/venv/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 317, in sync_apps cursor.execute(statement) File "~/venv/lib/python2.7/site-packages/django/db/backends/utils.py", line 79, in execute return super(CursorDebugWrapper, self).execute(sql, params) File "~/venv/lib/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute return self.cursor.execute(sql, params) File 
"~/venv/lib/python2.7/site-packages/django/db/utils.py", line 97, in __exit__ six.reraise(dj_exc_type, dj_exc_value, traceback) File "~/venv/lib/python2.7/site-packages/django/db/backends/utils.py", line 62, in execute return self.cursor.execute(sql) django.db.utils.ProgrammingError: relation "auth_user" does not exist ``` I tried deleting the database and recreating it. Also, I tried: ``` python manage.py migrate auth ``` which also fails: ``` django.db.utils.ProgrammingError: relation "django_site" does not exist LINE 1: SELECT (1) AS "a" FROM "django_site" LIMIT 1 ``` Please help get this fixed.
2015/06/17
[ "https://Stackoverflow.com/questions/30897442", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1896222/" ]
I didn't like the idea of commenting/uncommenting code, so I tried a different approach: I migrated some apps "manually", and then ran `django-admin.py migrate` for the remaining ones. After deleting all the `*.pyc` files, my sequence of commands was: ``` $ django-admin.py migrate auth $ django-admin.py migrate contenttypes $ django-admin.py migrate sites $ django-admin.py migrate MY_CUSTOM_USER_APP $ django-admin.py migrate ``` where `MY_CUSTOM_USER_APP` is the name of the application containing the model I set `AUTH_USER_MODEL` to in my `settings` file. Hope it can help. Btw, it seems strange that the best way to synchronize your db in Django 1.8 is so complicated. I wonder if I'm missing something (I'm not very familiar with Django 1.8; I used to work with older versions)
I had this issue with a `forms.ChoiceField` queryset. I was able to switch to [`forms.ModelChoiceField`](https://docs.djangoproject.com/en/dev/ref/forms/fields/#modelchoicefield), which is lazily evaluated, and this fixed the problem for me.
12,572
2,066,049
I'm trying to write a POS-style application for a [Sheevaplug](http://en.wikipedia.org/wiki/SheevaPlug) that does the following: 1. Captures input from a card reader (as I understand, most mag card readers emulate keyboard input, so basically I'm looking to capture that) 2. Doesn't require X 3. Runs in the background (daemon) I've seen examples of code that will wait for STDIN, but that won't work because this is a background process with no login, not even a monitor actually. I also found this snippet [elsewhere](https://stackoverflow.com/questions/1859049/check-if-key-is-pressed-using-python-a-daemon-in-the-background) on this site: ``` from struct import unpack port = open("/dev/input/event1","rb") while 1: a,b,c,d = unpack("4B",port.read(4)) print a,b,c,d ``` Which, while being the closest thing to what I need so far, only generates a series of numbers, all of which are different with no way that I know of to translate them into useful values. Clearly, I'm missing something here, but I don't know what it is. Can someone please how to get the rest of the way?
2010/01/14
[ "https://Stackoverflow.com/questions/2066049", "https://Stackoverflow.com", "https://Stackoverflow.com/users/231670/" ]
Section 5 of the Linux kernel [input documentation](http://www.kernel.org/doc/Documentation/input/input.txt) describes what each of the values in the event interface means.
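Concretely, `read(4)` in the question only grabs a fraction of one record: each event on `/dev/input/event*` is a fixed-size `struct input_event` (a timeval, then a 16-bit type, 16-bit code, and 32-bit value). A sketch of decoding it with the `struct` module; the synthetic bytes below stand in for a real device read, and the 24-byte `llHHi` layout assumes a 64-bit kernel:

```python
import struct

# struct input_event on a 64-bit Linux kernel: two 8-byte timeval fields,
# then u16 type, u16 code, s32 value -> native format "llHHi" (24 bytes).
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

def decode_event(raw):
    sec, usec, ev_type, code, value = struct.unpack(EVENT_FORMAT, raw)
    return {"time": sec + usec / 1e6, "type": ev_type, "code": code, "value": value}

# Synthetic record: EV_KEY (type 1), KEY_A (code 30), value 1 = key down.
raw = struct.pack(EVENT_FORMAT, 1000, 500000, 1, 30, 1)
event = decode_event(raw)
print(event)  # {'time': 1000.5, 'type': 1, 'code': 30, 'value': 1}

# Against a real device you would loop instead:
#   with open("/dev/input/event1", "rb") as port:
#       while True:
#           event = decode_event(port.read(EVENT_SIZE))
```

Filtering on `type == 1` (EV_KEY) and `value == 1` (press) gives the keystroke-like events a keyboard-emulating card reader produces.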
The format is explained in the [kernel documentation](http://www.mjmwired.net/kernel/Documentation/input/) in section *5. Event Interface*.
12,582
6,495,688
There are lots of [good](https://stackoverflow.com/questions/2429511/why-do-people-write-usr-bin-env-python-on-the-first-line-of-a-python-script) [reasons](https://stackoverflow.com/questions/1352922/why-is-usr-bin-env-python-supposedly-more-correct-than-just-usr-bin-pyth) to use #! /usr/bin/env. Bottom line: It makes your code more portable. Well, sorta. Check this out.... --- I have two nearly identical scripts, `bintest.py` ``` #! /usr/bin/python import time time.sleep(5*60) ``` and `envtest.py` ``` #! /usr/bin/env python import time time.sleep(5*60) ``` Note that they are only different in their [shebangs](http://en.wikipedia.org/wiki/Shebang_%28Unix%29). --- `bintest.py` runs as expected ``` br@carina:~$ ./bintest.py & ps && killall bintest.py [1] 15061 PID TTY TIME CMD 14625 pts/0 00:00:00 bash 15061 pts/0 00:00:00 bintest.py 15062 pts/0 00:00:00 ps br@carina:~$ [1]+ Terminated ./bintest.py ``` but `envtest.py` does something less-than-optimal ``` br@carina:~$ ./envtest.py & ps && killall envtest.py [1] 15066 PID TTY TIME CMD 14625 pts/0 00:00:00 bash 15066 pts/0 00:00:00 python 15067 pts/0 00:00:00 ps envtest.py: no process found br@carina:~$ killall python br@carina:~$ [1]+ Terminated ./envtest.py ``` --- What we've seen is that using `#! /usr/bin/env` caused the process to receive the name "python" rather than "envtest.py", thus rendering our `killall` ineffective. On some level it seems like we've traded one kind of portability for another: we can now swap out python interpreters easily, but we've lost "kill-ability" on the command line. What's up with that? If there's a best-practice here for achieving both, what is it?
2011/06/27
[ "https://Stackoverflow.com/questions/6495688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/730070/" ]
*"kill-ability" on the command line* can be addressed portably and reliably using the PID of the backgrounded process obtained from the shell's `$!` variable. ``` $ ./bintest.py & bg_pid=$! ; echo bg_pid=$bg_pid ; ps && kill $bg_pid [1] 2993 bg_pid=2993 PID TTY TIME CMD 2410 pts/0 00:00:00 bash 2993 pts/0 00:00:00 bintest.py 2994 pts/0 00:00:00 ps $ [1]+ Terminated ./bintest.py $ ``` and envtest.py ``` $ ./envtest.py & bg_pid=$! ; echo bg_pid=$bg_pid ; ps && kill $bg_pid [1] 3016 bg_pid=3016 PID TTY TIME CMD 2410 pts/0 00:00:00 bash 3016 pts/0 00:00:00 python 3017 pts/0 00:00:00 ps $ [1]+ Terminated ./envtest.py $ ``` As @Adam Bryzak points out, neither script causes the process title to be set on Mac OS X. So, if that feature is a firm requirement, you may need to install and use the Python module [setproctitle](http://pypi.python.org/pypi/setproctitle) with your application. This Stackoverflow post discusses [setting process title in python](https://stackoverflow.com/questions/564695/is-there-a-way-to-change-effective-process-name-in-python)
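The same PID-first idea carries over to Python itself; nothing below is from the original post, just a sketch. It launches a sleeping child via the interpreter (exactly the situation where the process table shows `python` rather than a script name) and kills it by PID, which sidesteps the naming problem entirely:

```python
import subprocess
import sys

# Start a child the way `#! /usr/bin/env python` effectively does:
# the process table will show the interpreter, not a script name.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

assert child.poll() is None      # still running
child.terminate()                # signal by PID: immune to the process-name issue
child.wait()
assert child.poll() is not None  # gone
```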
I don't think you can rely on the `killall` using the script name to work all the time. On Mac OS X I get the following output from `ps` after running both scripts: ``` 2108 ttys004 0:00.04 /usr/local/bin/python /Users/adam/bin/bintest.py 2133 ttys004 0:00.03 python /Users/adam/bin/envtest.py ``` and running `killall bintest.py` results in ``` No matching processes belonging to you were found ```
12,583
56,594,272
I found a code for text classification in tensorflow and when I try to run this code: <https://www.tensorflow.org/beta/tutorials/keras/feature_columns> I get an error. I used the dataset from here: <https://www.kaggle.com/kazanova/sentiment140> ``` Traceback (most recent call last): File "text_clas.py", line 35, in <module> train_ds = df_to_dataset(train, batch_size=batch_size) File "text_clas.py", line 27, in df_to_dataset labels = dataframe.pop('target') File "/home/yildiz/.local/lib/python2.7/site-packages/pandas/core/generic.py", line 809, in pop result = self[item] File "/home/yildiz/.local/lib/python2.7/site-packages/pandas/core/frame.py", line 2927, in __getitem__ indexer = self.columns.get_loc(key) File "/home/yildiz/.local/lib/python2.7/site-packages/pandas/core/indexes/base.py", line 2659, in get_loc return self._engine.get_loc(self._maybe_cast_indexer(key)) File "pandas/_libs/index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc File "pandas/_libs/index.pyx", line 132, in pandas._libs.index.IndexEngine.get_loc File "pandas/_libs/hashtable_class_helper.pxi", line 1601, in pandas._libs.hashtable.PyObjectHashTable.get_item File "pandas/_libs/hashtable_class_helper.pxi", line 1608, in pandas._libs.hashtable.PyObjectHashTable.get_item KeyError: 'target' ``` When I printed df.index.name I got NONE. So is the dataset not correct or am I doing something wrong? I changen the dataframe.head() to print(dataframe.head()) and got this output: ``` 0 ... @switchfoot http://twitpic.com/2y1zl - Awww, that's a bummer. You shoulda got David Carr of Third Day to do it. ;D 0 0 ... is upset that he can't update his Facebook by ... 1 0 ... @Kenichan I dived many times for the ball. Man... 2 0 ... my whole body feels itchy and like its on fire 3 0 ... @nationwideclass no, it's not behaving at all.... 4 0 ... @Kwesidei not the whole crew [5 rows x 6 columns] 1023999 train examples 256000 validation examples 320000 test examples ```
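For reference, `pop` raises exactly this `KeyError` whenever the label is absent from the columns, and with a header-less CSV that happens as soon as the file is read without `header=None`/`names=`, because the first data row gets misused as the header. A small sketch (the six column names below are an assumption about the file layout, not taken from the original post):

```python
import io
import pandas as pd

csv_text = (
    '0,"1467810369","Mon Apr 06 2009","NO_QUERY","user1","some tweet"\n'
    '0,"1467810672","Mon Apr 06 2009","NO_QUERY","user2","another tweet"\n'
)

# Read WITHOUT naming columns: the first data row becomes the header,
# so no column is literally called "target".
bad = pd.read_csv(io.StringIO(csv_text))
print("target" in bad.columns)   # False -> bad.pop("target") would raise KeyError

# Read with explicit names (these six are assumed, not verified):
good = pd.read_csv(io.StringIO(csv_text), header=None,
                   names=["target", "ids", "date", "flag", "user", "text"])
labels = good.pop("target")
print(list(labels))              # [0, 0]
```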
2019/06/14
[ "https://Stackoverflow.com/questions/56594272", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You are saving or deleting a customer from the table, but the `dataSource` is using the data that you have already fetched from the database. It cannot get the updated data unless you manually refresh it. While saving a new customer, you'll have to make the `getUser()` request again (or push the customer object into the `results` variable) and initialize the `dataSource` again. While deleting, again, either make the call again, or iterate through the `results` variable, find the customer that was deleted, remove it from the array, and reinitialize the `dataSource`.
Akash's approach is correct. In addition to his approach you should use [afterClosed()](https://material.angular.io/components/dialog/api#MatDialogRef) method to link MatDialog to current component and get notified when the dialog is closed. After the dialog is closed, just fetch users again. ```js ngOnInit() { this.fetchUsers(); } newCustomer(type) { this.service.initializeForm(); const dialogConfig = new MatDialogConfig(); //dialogConfig.disableClose = true; dialogConfig.autoFocus = true; dialogConfig.width = "60%"; dialogConfig.id = type; this.dialog.open(CustomerComponent, dialogConfig).afterClosed().subscribe(() => this.fetchUsers()); } fetchUsers() { this.service.getUser().subscribe(results => { if (!results) { return; } console.log(results); this.dataSource = new MatTableDataSource(results); this.dataSource.sort = this.sort; this.dataSource.paginator = this.paginator; }) } ``` Also if you share the code for deletion and `CustomerComponent` it would be easier to analyse any potential problems.
12,586
33,546,935
I have a list of lists, with integer values in each list, that represent dates over an 8 year period. ``` dates = [[2014, 11, 14], [2014, 11, 13], ....., [2013, 12, 01].....] ``` I need to compare these dates so that I can find an average cost per month, with other data stored in the file. So i need to figure out how to iterate through the dates and stop when the month changes. Each date has corresponding cost and volume values that I will need to find the average. So I was trying to find a way to make the dates in x.year, x.month, x.day format and do it that way but I'm new to python and very confused. How do I iterate through all of the dates for the 8 year period, but stopping and calculating the average of the data for each particular month and store that average for each month?? Thanks in advance, hope this makes sense.
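For the `x.year, x.month, x.day` format mentioned above, the standard library's `datetime.date` does exactly that. A minimal sketch (the dates are made up):

```python
from datetime import date

dates = [[2014, 11, 14], [2014, 11, 13], [2013, 12, 1]]
# Each [year, month, day] triple unpacks straight into the date constructor.
as_dates = [date(y, m, d) for y, m, d in dates]

first = as_dates[0]
print(first.year, first.month, first.day)  # 2014 11 14
```

Once converted, the objects also compare and sort chronologically, which helps when scanning for month changes.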
2015/11/05
[ "https://Stackoverflow.com/questions/33546935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5522009/" ]
You can use a dictionary to preserve the year and month as the key and the relative days in a list as value, then you can do any progress on your items which are categorized by the year and month. ``` >>> dates = [['2014', '11', '14'], ['2014', '10', '13'], ['2014', '10', '01'], ['2014', '12', '01'], ['2013', '12', '01'], ['2013', '12', '09'], ['2013', '10', '01'], ['2013', '10', '05'], ['2013', '04', '01']] >>> my_dict = {} >>> >>> for y,m,d in dates: ... my_dict.setdefault((y,m),[]).append(d) ... >>> my_dict {('2013', '10'): ['01', '05'], ('2013', '12'): ['01', '09'], ('2014', '11'): ['14'], ('2014', '10'): ['13', '01'], ('2014', '12'): ['01'], ('2013', '04'): ['01']} >>> ``` You can use a nested list comprehension to convert the result to a nested list of relative month of datetime objects : ``` >>> [[datetime(int(y),int(m),int(d)) for d in days] for (y,m),days in my_dict.items()] [[datetime.datetime(2013, 10, 1, 0, 0), datetime.datetime(2013, 10, 5, 0, 0)], [datetime.datetime(2013, 12, 1, 0, 0), datetime.datetime(2013, 12, 9, 0, 0)], [datetime.datetime(2014, 11, 14, 0, 0)], [datetime.datetime(2014, 10, 13, 0, 0), datetime.datetime(2014, 10, 1, 0, 0)], [datetime.datetime(2014, 12, 1, 0, 0)], [datetime.datetime(2013, 4, 1, 0, 0)]] ```
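Building on that grouping idea, the per-month averaging the question asks about is one more pass over the dictionary. A sketch with hypothetical costs (the cost values are invented for illustration; they are not from the original post):

```python
from collections import defaultdict

# Hypothetical parallel data: one cost per date.
dates = [[2014, 11, 14], [2014, 11, 13], [2013, 12, 1]]
costs = [120.0, 80.0, 50.0]

# (year, month) -> [running sum, count]
totals = defaultdict(lambda: [0.0, 0])
for (y, m, d), cost in zip(dates, costs):
    totals[(y, m)][0] += cost
    totals[(y, m)][1] += 1

averages = {ym: s / n for ym, (s, n) in totals.items()}
print(averages)  # {(2014, 11): 100.0, (2013, 12): 50.0}
```

The same pattern extends to volume or any other per-date value by carrying another sum in the list.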
If you want to iterate through your dates array and do an action every time the month changes, you could use this method: ``` dates = [[2014, 11, 14], [2014, 11, 13], [2013, 12, 1]] old_m = "" for year, month, day in dates: if old_m != month: # calculate average here old_m = month ```
12,587
51,941,175
I am trying to read a txt file(kept in another location) in python, but getting error. -------------------------------------------------------------------------------------- > > FileNotFoundError > > in () > ----> 1 employeeFile=open("C:‪/Users/xxxxxxxx/Desktop/python/files/employee.txt","r") > 2 print(employeeFile.read()) > 3 employeeFile.close() > > > FileNotFoundError: [Errno 2] No such file or > directory:'C:\u202a/Users/xxxxxxxx/Desktop/python/files/employee.txt' > > > Code used: ``` employeeFile=open("C:‪/Users/xxxxxxxx/Desktop/python/files/employee.txt","r") print(employeeFile.read()) employeeFile.close() ``` I tried using frontslash(/) and backslash(). But getting the same error.Please let me know what is missing in code.
2018/08/21
[ "https://Stackoverflow.com/questions/51941175", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6487702/" ]
I'm guessing you copy and pasted from a Windows property pane, switching backslashes to forward slashes manually. Problem is, the properties dialog shoves a Unicode LEFT-TO-RIGHT EMBEDDING character into the path so the display is consistent, even in locales with right-to-left languages (e.g. Arabic, Hebrew). You can read more about this on [Raymond Chen's blog, The Old New Thing](https://blogs.msdn.microsoft.com/oldnewthing/20150506-00/?p=44924/). The solution is to delete that invisible character from your path string. Selecting everything from the initial `"` to the first forward slash, deleting it, then retyping `"C:/`, should do the trick.
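Rather than hunting for the invisible character by hand, you can scrub it programmatically. A sketch that drops everything in Unicode category `Cf` (format characters), which is where LEFT-TO-RIGHT EMBEDDING (U+202A) lives; the path is the one from the question:

```python
import unicodedata

raw = "C:\u202a/Users/xxxxxxxx/Desktop/python/files/employee.txt"

# Strip invisible directional-formatting characters (category "Cf").
clean = "".join(ch for ch in raw if unicodedata.category(ch) != "Cf")

print(clean)  # C:/Users/xxxxxxxx/Desktop/python/files/employee.txt
```

After cleaning, `open(clean, "r")` no longer sees the stray `\u202a` in the path.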
As your error message suggests, there's a weird character between the colon and the forward slash (`C:[some character]/`). Other than that the code is fine. ```py employeeFile = open("C:/Users/xxxxxxxx/Desktop/python/files/employee.txt", "r") ``` You can copy paste this code and use it.
12,588
19,623,386
Hi: I want to do a sound waves simulation that include wave propagation, absorbing and reflection in 3D space. I do some searches and I found [this question](https://stackoverflow.com/questions/4956331/wave-simulation-with-python) in stackoverflow but it talk about electromagnetic waves not sound waves. I know i can reimplement the **FDTD** method for sound waves but how about the sources and does it act like the electromagnetic waves ? Is there any resources to start with ? Thanks in advance.
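On the "does it act like the electromagnetic waves?" part: the acoustic FDTD update is the same leapfrog idea, with pressure and particle velocity taking the roles of the E and H fields. A minimal 1-D sketch (staggered pressure/velocity grid, rigid ends, soft Gaussian source; every parameter here is an arbitrary illustration, and real 3-D room acoustics additionally needs absorbing boundary treatment for walls):

```python
import numpy as np

nx, nt = 200, 300
c, dx = 343.0, 0.01            # speed of sound (m/s), grid step (m)
dt = 0.5 * dx / c              # CFL-stable time step (Courant number 0.5)
rho = 1.2                      # air density (kg/m^3)

p = np.zeros(nx)               # pressure at cell centers
v = np.zeros(nx + 1)           # particle velocity at cell faces (staggered)

for n in range(nt):
    # Momentum equation: rho dv/dt = -dp/dx (rigid ends: v[0] = v[-1] = 0)
    v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
    # Continuity equation: dp/dt = -rho c^2 dv/dx
    p -= dt * rho * c**2 / dx * (v[1:] - v[:-1])
    # Soft Gaussian pulse source injected at the middle cell
    p[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)

print(np.abs(p).max() > 0)  # True: the pulse has propagated without blowing up
```

Reflection at a rigid wall falls out of the boundary condition for free; absorption is what needs extra machinery (impedance boundaries or PML-style layers, as in the EM case).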
2013/10/27
[ "https://Stackoverflow.com/questions/19623386", "https://Stackoverflow.com", "https://Stackoverflow.com/users/553460/" ]
Hope this can give you some inputs... As far as I know, in EM simulations obstacles (and thus terrain) are not considered at all. With sound you have to consider reflection, diffraction, etc. There are different standards to calculate the noise originating from different sources (I'll list the European ones, the ones I know of): * traffic: NMPB (NMPB-Routes-96) is THE standard. All the noise calculations have to be done with that one (at least in my country). Results aren't very good. A "new" algorithm is SonRoad (I think it uses inverse ray-tracing)... from my tests it works great. * trains: Schall03 * industries: ISO 9613 * a list of all the models used in CadnaA (a professional software) so you can google them all: <http://www.datakustik.com/en/products/cadnaa/modeling-and-calculation/calculation-standards/> Another pro software is SoundPlan; somewhere on the web there is a free "SoundPlan-ReferenceManual.pdf", 800 pages with the mathematical description of the implemented algorithms... I haven't had any luck with Google today though
An easy way to do this is to use the SoundPlan software. Multiple sound propagation methods such as ISO 9613-2, CONCAWE and Nord2000 are implemented. It has basic 3D visualization with sound pressure level contours.
12,589
46,309,161
I am having issues reading data from a bucket hosted by Google. I have a bucket containing ~1000 files I need to access, held at (for example) gs://my-bucket/data Using gsutil from the command line or other of Google's Python API clients I can access the data in the bucket, however importing these APIs is not supported by default on google-cloud-ml-engine. I need a way to access both the data and the names of the files, either with a default python library (i.e. os) or using tensorflow. I know tensorflow has this functionality built in somewhere, it has been hard for me to find Ideally I am looking for replacements for one command such as os.listdir() and another for open() ``` train_data = [read_training_data(filename) for filename in os.listdir('gs://my-bucket/data/')] ``` Where read\_training\_data uses a tensorflow reader object Thanks for any help! ( Also p.s. my data is binary )
2017/09/19
[ "https://Stackoverflow.com/questions/46309161", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8598909/" ]
If you just want to read data into memory, then [this answer](https://stackoverflow.com/a/42799952/1399222) has the details you need, namely, to use the [file\_io](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/lib/io/file_io.py) module. That said, you might want to consider using built-in reading mechanisms for TensorFlow as they can be more performant. Information on reading can be found [here](https://www.tensorflow.org/api_guides/python/reading_data). The latest and greatest (but not yet part of official "core" TensorFlow) is the Dataset API (more info [here](https://www.tensorflow.org/programmers_guide/datasets)). Some things to keep in mind: * Are you using a format TensorFlow can read? Can it be converted to that format? * Is the overhead of "feeding" high enough to affect training performance? * Is the training set too big to fit in memory? If the answer is yes to one or more of the questions, especially the latter two, consider using readers.
For what it's worth: I also had problems reading files, in particular binary files, from Google Cloud Storage inside a Datalab notebook. The first way I managed to do it was by copying the files to my local filesystem using gsutil and reading them normally with TensorFlow. This is demonstrated here, after the file copy was done.

Here is my setup cell

```
import math
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf

tf.logging.set_verbosity(tf.logging.INFO)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
```

Here is a cell for reading the file locally as a sanity check.

```
# this works for reading local file
audio_binary_local = tf.read_file("100852.mp3")
waveform = tf.contrib.ffmpeg.decode_audio(audio_binary_local, file_format='mp3', samples_per_second=44100, channel_count=2)

# this will show that it has two channels of data
with tf.Session() as sess:
    result = sess.run(waveform)
    print (result)
```

Here is reading the file from gs: directly as a binary file.

```
# this works for remote files in gs:
gsfilename = 'gs://proj-getting-started/UrbanSound/data/air_conditioner/100852.mp3'
# python 2
#audio_binary_remote = tf.gfile.Open(gsfilename).read()
# python 3
audio_binary_remote = tf.gfile.Open(gsfilename, 'rb').read()
waveform = tf.contrib.ffmpeg.decode_audio(audio_binary_remote, file_format='mp3', samples_per_second=44100, channel_count=2)

# this will show that it has two channels of data
with tf.Session() as sess:
    result = sess.run(waveform)
    print (result)
```
12,590
56,567,013
We are fairly new to Django. We have an app and a model. We'd like to add a 'Category' object to our model. We did that, and then ran 'python manage.py makemigrations'. We then deploy our code to a server running the older code, and run 'python manage.py migrate'. This throws 2 pages of exceptions, finishing with 'django.db.utils.ProgrammingError: (1146, "Table 'reporting.contact\_category' doesn't exist")'

This seems to be looking at our models.py. If we comment out Category from our model, and all references to it, the migration succeeds.

I thought that the point of migrations is to make the database match what the model expects, but this seems to require that the model match the database before the migration. We clearly are doing something wrong, but what?
2019/06/12
[ "https://Stackoverflow.com/questions/56567013", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11638068/" ]
I believe you skipped some migrations on the server, so now you are missing some tables (I have been in that situation. Ensure **migrations** directories are in your **.gitignore**. You CANNOT check in migration files; you have to run `makemigrations` on the server).

This can be solved by tracing back up to the point where the database and the model files match, but it is a risky process if it is your production database, so you should make a full backup before proceeding, and try the process on a different computer first.

This would be my advice:

1. Delete the migration files from the server.
2. Comment out the models that raise the error.
3. Set the server's migration history to the point the database is at, using `python manage.py makemigrations` and `python manage.py migrate --fake-initial` (this will update the migration files without actually attempting to modify the database).
4. Uncomment the models that raise the error.
5. Run `python manage.py makemigrations` and `python manage.py migrate`.

If, after you comment out the models that raise the exception, you get a different exception, you have to keep on commenting and attempting again. Once a migration succeeds, you can uncomment all commented models and make an actual migration.
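Put as a command transcript, steps 1–5 look roughly like this (a sketch only — the app name `myapp` is a placeholder, and the commenting/uncommenting of models happens in your editor between the commands):

```
# on the server, after backing up the database
rm myapp/migrations/0*.py                  # step 1: delete migration files

# step 2: comment out the offending models in models.py, then:
python manage.py makemigrations            # step 3: rebuild migration files...
python manage.py migrate --fake-initial    # ...and mark them applied without touching the DB

# step 4: uncomment the models, then:
python manage.py makemigrations            # step 5: migrate for real
python manage.py migrate
```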
Remember to run `python manage.py makemigrations` if you made changes to `models.py`, then run `python manage.py migrate`.

Both commands **must** be run on the same server with the same database
12,591
10,654,707
I downloaded the Python 2.6.6 source from <http://www.python.org/getit/releases/2.6.6/>

After that I ran `./configure` and `make`. I tried to import zlib but it says no module named zlib. How can I install the zlib module for it?

After I tried installing Python 2.6.8 I got the same error, no zlib. While installing it I got the error below.

Failed to find the necessary bits to build these modules:

```
_bsddb             _curses            _curses_panel
_hashlib           _sqlite3           _ssl
_tkinter           bsddb185           bz2
dbm                dl                 gdbm
imageop            linuxaudiodev      ossaudiodev
readline           sunaudiodev        zlib
```

To find the necessary bits, look in setup.py in detect\_modules() for the module's name.

Failed to build these modules:

```
crypt              nis
```
2012/05/18
[ "https://Stackoverflow.com/questions/10654707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/813102/" ]
I tried the following, which helped me with some of these modules. You have to edit setup.py.

Find the following lines in setup.py:

```
lib_dirs = self.compiler.library_dirs + [
    '/lib64', '/usr/lib64',
    '/lib', '/usr/lib',
    ]
```

**For 64 bit**

Add `/usr/lib/x86_64-linux-gnu`:

```
lib_dirs = self.compiler.library_dirs + [
    '/lib64', '/usr/lib64',
    '/lib', '/usr/lib', '/usr/lib/x86_64-linux-gnu',
    ]
```

**For 32 bit**

Add `/usr/lib/i386-linux-gnu`:

```
lib_dirs = self.compiler.library_dirs + [
    '/lib64', '/usr/lib64',
    '/lib', '/usr/lib', '/usr/lib/i386-linux-gnu',
    ]
```

Note that `x86_64-linux-gnu` & `i386-linux-gnu` might be located somewhere else on your system, so adjust the path accordingly.

After this you will be left with only the following modules:

```
_bsddb bsddb185 dbm gdbm sunaudiodev
```
I wrote a note for myself addressing your problem; it might be helpful: [`python installation`](http://cheater.nemoden.com/python-installation/). Do you really need the `bsddb` and `sunaudiodev` modules? You might not want them anyway, since both have been deprecated since Python 2.6
12,592
41,971,623
Is it possible to have a python script pause when you hold a button down and then start when you release that button? (I have the button connected to GPIO pins on my Raspberry Pi)
2017/02/01
[ "https://Stackoverflow.com/questions/41971623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5070269/" ]
**Yes.** The AWS account that is currently controlling your domain name with Route 53 must be used, but it can be pointed to anything on the Internet. Steps: * In the AWS account with the "other" EC2 instance, create an **Elastic IP Address** and assign it to the EC2 instance. This will ensure that its IP address does not change when the instance is stopped and started. * In your existing Route 53 configuration (in the original account), **create a Record Set** for the sub-domain (eg `images.example.com`) of type `A` and enter the Elastic IP Address as the *value*.
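For illustration only — the sub-domain and IP address below are hypothetical — the Record Set from the second step corresponds to a Route 53 change batch like this, which can also be submitted from the command line with `aws route53 change-resource-record-sets`:

```
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "images.example.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "203.0.113.10" }
        ]
      }
    }
  ]
}
```

Here `203.0.113.10` stands in for the Elastic IP Address you allocated in the other account.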
Once you have set the nameserver for your domain to point to Route53, you no longer need to control the subdomains from bigrock services. Just add them to your Route53 dashboard, and they'll be reflected live.
12,597
50,628,893
I am doing a final project in a Python course and I have written a program using PhantomJS that runs like a background process in Windows. After creating my project, I used pyinstaller --noconsole --onefile on my file in order to hide its console, but even though I did, I still get a console popup - phantomjs.exe [like this](https://i.stack.imgur.com/d9Zx8.png)

Does anyone know how to remove the console without impairing the proper functioning of the program?

Thanks a lot,
Omer

**Note:** in my spec file, in the exe option, there is debug = False!
2018/05/31
[ "https://Stackoverflow.com/questions/50628893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9581412/" ]
No need to rename the .dat to .csv. Instead you can use a regex that matches two or more spaces as a column separator. Try the `sep` parameter:

```
pd.read_csv('http://users.stat.ufl.edu/~winner/data/clinton1.dat', header=None, sep=r'\s\s+', engine='python')
```

Output:

```
             0      1     2      3      4     5      6      7     8     9   10
0  Autauga, AL  30.92  31.7  57623  15768  15.2  10.74  51.41  60.4  2.36  457
1  Baldwin, AL  26.24  35.5  84935  16954  13.6   9.73  51.34  66.5  5.40  282
2  Barbour, AL  46.36  32.8  83656  15532  25.0   8.82  53.03  28.8  7.02   47
3   Blount, AL  32.92  34.5  61249  14820  15.0   9.67  51.15  62.4  2.36  185
4  Bullock, AL  67.67  31.7  75725  11120  33.0   7.08  50.76  17.6  2.91  141
```

If you want your state as a separate column you can use `sep=r'\s\s+|,'`, which means separate columns on two or more spaces OR a comma.

```
pd.read_csv('http://users.stat.ufl.edu/~winner/data/clinton1.dat', header=None, sep=r'\s\s+|,', engine='python')
```

Output:

```
         0   1      2     3      4        5     6      7      8     9     10     11
0  Autauga  AL  30.92  31.7  57623  15768.0  15.2  10.74  51.41  60.4  2.36  457.0
1  Baldwin  AL  26.24  35.5  84935  16954.0  13.6   9.73  51.34  66.5  5.40  282.0
2  Barbour  AL  46.36  32.8  83656  15532.0  25.0   8.82  53.03  28.8  7.02   47.0
3   Blount  AL  32.92  34.5  61249  14820.0  15.0   9.67  51.15  62.4  2.36  185.0
4  Bullock  AL  67.67  31.7  75725  11120.0  33.0   7.08  50.76  17.6  2.91  141.0
```
You can use a regular expression as a separator. In your specific case, all the delimiters are more than one space whereas the spaces in the names are just single spaces.

```
import pandas as pd

clinton = pd.read_csv("clinton1.csv", sep=r'\s{2,}', header=None, engine='python')
```
12,598
20,269,507
I'm a novice in Python and also in py.test. I'm searching for a way to run multiple tests on multiple items and cannot find it. I'm sure it's quite simple when you know how to do it. I have simplified what I'm trying to do to make it easy to understand.

I have a test class that defines a series of tests, like this one:

```
class SeriesOfTests:
    def test_greater_than_30(self, itemNo):
        assert (itemNo > 30), "not greater than 30"

    def test_lesser_than_30(self, itemNo):
        assert (itemNo < 30), "not lesser than 30"

    def test_modulo_2(self, itemNo):
        assert (itemNo % 2) == 0, "not divisible by 2"
```

I want to execute this SeriesOfTests on each item obtained from a function like:

```
def getItemNo():
    return [0,11,33]
```

The result I'm trying to obtain is something like:

```
RESULT :

Test "itemNo = 0"
- test_greater_than_30 = failed
- test_lesser_than_30 = success
- test_modulo_2 = success

Test "itemNo = 11"
- test_greater_than_30 = failed
- test_lesser_than_30 = success
- test_modulo_2 = failed

Test "itemNo = 33"
- test_greater_than_30 = success
- test_lesser_than_30 = failed
- test_modulo_2 = failed
```

How can I do this with py.test?

Thank you guys (and girls also)

André
2013/11/28
[ "https://Stackoverflow.com/questions/20269507", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1269921/" ]
Add the sorting (`ORDER BY`) clause to the outermost query; an `ORDER BY` inside a subquery is not guaranteed to carry through to the outer result.
For paging with a "window", you can do something like this:

```
select e.*
from (
    select e.*
         , row_number() over (order by uur_id) ive$idx$
    from bubs_uren_v e
    where ( uur_id = :w1 )
) e
where ive$idx$ between (:start_index + 1) and (:start_index + :max_row_count)
```

We use this code in our project management suite, which handles very large volumes of data. Don't ask me how Oracle did it, but it is consistently fast. Remember to only include the columns you need; that saves processing, memory and maybe even PL/SQL function calls.
12,599
63,191,480
I wrote a small script in Python, but this part of the code doesn't work when the game is focused; it doesn't respond at all:

`pyautogui.moveRel(-2, 4)`

This part works when my cursor appears in the menu etc., but when I switch into the game (when my cursor disappears and the crosshair appears) it doesn't work (fullscreen or not, it doesn't matter). Keyboard commands of this type are also in my code and they work fine:

`keyboard.is_pressed('Alt')`

Is the problem with the mouse or with pyautogui? How can I make the mouse moves work correctly?
2020/07/31
[ "https://Stackoverflow.com/questions/63191480", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14027997/" ]
I tried the code below:

`import win32con`

`import win32api`

`win32api.mouse_event(win32con.MOUSEEVENTF_MOVE, int(10), int(10), 0, 0)`

And it worked in game. I think it is related to win32con. Anyway, I got it.
This is how PyAutoGui works: ``` 0,0 X increases --> +---------------------------+ | | Y increases | | | | 1920 x 1080 screen | | | | V | | | | +---------------------------+ 1919, 1079 ``` So you need to write like this: ``` pyautogui.moveTo(100, 200) # moves mouse to X of 100, Y of 200 ``` or ``` pyautogui.moveTo(100, 200, 2) # moves mouse to X of 100, Y of 200 over 2 seconds ```
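As a tiny illustration of that coordinate system — a hypothetical helper (not part of pyautogui) that keeps a target point inside the 1920 x 1080 screen drawn above:

```python
def clamp_to_screen(x, y, width=1920, height=1080):
    """Clamp an absolute (x, y) coordinate to the visible screen area.

    (0, 0) is the top-left corner; the bottom-right pixel is
    (width - 1, height - 1), e.g. (1919, 1079) on a 1920x1080 screen.
    """
    return (max(0, min(x, width - 1)), max(0, min(y, height - 1)))


print(clamp_to_screen(100, 200))   # (100, 200) -- already on screen
print(clamp_to_screen(5000, -10))  # (1919, 0)  -- clamped to the edges
```

Clamping like this before calling `pyautogui.moveTo(x, y)` avoids handing it off-screen coordinates.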
12,600
22,049,248
I would like to develop an App Engine application that streams data directly into a BigQuery table.

According to Google's documentation there is a simple way to stream data into BigQuery:

* <http://googlecloudplatform.blogspot.co.il/2013/09/google-bigquery-goes-real-time-with-streaming-inserts-time-based-queries-and-more.html>
* <https://developers.google.com/bigquery/streaming-data-into-bigquery#streaminginsertexamples> (note: in the above link you should select the python tab and not Java)

Here is the sample code snippet of how a streaming insert should be coded:

```
body = {"rows":[
    {"json": {"column_name":7.7,}}
    ]}
response = bigquery.tabledata().insertAll(
    projectId=PROJECT_ID,
    datasetId=DATASET_ID,
    tableId=TABLE_ID,
    body=body).execute()
```

Although I've downloaded the client API, I didn't find any reference to the "bigquery" module/object referenced in Google's example above. Where should the bigquery object (from the snippet) come from? Can anyone show a more complete way to use this snippet (with the right imports)?

I've been searching for this a lot and found the documentation confusing and partial.
2014/02/26
[ "https://Stackoverflow.com/questions/22049248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2558486/" ]
Minimal working (as long as you fill in the right ids for your project) example:

```
import httplib2

from apiclient import discovery
from oauth2client import appengine

_SCOPE = 'https://www.googleapis.com/auth/bigquery'

# Change the following 3 values:
PROJECT_ID = 'your_project'
DATASET_ID = 'your_dataset'
TABLE_ID = 'TestTable'

body = {"rows":[
    {"json": {"Col1":7,}}
    ]}

credentials = appengine.AppAssertionCredentials(scope=_SCOPE)
http = credentials.authorize(httplib2.Http())

bigquery = discovery.build('bigquery', 'v2', http=http)
response = bigquery.tabledata().insertAll(
    projectId=PROJECT_ID,
    datasetId=DATASET_ID,
    tableId=TABLE_ID,
    body=body).execute()

print response
```

As Jordan says: "Note that this uses the appengine robot to authenticate with BigQuery, so you'll need to add the robot account to the ACL of the dataset. Note that if you also want to use the robot to run queries, not just stream, you need the robot to be a member of the project 'team' so that it is authorized to run jobs."
Here is a working code example from an App Engine app that streams records to a BigQuery table. It is open source at code.google.com:

<http://code.google.com/p/bigquery-e2e/source/browse/sensors/cloud/src/main.py#124>

To find out where the bigquery object comes from, see

<http://code.google.com/p/bigquery-e2e/source/browse/sensors/cloud/src/config.py>

Note that this uses the appengine robot to authenticate with BigQuery, so you'll need to add the robot account to the ACL of the dataset. Note that if you also want to use the robot to run queries, not just stream, you need the robot to be a member of the project 'team' so that it is authorized to run jobs.
12,602
39,771,024
I made a module and moved it to `/root/Downloads/Python-3.5.2/Lib/site-packages`. When I run the bash command `python3` in this folder to start the interpreter and import the module, it works. However, if I run `python3` in any other directory (e.g. `/root/Documents/Python`) it says

```none
ImportError: No module named 'exampleModule'
```

I was under the impression that Python would automatically search for modules in `site-packages` regardless of the directory. How can I fix this so that it works regardless of where I am?
2016/09/29
[ "https://Stackoverflow.com/questions/39771024", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6726467/" ]
Instead of moving the module I would suggest you add the location of the module to the `PYTHONPATH` environment variable. If this is set then the python interpreter will know where to look for your module. e.g. on Linux ```sh export PYTHONPATH=$PYTHONPATH:<insert module location here> ```
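If you'd rather not touch environment variables at all, the same effect can be had from inside the script by extending `sys.path` at runtime — a sketch (the directory below is just the one from the question; substitute your own module location):

```python
import sys

# Hypothetical module location -- replace with wherever your module lives.
module_dir = "/root/Downloads/Python-3.5.2/Lib/site-packages"

# Prepend so this directory is searched before the default locations.
if module_dir not in sys.path:
    sys.path.insert(0, module_dir)

# `import exampleModule` would now find modules stored in module_dir.
print(sys.path[0])
```

Note this only affects the running process; `PYTHONPATH` (or a `.pth` file in the real `site-packages`) is the persistent fix.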
**If you are a Windows user and you are getting import issues from site-packages, you can add the path of your site-packages directory to an environment variable**

[![enter image description here](https://i.stack.imgur.com/fmK0h.png)](https://i.stack.imgur.com/fmK0h.png)

`C:\Users\hp\AppData\Local\Programs\Python\Python310\Lib\site-packages` (site-packages directory)
12,603
8,121,886
I'm sure this has been answered somewhere, because it's a very basic question - I cannot, however, for the life of me, find the answer on the web. I feel like a complete idiot, but I have to ask, so here goes:

I'm writing Python code that will produce a list of all page addresses on a domain. This is done using Selenium 2 - my problem occurs when I try to access the list of all links produced by Selenium.

Here's what I have so far:

```
from selenium import webdriver
import time

HovedDomene = 'http://www.example.com'

Listlinker = []
Domenesider = []
Domenesider.append(HovedDomene)

driver = webdriver.Firefox()

for side in Domenesider:
    driver.get(side)
    time.sleep(10)
    Listlinker = driver.find_elements_by_xpath("//a")
    for link in Listlinker:
        if link in Domenesider:
            pass
        elif str(HovedDomene) in str(link):
            Domenesider.append(side)

print(Domenesider)

driver.close()
```

The `Listlinker` variable does not contain the links found on the page - instead the list contains (I'm guessing here) Selenium-specific objects called WebElements. I cannot, however, find any WebElement attributes that will give me the links - as a matter of fact, I can't find any examples of WebElement attributes being accessed in Python (at least not in a manner I can reproduce).

I would really appreciate any help you all could give me.

Sincerely, Rookie
2011/11/14
[ "https://Stackoverflow.com/questions/8121886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1020693/" ]
I'm not too familiar with Selenium's Python API, but you can probably retrieve the link using the `get_attribute(attributename)` method. So it should be something like:

```
linkstr = ""
for link in Listlinker:
    linkstr = link.get_attribute("href")
    if linkstr in Domenesider:
        pass
    elif str(HovedDomene) in linkstr:
        Domenesider.append(side)
```
> I've been checking up on your tip to not use time.sleep(10) as a page load wait. From reading different posts it seems to me that waiting for page loading is redundant with selenium 2. See for example link. The reason being that selenium 2 has an implicit wait-for-load function. Just thought I'd mention it to you, since you took the time to answer my question.

Sometimes Selenium behaves in unclear ways. And sometimes Selenium throws errors which aren't interesting to us.

```
By byCondition;
T result; // T is IWebElement
const int SELENIUMATTEMPTS = 5;
int timeout = 60 * 1000;
StopWatch watch = new StopWatch();

public T MatchElement<T>() where T : IWebElement
{
    try
    {
        try
        {
            this.result = this.find(WebDriver.Instance, this.byCondition);
        }
        catch (NoSuchElementException)
        {
        }

        while (this.watch.ElapsedMilliseconds < this.timeout && !this.ReturnCondMatched)
        {
            Thread.Sleep(100);
            try
            {
                this.result = this.find(WebDriver.Instance, this.byCondition);
            }
            catch (NoSuchElementException)
            {
            }
        }
    }
    catch (Exception ex)
    {
        if (this.IsKnownError(ex))
        {
            if (this.seleniumAttempts < SELENIUMATTEMPTS)
            {
                this.seleniumAttempts++;
                return MatchElement<T>();
            }
        }
        else
        {
            log.Error(ex);
        }
    }

    return this.result;
}

public bool IsKnownError(Exception ex)
{
    // if selenium finds nothing it throws an exception. This is bad practice to my mind.
    bool res = (ex.GetType() == typeof(NoSuchElementException));

    // this issue appears when selenium interacts with other plugins;
    // it is probably something connected with synchronization
    res = res || (ex.GetType() == typeof(InvalidSelectorException)
        && ex.Message.Contains("Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE)"
            + "[nsIDOMXPathEvaluator.createNSResolver]"));

    // OpenQA.Selenium.StaleElementReferenceException: Element not found in the cache
    res = res || (ex.GetType() == typeof(StaleElementReferenceException)
        && ex.Message.Contains("Element not found in the cache"));
    return res;
}
```

Sorry for the C#, but I'm a beginner in Python. The code is simplified, of course.
12,605
52,131,675
I have a list of lists of integers as shown below:

```
flst = [[19], [21, 31], [22], [23], [9, 25], [26], [27, 29], [28], [27, 29], [2, 8, 30], [21, 31], [5, 11, 32], [33]]
```

I want to get the list of integers in increasing order as shown below:

```
out = [19, 21, 22, 23, 25, 26, 27, 28, 29, 30, 31, 32, 33]
```

I want to compare every list item with the item(s) in the next list and get the item which is greater than the preceding item. For example: the first item in the list is [19] and the next list's items are [21, 31]. Both elements are greater than 19, but 21 is nearer to 19, so it should be selected.

I'm learning Python and tried the following code:

```
for i in range(len(flst)-2):
    for j in flst[i+1]:
        if j in range(flst[j], flst[j+2]):
            print(j)
```

I went through many posts about incremental order on Stack Overflow, but was unable to find a solution.
2018/09/01
[ "https://Stackoverflow.com/questions/52131675", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6017391/" ]
Sure, you can pass field names as arguments and use `[arg]` accessors as you already do with `[key]`: ```js function populateClinicRoomSelect(object, valueField, labelField) { var selectArray = []; var options = []; for(var key in object) { if(object.hasOwnProperty(key)) { options = { value: object[key][valueField], label: object[key][labelField], }; selectArray = selectArray.concat(options); } } return selectArray; } const object = { a: { id: 1, room: 'first room' }, b: { id: 2, room: 'second room' } } const result = populateClinicRoomSelect(object, 'id', 'room'); console.log(result) ```
You mean like this? ```js function populateClinicRoomSelect(object, value, label) { value = value || "id"; // defaults value to id label = label || "RoomName"; // defaults label to RoomName var selectArray = []; var options = []; for(var key in object) { if(object.hasOwnProperty(key)) { options = { value: object[key][value], label: object[key][label], }; selectArray = selectArray.concat(options); } } return selectArray; } let obj = { 1: { id:1, RoomName: "Blue Lounge" }, 2: { id:2, RoomName: "Red Lounge" } } console.log(populateClinicRoomSelect(obj, 'id', 'RoomName')); ```
12,606
44,456,572
I am getting the following error - Missing required dependencies ['numpy']

Standalone, and via Django without Apache2 integration, the code works like a charm; however, things start to fail when used with Apache2. It refuses to import pandas or numpy, giving one error after another.

I am using Apache2, libapache2-mod-wsgi-py3, Python 3.5 and Anaconda 2.3.0

```
Request Method: GET
Request URL: http://127.0.0.1/api/users/0/
Django Version: 1.10.5
Exception Type: ImportError
Exception Value: Missing required dependencies ['numpy']
Exception Location: /home/fractaluser/anaconda3/lib/python3.4/site-packages/pandas/__init__.py in <module>, line 18
Python Executable: /usr/bin/python3
Python Version: 3.5.2
Python Path: ['/home/fractaluser/anaconda3/lib/python3.4/site-packages',
 '/home/fractaluser/anaconda3/lib/python3.4/site-packages/Sphinx-1.3.1-py3.4.egg',
 '/home/fractaluser/anaconda3/lib/python3.4/site-packages/setuptools-27.2.0-py3.4.egg',
 '/usr/lib/python35.zip',
 '/usr/lib/python3.5',
 '/usr/lib/python3.5/plat-x86_64-linux-gnu',
 '/usr/lib/python3.5/lib-dynload',
 '/usr/local/lib/python3.5/dist-packages',
 '/usr/lib/python3/dist-packages',
 '/var/www/html/cgmvp']
Server time: Fri, 9 Jun 2017 11:12:37 +0000
```
2017/06/09
[ "https://Stackoverflow.com/questions/44456572", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1759084/" ]
In your class fields:

```
private Handler progressHandler = new Handler();
private Runnable progressRunnable = new Runnable() {
    @Override
    public void run() {
        progressDialog.setProgress(progressValue);
        progressHandler.postDelayed(this, 1000);
    }
};
```

When the time-consuming thread is started:

```
// Here start the time consuming thread
// Here show the ProgressDialog
progressHandler.postDelayed(progressRunnable, 1000);
```

When the time-consuming thread ends:

```
progressHandler.removeCallbacks(progressRunnable);
// Here dismiss the ProgressDialog.
```

**ADDED:** Instead of the new Thread(new Runnable()) that you probably use for your time-consuming code, I propose this:

To initialize the task:

```
MyTask task = new MyTask();
task.execute();
// Here show the ProgressDialog
progressHandler.postDelayed(progressRunnable, 1000);
```

Add this private class inside your main class:

```
private class MyTask extends AsyncTask<Void, Void, Void> {

    @Override
    protected Void doInBackground(Void... voids) {
        // Here do your time consuming work
        return null;
    }

    @Override
    protected void onPostExecute(Void aVoid) {
        // This will be called on the UI thread after doInBackground returns
        progressHandler.removeCallbacks(progressRunnable);
        progressDialog.dismiss();
    }
}
```
Do something like this:

```
new Thread(new Runnable() {
    public void run() {
        while (prStatus < 100) {
            prStatus += 1;
            handler.post(new Runnable() {
                public void run() {
                    pb_2.setProgress(prStatus);
                }
            });
            try {
                Thread.sleep(150);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            if (prStatus == 100)
                prStatus = 0;
        }
    }
}).start();
```
12,607
17,056,796
I'm trying to send an ASCII command over TCP/IP, but Python (I think) adds a header to the string. If I do a `s.send(bytes('RV\n ', 'ascii'))` I get an eRV rather than RV when I inspect the command going out. Any ideas? [Previous post](https://stackoverflow.com/questions/16968253/python-3-tcp-ip-ascii-command).
2013/06/12
[ "https://Stackoverflow.com/questions/17056796", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2460673/" ]
If you want to stay away from SQL you can try [EntityFramework.Extended](http://weblogs.asp.net/pwelter34/archive/2011/11/29/entity-framework-batch-update-and-future-queries.aspx). It provides support for writing LINQ-like batch delete/update queries. I only tried it once; it worked nicely, but I'm not sure whether I would use it in production.
There are two ways I can think of right off hand to achieve what you are seeking. 1) create a stored procedure and call it from your entity model. 2) Send the raw command text to the db, see this [microsoft article](http://msdn.microsoft.com/en-us/data/jj592907.aspx)
12,608
17,420,528
I am following the book "How to Think Like a Computer Scientist" to learn Python and am having some problems understanding the classes and objects chapter. An exercise there says to write a function named moveRect that takes a Rectangle and 2 parameters named dx & dy. It should change the location of the rectangle by adding dx to the x co-ordinate of corner and dy to the y co-ordinate of corner.

Now, I am not really sure if the code I have written is correct or not. So, let me tell you what I was trying to do and you can tell me whether I was doing it right.

First I created a class Rectangle, then I created an instance of it and entered the details such as the values of the co-ordinates x and y and the width and height of the rectangle. So, this was my code earlier:

```
class Rectangle:
    pass

rect=Rectangle()
rect.x=3.0
rect.y=4.0
rect.width=50
rect.height=120

def moveRect(Rectangle,dx,dy):
    Rectangle.x=Rectangle.x + dx
    Rectangle.y=Rectangle.y + dy

dx=raw_input("enter dx value:")
dy=raw_input("enter dy value:")

moveRect(Rectangle,dx,dy)
```

But when I ran this code it gave me an attribute error: class Rectangle has no attribute x

Therefore, I moved the following lines to below the moveRect function

```
rect=Rectangle()
rect.x=3.0
rect.y=4.0
rect.width=50
rect.height=120
```

and thus the code became:

```
class Rectangle:
    pass

def moveRect(Rectangle,dx,dy):
    Rectangle.x=Rectangle.x + dx
    Rectangle.y=Rectangle.y + dy

rect=Rectangle()
rect.x=3.0
rect.y=4.0
rect.width=50
rect.height=120

dx=raw_input("enter dx value:")
dy=raw_input("enter dy value:")

moveRect(Rectangle,dx,dy)
```

But this code still gives me an error. So, what's actually wrong with this code? At the moment, I feel as if I wrote this code using trial and error, and changed parts around when I saw an error. I want to properly understand how this works, so please shed some light on this. The book "How to Think Like a Computer Scientist" hasn't introduced init by chapter 12 and therefore I need to do it without using init.
2013/07/02
[ "https://Stackoverflow.com/questions/17420528", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1297440/" ]
In your first example, you passed the *class* as an argument instead of the *instance* you created. Because there is no `self.x` in the class `Rectangle`, the error was raised. You could just put the function in the class: ``` class Rectangle: def __init__(self, x, y, width, height): self.x = x self.y = y self.width = width self.height = height def moveRect(self, dx, dy): self.x += dx self.y += dy rect = Rectangle(3.0, 4.0, 50, 120) dx = raw_input("enter dx value:") dy = raw_input("enter dy value:") rect.moveRect(float(dx), float(dy)) ```
Frob instances, not types. ``` moveRect(rect, dx, dy) ```
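Spelled out against the question's own code — a minimal sketch in which only the call changes, passing the instance `rect` instead of the class `Rectangle` (no `__init__` required; converting the input strings to floats is an extra fix, since adding a string to `3.0` would also fail — the values "2" and "3.5" below stand in for whatever the user types):

```python
class Rectangle:
    pass

def moveRect(rect, dx, dy):
    # Mutates the instance that was passed in.
    rect.x = rect.x + dx
    rect.y = rect.y + dy

rect = Rectangle()
rect.x = 3.0
rect.y = 4.0
rect.width = 50
rect.height = 120

# raw_input/input return strings, so convert before doing arithmetic.
moveRect(rect, float("2"), float("3.5"))
print(rect.x, rect.y)  # 5.0 7.5
```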
12,609
30,460,461
I have this in my project urlconf `photocheck.urls`: ``` urlpatterns = patterns('', url(r'^admin/docs/', include('django.contrib.admindocs.urls')), url(r'^admin/', include(admin.site.urls)), url(r'^rest/', include('core.urls')), url(r'^shotmaker/', include('shotmaker.urls')), url(r'^report/', include('report.urls')), url(r'^users/', include('users.urls')), ) + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) ``` this is my `core` app urlconf: ``` router.register(r'cameras', views.CameraViewSet) router.register(r'lamps', views.LampViewSet) router.register(r'snapshots', views.SnapshotViewSet) urlpatterns = patterns( 'core.views', url(r'', include(router.urls)) ) ``` this is `shotmaker` urlconf: ``` urlpatterns = patterns( 'shotmaker.views', url(r'^$', views.CameraList.as_view(), name='camera_list'), url(r'^camera/(?P<pk>[-\w]+)/$', views.CameraDetail.as_view(), name='camera_detail'), url(r'^save_preview_image/(?P<pk>[-\w]+)/$', views.save_preview_image), url(r'^get_position/(?P<pk>[-\w]+)/$', views.get_position), url(r'^set_position/(?P<pk>[-\w]+)/$', views.set_position), url(r'^update_calibrating_image/(?P<pk>[-\w]+)/$', views.update_calibrating_image), url(r'^save_preview_get_position/(?P<pk>[-\w]+)/$', views.save_preview_get_position), ) ``` and `report` urlconf ``` urlpatterns = patterns( 'report.views', url(r'^$', views.LampReportView.as_view(), name='lamp_report'), ) ``` and `users` urlconf ``` urlpatterns = patterns('', url(r'^login/$', views.MyLoginView.as_view(), name="login"), url(r'^logout/$', LogoutView.as_view(), name="logout"), ) ``` now when I do ``` reverse('lamp_report') ``` i get this: ``` Traceback (most recent call last): File "<console>", line 1, in <module> File "/Users/1111/.virtualenvs/gost_photo/lib/python2.7/site-packages/django/core/urlresolvers.py", line 546, in reverse return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)) File 
"/Users/1111/.virtualenvs/gost_photo/lib/python2.7/site-packages/django/core/urlresolvers.py", line 410, in _reverse_with_prefix self._populate() File "/Users/1111/.virtualenvs/gost_photo/lib/python2.7/site-packages/django/core/urlresolvers.py", line 269, in _populate for pattern in reversed(self.url_patterns): File "/Users/1111/.virtualenvs/gost_photo/lib/python2.7/site-packages/django/core/urlresolvers.py", line 367, in url_patterns patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) File "/Users/1111/.virtualenvs/gost_photo/lib/python2.7/site-packages/django/core/urlresolvers.py", line 361, in urlconf_module self._urlconf_module = import_module(self.urlconf_name) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) File "/Users/1111/_gost/photo/photo-monitoring/photocheck/urls.py", line 15, in <module> url(r'^users/', include('users.urls')), File "/Users/1111/.virtualenvs/gost_photo/lib/python2.7/site-packages/django/conf/urls/__init__.py", line 28, in include urlconf_module = import_module(urlconf_module) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) File "/Users/1111/_gost/photo/photo-monitoring/users/urls.py", line 4, in <module> import views File "/Users/1111/_gost/photo/photo-monitoring/users/views.py", line 6, in <module> class MyLoginView(LoginView): File "/Users/1111/_gost/photo/photo-monitoring/users/views.py", line 8, in MyLoginView success_url = reverse('lamp_report') File "/Users/1111/.virtualenvs/gost_photo/lib/python2.7/site-packages/django/core/urlresolvers.py", line 546, in reverse return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)) File "/Users/1111/.virtualenvs/gost_photo/lib/python2.7/site-packages/django/core/urlresolvers.py", line 410, in _reverse_with_prefix self._populate() File 
"/Users/1111/.virtualenvs/gost_photo/lib/python2.7/site-packages/django/core/urlresolvers.py", line 269, in _populate for pattern in reversed(self.url_patterns): File "/Users/1111/.virtualenvs/gost_photo/lib/python2.7/site-packages/django/core/urlresolvers.py", line 376, in url_patterns raise ImproperlyConfigured(msg.format(name=self.urlconf_name)) ImproperlyConfigured: The included urlconf 'photocheck.urls' does not appear to have any patterns in it. If you see valid patterns in the file then the issue is probably caused by a circular import. ``` so where is the circular import here? and how can I avoid it?
2015/05/26
[ "https://Stackoverflow.com/questions/30460461", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4057053/" ]
Use [`reverse_lazy()`](https://docs.djangoproject.com/en/stable/ref/urlresolvers/#reverse-lazy) instead of `reverse()`.
I got the same error and solved it, but `reverse_lazy()` alone was not enough; call it with a namespaced URL name, like `reverse_lazy('app_name:url_name')`.
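For intuition, `reverse_lazy()` works because it defers the URL lookup until the value is actually used, after the URLconf has finished loading. A dependency-free sketch of that idea (`LazyStr` and `registry` are made-up stand-ins for illustration, not Django internals):

```python
class LazyStr:
    """Defer computing a string until it is first used."""
    def __init__(self, func):
        self._func = func    # resolver to call later
        self._value = None

    def __str__(self):
        if self._value is None:
            self._value = self._func()  # resolved only on first use
        return self._value


registry = {}  # stands in for Django's URL patterns

# At class-definition time the "URLconf" is still empty, so an eager
# lookup would raise a KeyError; a lazy one does not.
success_url = LazyStr(lambda: registry["lamp_report"])

registry["lamp_report"] = "/report/"  # the "URLconf" finishes loading

print(str(success_url))  # -> /report/
```

The same ordering problem is what the traceback in the question shows: `reverse()` runs at import time, before `photocheck.urls` has finished loading.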
12,613
20,338,064
I am trying to execute a command on a file such as chmod in a python script. How can I get the file name from command line to the script? I want to execute the script like so ./addExecute.py blah Where blah is the name of some file. The code I have is this: ``` #!/usr/bin/python import sys import os file = sys.argv[1] os.system("chmod 700 file") ``` I keep getting the error that it cannot access 'file' no such file or directory.
2013/12/02
[ "https://Stackoverflow.com/questions/20338064", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3058993/" ]
``` os.system("chmod 700 file") ^^^^--- literal string, looking for a file named "file" ``` You probably want ``` os.system("chmod 700 " + file) ^^^^^^---concatenate your variable named "file" ```
It could be something like ``` os.system("chmod 700 %s" % file) ```
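Both answers interpolate the filename into a shell command string, which breaks on names containing spaces or shell metacharacters. A safer sketch, assuming a POSIX system, skips the shell entirely with `os.chmod`:

```python
import os
import stat
import tempfile

# Create a throwaway file to operate on.
fd, path = tempfile.mkstemp()
os.close(fd)

# Equivalent of `chmod 700 <path>`, with no shell involved, so
# spaces or quotes in the filename cannot break the command.
os.chmod(path, 0o700)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # -> 0o700

os.remove(path)
```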
12,614
69,963,185
I am trying to convert excel database into python. I have a trading data which I need to import into the system in xml format. my code is following: ``` df = pd.read_excel("C:/Users/junag/Documents/XML/Portfolio2.xlsx", sheet_name="Sheet1", dtype=object) root = ET.Element('trading-data') root.set('xmlns:xsi', 'http://www.w3.org/2001/XMLSchema-instance') tree = ET.ElementTree(root) Portfolios = ET.SubElement(root, "Portfolios") Defaults = ET.SubElement(Portfolios, "Defaults", BaseCurrency="USD") for row in df.itertuples(): Portfolio = ET.SubElement(Portfolios, "Portfolio", Name=row.Name, BaseCurrency=row.BaseCurrency2, TradingPower=str(row.TradingPower), ValidationProfile=row.ValidationProfile, CommissionProfile=row.CommissionProfile) PortfolioPositions = ET.SubElement(Portfolio, "PortfolioPositions") if row.Type == "Cash": PortfolioPosition = ET.SubElement(PortfolioPositions, "PortfolioPosition", Type=row.Type, Volume=str(row.Volume)) Cash = ET.SubElement(PortfolioPosition, 'Cash', Currency=str(row.Currency)) else: PortfolioPosition = ET.SubElement(PortfolioPositions, "PortfolioPosition", Type=row.Type, Volume=str(row.Volume), Invested=str(row.Invested), BaseInvested=str(row.BaseInvested)) Instrument = ET.SubElement(PortfolioPosition, 'Instrument', Ticker=str(row.Ticker), ISIN=str(row.ISIN), Market=str(row.Market), Currency=str(row.Currency2), CFI=str(row.CFI)) ET.indent(tree, space="\t", level=0) tree.write("Portfolios_converted2.xml", encoding="utf-8") ``` The output looks like this: [enter image description here](https://i.stack.imgur.com/wDnI8.png) While I need it to look like this: [enter image description here](https://i.stack.imgur.com/ee3YF.png) How can I improve my code to make the output xml look better? please advise here the excel data: [![enter image description here](https://i.stack.imgur.com/TlR8G.png)](https://i.stack.imgur.com/TlR8G.png)
2021/11/14
[ "https://Stackoverflow.com/questions/69963185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17410005/" ]
If you are creating a binding then the property must be notifiable, that is, have an associated signal and emit it when it changes: ```py class Manager(QObject): processResult = Signal(bool) df_changed = Signal() def __init__(self): QObject.__init__(self) self.ds = "loading .." @Slot() def start_processing(self): self.set_ds("500") def read_ds(self): return self.ds def set_ds(self, val): self.ds = val self.df_changed.emit() paramDs = Property(str, read_ds, set_ds, notify=df_changed) ```
You should set the row value after setting the property, like this: ``` tbModel.setRow(1, { param_name: "number of classes", value: backend.paramDs } ) ``` `tbModel` is the id of your Table View's model.
12,619
39,533,766
I'm having a little problem with a modal in django. I have a link which calls an id and the id is a modal. However, the modal isn't opening. I'm pretty sure that this is happening because the link is inside a "automatic" form, but I'm new in django and python so I have no idea. The code is: ``` {% block body %} <div class="col-lg-12 page-content"> <h1 class="content-title">Meus Dados</h1> <hr class="star-light"> <div class="form-content"> <form class="form-horizontal" method = 'POST' action="/user/edituser/"> {% csrf_token %} {% for field in form_user %} {% bootstrap_field field exclude="password,repeat_password" %} {% endfor %} <div class="button-div"> <a class="btn btn-info btn-block btn-password" href="#change-password" data-toggle="modal">Alterar senha</a> {% buttons %} <button class="btn btn-success btn-block btn-edit" type = 'submit'>Salvar Dados</button> {% endbuttons %} </div> </form> <a class="btn btn-danger btn-block btn-delete" href="/user/delete" name="delete">Excluir minha conta</a> </div> </div> <div class="modal hide" id="change-password"> <div class="modal-header"> <button class="close" data-dismiss="modal">&times;</button> <p class="modal-title" id="myModalLabel">Change Password</p> </div> <div class="modal-body"> <div class="row"> <div class="modal-col col-sm-12"> <div class="well"> <form method="post" id="passwordForm"> <input type="password" class="input-lg form-control" name="password1" id="password1" placeholder="New Password"> <input type="password" class="input-lg form-control" name="password2" id="password2" placeholder="Repeat Password"> </form> </div> </div> </div> </div> <div class="modal-footer"> <a href="#" class="btn btn-success btn-bloc">Alterar Senha</a> </div> </div> {% endblock %} ``` Any doubts, please ask. Thanks.
2016/09/16
[ "https://Stackoverflow.com/questions/39533766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5516104/" ]
Change your `<a>` tag to the following: ``` <a class="btn btn-info btn-block btn-password" href="#" data-toggle="modal" data-target="#change-password">Alterar senha</a> ``` At least this is how I do it in my Django templates. As I think **@souldeux** was trying to say, you need to use the `data-target` attribute to specify the modal itself, rather than the `href`; I usually just use `#` for that in cases like this. Also, make sure that you are not only loading the bootstrap css code, but also the bootstrap js libraries. In other words, make sure you have the following (or some equivalent) in your template: ``` <!-- Latest compiled and minified JavaScript (at the time of this post) --> <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script> ``` In addition to your css: ``` <!-- Latest compiled and minified CSS (at the time of this post) --> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous"> ``` Which I assume you already have, because otherwise things would look weird.
``` <a class="btn btn-info btn-block btn-password" href="#change-password" data-toggle="modal">Alterar senha</a> ``` You need a `data-target` attribute in addition to your `data-toggle`. <http://getbootstrap.com/javascript/#live-demo>
12,620
6,403,757
I tried installing pycurl via pip. it didn't work and instead it gives me this error. ``` running install running build running build_py running build_ext building 'pycurl' extension gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch i386 -arch ppc -arch x86_64 -pipe -DHAVE_CURL_SSL=1 -I/System/Library/Frameworks/ Python.framework/Versions/2.6/include/python2.6 -c src/pycurl.c -o build/temp.macosx-10.6-universal-2.6/src/pycurl.o src/pycurl.c:85:4: warning: #warning "libcurl was compiled with SSL support, but configure could not determine which " "library was used; thus no SSL crypto locking callbacks will be set, which may " "cause random crashes on SSL requests" /usr/libexec/gcc/powerpc-apple-darwin10/4.2.1/as: assembler (/usr/bin/ ../libexec/gcc/darwin/ppc/as or /usr/bin/../local/libexec/gcc/darwin/ ppc/as) for architecture ppc not installed Installed assemblers are: /usr/bin/../libexec/gcc/darwin/x86_64/as for architecture x86_64 /usr/bin/../libexec/gcc/darwin/i386/as for architecture i386 src/pycurl.c:85:4: warning: #warning "libcurl was compiled with SSL support, but configure could not determine which " "library was used; thus no SSL crypto locking callbacks will be set, which may " "cause random crashes on SSL requests" src/pycurl.c:3906: fatal error: error writing to -: Broken pipe compilation terminated. src/pycurl.c:85:4: warning: #warning "libcurl was compiled with SSL support, but configure could not determine which " "library was used; thus no SSL crypto locking callbacks will be set, which may " "cause random crashes on SSL requests" lipo: can't open input file: /var/tmp//ccspAJOg.out (No such file or directory) error: command 'gcc-4.2' failed with exit status 1 ```
2011/06/19
[ "https://Stackoverflow.com/questions/6403757", "https://Stackoverflow.com", "https://Stackoverflow.com/users/200412/" ]
I got it working using this ``` sudo env ARCHFLAGS="-arch x86_64" pip install pycurl ```
If you are on Linux with apt-get: ``` lnx#> apt-cache search pycurl ``` To install: ``` lnx#> sudo apt-get install python-pycurl ``` If on Linux with yum: ``` lnx#> yum search pycurl I get this on my machine: python-pycurl.x86_64 : A Python interface to libcurl ``` To install I did: `lnx#> sudo yum install python-pycurl` Another alternative is to use easy\_install; install setuptools via yum or apt-get first. --- If you're using Windows, then get pycurl from [HERE](http://www.lfd.uci.edu/~gohlke/pythonlibs/#pycurl)
12,622
33,442,411
In one of our homework problems, we need to write a class in python called Gate which contains the drawing and function of many different gates in a circuit. It describes as follows: ``` in1 = Gate("input") out1 = Gate("output") not1 = Gate("not") ``` Here `in1`, `out1`, `not1` are all instances of this class. What do the `("input")` `("output")` `("not")` mean? are they subclass or something? We are only told that when we define a class using: ``` class Gate(object) ``` when we make an instance we use: ``` in1 = Gate() ``` I haven't seen stuff inside a () after the class name, how to understand that?
2015/10/30
[ "https://Stackoverflow.com/questions/33442411", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5508199/" ]
Create a wrapper function around `fun` to select an element of the array. For example, the following will integrate the first element of the array. ``` from scipy.integrate import quad # The function you want to integrate def fun(x, a): return np.asarray([a * x, a * x * x]) # The wrapper function def wrapper(x, a, index): return fun(x, a)[index] # The integration quad(wrapper, 0, 1, args=(1, 0)) ``` Following @RobertB's suggestion, you should avoid defining a function `int` because it messes with the builtin names.
Your function returns an array, but `integrate.quad` needs a scalar to integrate, so you want to give it a function that returns one element of your array instead. You can do that via a quick `lambda`: ``` def integrate(a, index=0): return quad(lambda x, y: fun(x, y)[index], 0, 1, args=(a,)) ```
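If you just want to sanity-check the per-component idea without SciPy, the same selection-by-index pattern can be paired with a plain trapezoidal sum (a rough dependency-free sketch, not a replacement for `quad`; `trapz_component` is a made-up helper name):

```python
def fun(x, a):
    # Vector-valued integrand, as in the question.
    return [a * x, a * x * x]

def trapz_component(f, lo, hi, index, n=1000, **kw):
    """Integrate one component of a vector-valued f with the trapezoid rule."""
    h = (hi - lo) / n
    total = 0.5 * (f(lo, **kw)[index] + f(hi, **kw)[index])
    total += sum(f(lo + i * h, **kw)[index] for i in range(1, n))
    return total * h

# integral of a*x over [0,1] is a/2; of a*x**2 it is a/3
print(trapz_component(fun, 0.0, 1.0, 0, a=1.0))
print(trapz_component(fun, 0.0, 1.0, 1, a=1.0))
```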
12,623
38,668,389
Im in need of help outputting the json key with python. I tried to output the name "carl". Python code : ``` from json import loads import json,urllib2 class yomamma: def __init__(self): url = urlopen('http://localhost/name.php').read() name = loads(url) print "Hello" (name) ``` Php code (for the json which i made): ``` <?php $arr = array('person_one'=>"Carl", 'person_two'=>"jack"); echo json_encode($arr); ``` the output of the php is : {"person\_one":"Carl","person\_two":"jack"}
2016/07/29
[ "https://Stackoverflow.com/questions/38668389", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6580871/" ]
I'll just assume the PHP code works correctly; I don't know PHP very well. On the client, I recommend using [`requests`](http://docs.python-requests.org/en/master/) (installable through `pip install requests`): ``` import requests r = requests.get('http://localhost/name.php') data = r.json() print data['person_one'] ``` The `.json` method returns a Python dictionary. Taking a closer look at your code, it seems you're trying to concatenate two strings by just writing them next to each other. Instead, use either the concatenation operator (`+`): ``` print "Hello" + data['person_one'] ``` Alternatively, you can use the string formatting functionality: ``` print "Hello {}".format(data['person_one']) ``` Or even fancier (but maybe a bit complex to understand for the start): ``` r = requests.get('http://localhost/name.php') print "Hello {person_one}".format(**r.json()) ```
try this: ``` import json person_data = json.loads(url) print "Hello {}".format(person_data["person_one"]) ```
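As a side note, the parsing step can be tried without any web server by feeding the PHP output shown in the question directly into the standard-library `json` module (this only illustrates the parsing, not the HTTP call):

```python
import json

# The exact string the PHP script in the question emits.
raw = '{"person_one":"Carl","person_two":"jack"}'

data = json.loads(raw)  # -> {'person_one': 'Carl', 'person_two': 'jack'}
greeting = "Hello {}".format(data["person_one"])
print(greeting)  # -> Hello Carl
```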
12,624
31,346,790
I would like to write a simple script to iterate through all the files in a folder and unzip those that are zipped (.zip) to that same folder. For this project, I have a folder with nearly 100 zipped .las files and I'm hoping for an easy way to batch unzip them. I tried with following script ``` import os, zipfile folder = 'D:/GISData/LiDAR/SomeFolder' extension = ".zip" for item in os.listdir(folder): if item.endswith(extension): zipfile.ZipFile.extract(item) ``` However, when I run the script, I get the following error: ``` Traceback (most recent call last): File "D:/GISData/Tools/MO_Tools/BatchUnzip.py", line 10, in <module> extract = zipfile.ZipFile.extract(item) TypeError: unbound method extract() must be called with ZipFile instance as first argument (got str instance instead) ``` I am using the python 2.7.5 interpreter. I looked at the documentation for the zipfile module (<https://docs.python.org/2/library/zipfile.html#module-zipfile>) and I would like to understand what I'm doing incorrectly. I guess in my mind, the process would go something like this: 1. Get folder name 2. Loop through folder and find zip files 3. Extract zip files to folder Thanks Marcus, however, when implementing the suggestion, I get another error: ``` Traceback (most recent call last): File "D:/GISData/Tools/MO_Tools/BatchUnzip.py", line 12, in <module> zipfile.ZipFile(item).extract() File "C:\Python27\ArcGIS10.2\lib\zipfile.py", line 752, in __init__ self.fp = open(file, modeDict[mode]) IOError: [Errno 2] No such file or directory: 'JeffCity_0752.las.zip' ``` When I use print statements, I can see that the files are in there. 
For example: ``` for item in os.listdir(folder): if item.endswith(extension): print os.path.abspath(item) filename = os.path.basename(item) print filename ``` yields: ``` D:\GISData\Tools\MO_Tools\JeffCity_0752.las.zip JeffCity_0752.las.zip D:\GISData\Tools\MO_Tools\JeffCity_0753.las.zip JeffCity_0753.las.zip ``` As I understand the documentation, ``` zipfile.ZipFile(file[, mode[, compression[, allowZip64]]]) ``` > > Open a ZIP file, where file can be either a path to a file (a string) or a file-like object > > > It appears to me like everything is present and accounted for. I just don't understand what I'm doing wrong. Any suggestions? Thank You
2015/07/10
[ "https://Stackoverflow.com/questions/31346790", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4688131/" ]
Below is the code that worked for me: ``` import os, zipfile dir_name = 'C:\\SomeDirectory' extension = ".zip" os.chdir(dir_name) # change directory from working dir to dir with files for item in os.listdir(dir_name): # loop through items in dir if item.endswith(extension): # check for ".zip" extension file_name = os.path.abspath(item) # get full path of files zip_ref = zipfile.ZipFile(file_name) # create zipfile object zip_ref.extractall(dir_name) # extract file to dir zip_ref.close() # close file os.remove(file_name) # delete zipped file ``` Looking back at the code I had amended, the directory was getting confused with the directory of the script. The following also works while not ruining the working directory. First remove the line ``` os.chdir(dir_name) # change directory from working dir to dir with files ``` Then assign file\_name as ``` file_name = dir_name + "/" + item ```
You need to construct a `ZipFile` object with the filename, and *then* extract it: ``` zipfile.ZipFile.extract(item) ``` is wrong. ``` zipfile.ZipFile(item).extractall() ``` will extract all files from the zip file with the name contained in `item`. I think you should more closely read the documentation to `zipfile` :) but you're on the right track!
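A self-contained way to sanity-check the loop: build a zip in a temporary directory, then run essentially the answer's extraction logic against it (paths are joined explicitly instead of using `os.chdir`):

```python
import os
import tempfile
import zipfile

dir_name = tempfile.mkdtemp()

# Make a sample archive to extract.
member = os.path.join(dir_name, "data.txt")
with open(member, "w") as f:
    f.write("hello")
zip_path = os.path.join(dir_name, "sample.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.write(member, arcname="data.txt")
os.remove(member)  # keep only the zip

# Extraction loop, as in the answer, with explicit paths.
for item in os.listdir(dir_name):
    if item.endswith(".zip"):
        file_name = os.path.join(dir_name, item)
        with zipfile.ZipFile(file_name) as zip_ref:
            zip_ref.extractall(dir_name)
        os.remove(file_name)

print(sorted(os.listdir(dir_name)))  # -> ['data.txt']
```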
12,625
55,355,567
I am writing code to read data from a CSV file to a pandas dataframe and to get the unique values and concatenate them as a string. The problem is that one of the columns contains the values `True` and `False`. So while concatenating the values I am getting the error > > > ``` > sequence item 0: expected str instance, bool found > > ``` > > I want python to treat `True` as string rather than boolean value. I have tried many options but none worked. The full code and a traceback are attached below. ``` import pandas as pd df=pd.read_csv('C:/Users/jaiveeru/Downloads/run_test1.csv') cols=df.columns.tolist() for i in cols: lst=df[i].unique().tolist() str1 = ','.join(lst) lst2=[str1] ``` > > > ``` > ----> 5 str1 = ','.join(lst) > TypeError: sequence item 0: expected str instance, bool found > > ``` > > `lst2` should have values `['True,False']`
2019/03/26
[ "https://Stackoverflow.com/questions/55355567", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3584680/" ]
Python 3 does not perform implicit casts. You will need to explicitly cast your booleans to strings. This can be done easily with the [`map` builtin function](https://docs.python.org/3/library/functions.html?highlight=builtin%20filter#map), which applies a function to each item of an iterable and returns the result: ``` str1 = ','.join(map(str, lst)) ```
Use `.astype(str)` **Ex:** ``` df[i].unique().astype(str).tolist() ```
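Both answers reduce to the same rule: convert every element to `str` before joining. A quick stdlib-only check of the `map` version, which works on any iterable (no pandas needed):

```python
lst = [True, False]

joined = ",".join(map(str, lst))
print(joined)  # -> True,False

# Mixed types work the same way:
print(",".join(map(str, [1, None, 2.5, "x"])))  # -> 1,None,2.5,x
```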
12,634
54,880,349
I have installed Python 3.7 64-bit on my 64-bit OS. I have also Installed mysql-installer-community-8.0.15.0 plus I installed MySQL connector using this code `python -m pip install mysql-connector` and still when I try to import mysql.connector. I get this error. > > "C:\Users\Basir > Payenda\PycharmProjects\newprj\venv\Scripts\python.exe" > "C:/Users/Basir Payenda/PycharmProjects/newprj/app.py" Traceback (most > recent call last): File "C:/Users/Basir > Payenda/PycharmProjects/newprj/app.py", line 1, in > import mysql.connector ModuleNotFoundError: No module named 'mysql' > > > In addition, I installed mysql-connector-python-8.0.15-py3.7-windows-x86-64bit as well on my machine. Please help me solve this problem, I have tried all possible things I could. Thank you edit: Used the following codes to install mysql.connector ``` C:\Users\Basir Payenda\AppData\Local\Programs\Python\Python37\Scripts>python -m pip install mysql-connector ``` and ``` pip3 install mysql-connector ``` using pip3 I get this message: > > Requirement already satisfied: mysql-connector in C:\users\basir > payenda\appdata\local\programs\python\python37\lib\site-packages > <2.1.6> > > > and again when I go to pycharm and import mysql.connector I get above stated message no module 'mysql' found edit after 2 hours: No answers, I tried this. I uninstalled everything, python, mysql, pychar and reinstalled. Again the same problem. SHOULD I CHANGE MY COMPUTER?
2019/02/26
[ "https://Stackoverflow.com/questions/54880349", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11080590/" ]
You are doing ``` throw new InvalidTestScore("Invalid Test Score"); ``` so you have to declare that your method is actually going to throw this exception ``` public static void inTestScore(int[] arr) throws InvalidTestScore ```
You must declare that your method may throw this exception: ``` public static void inTestScore(int[] arr) throws InvalidTestScore { ... } ``` This allows the compiler to force any method that calls your method to either catch this exception, or declare that it may throw it.
12,635
63,710,044
I am trying to use Serverless Framework to deploy a Python Fast API WebApp. Is is related to issue: <https://github.com/jordaneremieff/mangum/issues/126> When I deploy it using serverless, sls depoy and Invoke the function I am getting the following error: ``` [ERROR] KeyError: 'requestContext' Traceback (most recent call last): File "/var/task/mangum/adapter.py", line 110, in __call__ return self.handler(event, context) File "/var/task/mangum/adapter.py", line 130, in handler if "eventType" in event["requestContext"]: ``` I have tried with python 3.8 and 3.7. Not able to find a resolution on the web for the same. Also tried using the parameters spec\_version=2(which is not required I feel). I feel something is missing here, the issue is somewhere around: ``` Adapter requires the information in the event and request context to form the ASGI connection scope. ``` Wondering if anyone has got FastAPI working on AWS Lambda using serverless Framework. My handler: ``` from fastapi import FastAPI from mangum import Mangum app = FastAPI() handler = Mangum(app) @app.get("/ping") def ping(): return {'response': 'pong'} ``` serverless.yml: ``` provider: name: aws runtime: python3.8 stage: dev region: ap-southeast-1 memorySize: 256 functions: ping: handler: ping.handler events: - http: path: ping method: get cors: true ``` My requirements.txt ``` appnope==0.1.0 backcall==0.2.0 certifi==2020.6.20 chardet==3.0.4 click==7.1.2 decorator==4.4.2 fastapi==0.61.1 h11==0.9.0 httpcore==0.10.2 httptools==0.1.1 httpx==0.14.1 idna==2.10 ipython==7.17.0 ipython-genutils==0.2.0 jedi==0.17.2 mangum==0.9.2 parso==0.7.1 pexpect==4.8.0 pickleshare==0.7.5 prompt-toolkit==3.0.6 ptyprocess==0.6.0 pydantic==1.6.1 Pygments==2.6.1 requests==2.24.0 rfc3986==1.4.0 six==1.15.0 sniffio==1.1.0 starlette==0.13.6 traitlets==4.3.3 typing-extensions==3.7.4.2 urllib3==1.25.10 uvicorn==0.11.8 uvloop==0.14.0 wcwidth==0.2.5 websockets==8.1 ```
2020/09/02
[ "https://Stackoverflow.com/questions/63710044", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1694699/" ]
The issue lies within the `mangum` adapter expecting input similar to the `event` content specified by [AWS API Gateway shown here](https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html). You'll see that there's a `requestContext` dictionary there that the Mangum adapter seems to strictly require to function. If your AWS Lambda event doesn't contain that key, then you'll need to add a placeholder field or construct a new context dictionary for it to look for. This can be done by creating a custom handler [as shown within this part of the Mangum docs](https://mangum.io/adapter/). If you don't particularly care about what the adapter reads as the context then you can get a working version going with the following: ``` def handler(event, context): event['requestContext'] = {} # Adds a dummy field; mangum will process this fine asgi_handler = Mangum(app) response = asgi_handler(event, context) return response ```
You need to provide at least the **minimal** `Event data`, like in the example below, when you invoke a FastAPI-based lambda function (for example via AWS console Lambda -> Test Event): ``` { "resource": "/", "path": "/api/v1/test/", "httpMethod": "GET", "requestContext": { }, "multiValueQueryStringParameters": null } ``` Please see details about the **Event format** in <https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html#apigateway-example-event> The need comes from the `mangum` adapter for FastAPI, explained in the [answer](https://stackoverflow.com/a/64403553/392118).
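The placeholder trick from the accepted answer can be illustrated without AWS or Mangum installed; `fake_asgi_handler` below is a hypothetical stand-in that fails on the same missing key as the traceback in the question:

```python
def fake_asgi_handler(event, context):
    """Stand-in for Mangum: raises KeyError exactly like the traceback
    when 'requestContext' is absent from the event."""
    if "eventType" in event["requestContext"]:
        return {"statusCode": 400}
    return {"statusCode": 200}

def handler(event, context):
    event.setdefault("requestContext", {})  # the dummy field from the answer
    return fake_asgi_handler(event, context)

print(handler({}, None))  # -> {'statusCode': 200}
```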
12,636
24,478,623
I trying to setup virtualenvwrapper in GitBash (Windows 7). I write the next lines: `1 $ export WORKON_HOME=$HOME/.virtualenvs 2 $ export MSYS_HOME=/c/msys/1.0 3 $ source /usr/local/bin/virtualenvwrapper.sh` And the last line give me an error: `source /usr/local/bin/virtualenvwrapper.sh sh.exe: /usr/local/bin/virtualenvwrapper.sh: No such file or directory` I find, where on my drive is `virtualenvwrapper.sh` and change directory name. On my computer it's `/c/Python27/Scripts/virtualenvwrapper.sh`. When I again run command `$source /c/Python27/Scripts/virtualenvwrapper.sh` I get the next ERROR message: `sh.exe":mktemp:command not found ERROR: virtualenvwrapper could not create a temporary file name` I check my enviroment variable: `C:\python27\;C:\python27\scripts\;C:\python27\scripts\virtualenvwrapper.sh\;C:\msys;C:\Program Files (x86)\Git\cmd;C:\Program Files (x86)\Git\bin\` I don't know where i made a mistake
2014/06/29
[ "https://Stackoverflow.com/questions/24478623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I believe it has something to do with the "image" input. have you considered using a button element instead? ``` <button type="submit" name="someName" value="someValue"><img src="someImage.png" alt="SomeAlternateText"></button> ```
Try this: ``` <form action="dologin.php" method="post"> <input type="text" name="email" class="form-control" placeholder="Username"> <input type="password" name="password" class="form-control" placeholder="Password"> <input type="image" src="img/login.png" type="submit" alt="Login"> </form> ``` And in dologin.php: ``` $email = $_POST['email']; echo $email; ```
12,637
38,316,477
so i'm trying to convert a bash script, that i wrote, into python, that i'm learning, and the python equivalent of the bash whois just can't give me the answer that i need. this is what i have in bash- ``` whois 'ip address' | grep -i abuse | \ grep -o [[:alnum:]]*\@[[:alnum:]]*\.[[:alpha:]]* | sort -u ``` and it works perfectly. when trying to do something similar in python(3.5.2)- ``` IPWhois('ip address').lookup_whois() ``` it's giving me a dictionary with the object that i'm looking for in the first value about half way through the string. i have tried to put it into `str(dict).splice('\n')[index]`, yet with each iteration the index changes so i can't put it into a script like that. also the bash whois can do both ip addresses and domain names with out having to convert. i think that i have figured out the conversion, yet trying to grab the results from the IPWhois is giving me a pain in the butt. i could call the bash `whois` from `subprocess.call`, yet would like to figure out how to do it in python. i know that i can grab part of it with `re.configure`, yet again the return changes so `re.compile` would have to change each time also. do i keep trying or do i just stick with the bash script that works so well? i have already written most of the python script and the things that i have to look up are helping me learn. any ideas? you can see the bash script [here](https://drive.google.com/open?id=0BwhDqxZzf5XHWk9nSTdLc2FiX28) thanks, em
2016/07/11
[ "https://Stackoverflow.com/questions/38316477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6576450/" ]
This should do what you are looking for. It works correctly in the snippet. ```js window.onload = onPageLoad(); function onPageLoad() { document.getElementById("1403317").checked = true; } ``` ```html <input type="checkbox" id="1403317"> ```
Try this one, maybe: ``` $(document).ready(function() { $('#1403317').prop('checked', true); }); ```
12,638
49,771,589
I just made the transition from Spyder to VScode for my python endeavours. Is there a way to run individual lines of code? That's how I used to do my on-the-spot debugging, but I can't find an option for it in VScode and really don't want to keep setting and removing breakpoints. Thanks.
2018/04/11
[ "https://Stackoverflow.com/questions/49771589", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4482349/" ]
If you highlight some code, you can right-click or run the command, `Run Selection/Line in Python Terminal`. We are also planning on [implementing Ctrl-Enter](https://github.com/Microsoft/vscode-python/issues/1349) to do the same thing and looking at [Ctrl-Enter executing the current line](https://github.com/Microsoft/vscode-python/issues/480).
One way you can do it is through the Integrated Terminal. Here is the guide to open/use it: <https://code.visualstudio.com/docs/editor/integrated-terminal> After that, type `python3` or `python`, depending on which version you are using. Then copy and paste the fragment of code you want to run into the terminal. It now has the same functionality as the console in Spyder. Hope this helps.
12,639
61,394,928
I have just started using Node.js, and I don't know how to get user input. I am looking for the JavaScript counterpart of the python function `input()` or the C function `gets`. Thanks.
2020/04/23
[ "https://Stackoverflow.com/questions/61394928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13262787/" ]
This also works well: ``` const fs = require('fs'); process.stdin.resume(); process.stdin.setEncoding('utf-8'); let inputString = ''; let currentLine = 0; process.stdin.on('data', inputStdin => { inputString += inputStdin; }); process.stdin.on('end', _ => { inputString = inputString.replace(/\s*$/, '') .split('\n') .map(str => str.replace(/\s*$/, '')); main(); }); function readLine() { return inputString[currentLine++]; } function main() { const ws = fs.createWriteStream(process.env.OUTPUT_PATH); const n = parseInt(readLine(), 10); // Read and integer like this // Read an array like this const c = readLine().split(' ').map(cTemp => parseInt(cTemp, 10)); let result; // result of some calculation as an example ws.write(result + "\n"); ws.end(); } ``` Here my process.env.OUTPUT\_PATH is set, if yours is not, you can use something else.
First, install prompt-sync: `npm i prompt-sync` Then in your JavaScript file:

```
const ps = require("prompt-sync");
const prompt = ps();
let name = prompt("What is your name? ");
console.log("Hello, ", name);
```

That's it. You can improve this by adding `{sigint: true}` when initialising `ps`. With this configuration, pressing Ctrl + C makes the Node app stop at that point. With the above code, if you exit (Ctrl + C) when you're asked for the name, you will see `Hello, null`, but you will not get that with the change below:

```
const ps = require("prompt-sync");
const prompt = ps({sigint: true}); // Change on this line.
let name = prompt("What is your name? ");
console.log("Hello, ", name);
```

Of course, you can simplify the above code:

```
const prompt = require("prompt-sync")({sigint: true});
let name = prompt("What is your name? ");
console.log("Hello, ", name);
```
12,647
46,776,264
My application requirement is to use our LDAP directory to authenticate a user based on their network login. I set up the LDAP connection correctly using ldap3 in my system.py. I'm able to bind to a user and verify credentials in Python, outside of Django. Which authentication backend would I set Django up to use to make my login function as I want?
2017/10/16
[ "https://Stackoverflow.com/questions/46776264", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8570299/" ]
This is a broad question and I am not sure of your experience with Django, so without more information I would suggest trying [this](https://github.com/etianen/django-python3-ldap) or [this](http://fle.github.io/combine-ldap-and-classical-authentication-in-django.html).
I am running Python 3 and have used the excellent `django-python3-ldap` package with both OpenLDAP and Active Directory from Django 1.6 through 2.0. You can find it here: <https://github.com/etianen/django-python3-ldap> It is a well maintained package that we've been able to use as we upgrade Django from version to version.
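If it helps, here is a hedged sketch of the settings that package expects. The setting names are from the django-python3-ldap README as I remember them, and the host, base DN, and field mapping are placeholder assumptions to adapt; verify everything against the current docs.

```python
# settings.py sketch for django-python3-ldap (names as recalled from its
# README; host/base DN/field mapping below are placeholder assumptions).
AUTHENTICATION_BACKENDS = [
    "django_python3_ldap.auth.LDAPBackend",
    "django.contrib.auth.backends.ModelBackend",  # keep local accounts working
]

LDAP_AUTH_URL = "ldap://ldap.example.com:389"          # assumed directory host
LDAP_AUTH_SEARCH_BASE = "ou=people,dc=example,dc=com"  # assumed base DN
LDAP_AUTH_OBJECT_CLASS = "inetOrgPerson"
LDAP_AUTH_USER_FIELDS = {
    "username": "uid",
    "email": "mail",
}
```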
12,657
49,352,889
I installed `fiona` as follows: ``` conda install -c conda-forge fiona ``` It installed without any errors. When I try to import `fiona`, I get the following error: Traceback (most recent call last): ``` File "<stdin>", line 1, in <module> File "/home/name/anaconda3/lib/python3.6/site-packages/fiona/__init__.py", line 69, in <module> from fiona.collection import Collection, BytesCollection, vsi_path File "/home/name/anaconda3/lib/python3.6/site-packages/fiona/collection.py", line 9, in <module> from fiona.ogrext import Iterator, ItemsIterator, KeysIterator ImportError: /home/name/anaconda3/lib/python3.6/site-packages/fiona/../../.././libkea.so.1.4.7: undefined symbol: _ZN2H56H5FileC1ERKSsjRKNS_17FileCreatPropListERKNS_15FileAccPropListE ``` Incase it helps with diagnosis, here is the output of `conda list`: ``` _ipyw_jlab_nb_ext_conf 0.1.0 py36he11e457_0 alabaster 0.7.10 py36h306e16b_0 anaconda custom py36hbbc8b67_0 anaconda-client 1.6.9 py36_0 anaconda-navigator 1.7.0 py36_0 anaconda-project 0.8.2 py36h44fb852_0 asn1crypto 0.24.0 py36_0 astroid 1.6.1 py36_0 astropy 2.0.3 py36h14c3975_0 attrs 17.4.0 py36_0 automat 0.6.0 py36_0 conda-forge Automat 0.6.0 <pip> babel 2.5.3 py36_0 backports 1.0 py36hfa02d7e_1 backports.shutil_get_terminal_size 1.0.0 py36hfea85ff_2 beautifulsoup4 4.6.0 py36h49b8c8c_1 bitarray 0.8.1 py36h14c3975_1 bkcharts 0.2 py36h735825a_0 blaze 0.11.3 py36h4e06776_0 bleach 2.1.2 py36_0 bokeh 0.12.13 py36h2f9c1c0_0 boost 1.66.0 py36_1 conda-forge boost-cpp 1.66.0 1 conda-forge boto 2.48.0 py36h6e4cd66_1 bottleneck 1.2.1 py36haac1ea0_0 bzip2 1.0.6 h9a117a8_4 ca-certificates 2018.1.18 0 conda-forge cairo 1.14.12 h77bcde2_0 certifi 2018.1.18 py36_0 conda-forge cffi 1.11.4 py36h9745a5d_0 chardet 3.0.4 py36h0f667ec_1 click 6.7 py36h5253387_0 click-plugins 1.0.3 py36_0 conda-forge cligj 0.4.0 py36_0 conda-forge cloudpickle 0.5.2 py36_1 clyent 1.2.2 py36h7e57e65_1 colorama 0.3.9 py36h489cec4_0 conda 4.3.34 py36_0 conda-forge conda-build 3.4.1 py36_0 conda-env 
2.6.0 0 conda-forge conda-verify 2.0.0 py36h98955d8_0 constantly 15.1.0 py_0 conda-forge contextlib2 0.5.5 py36h6c84a62_0 cryptography 2.1.4 py36hd09be54_0 cssselect 1.0.3 py_0 conda-forge curl 7.58.0 h84994c4_0 cycler 0.10.0 py36h93f1223_0 cython 0.27.3 py36h1860423_0 cytoolz 0.9.0 py36h14c3975_0 dask 0.16.1 py36_0 dask-core 0.16.1 py36_0 datashape 0.5.4 py36h3ad6b5c_0 dbus 1.12.2 hc3f9b76_1 decorator 4.2.1 py36_0 distributed 1.20.2 py36_0 docutils 0.14 py36hb0f60f5_0 entrypoints 0.2.3 py36h1aec115_2 et_xmlfile 1.0.1 py36hd6bccc3_0 expat 2.2.5 he0dffb1_0 fastcache 1.0.2 py36h14c3975_2 filelock 2.0.13 py36h646ffb5_0 fiona 1.7.11 py36_3 conda-forge flask 0.12.2 py36hb24657c_0 flask-cors 3.0.3 py36h2d857d3_0 fontconfig 2.12.4 h88586e7_1 freetype 2.8 hab7d2ae_1 freexl 1.0.5 0 conda-forge gdal 2.2.2 py36hc209d97_1 geos 3.6.2 1 conda-forge get_terminal_size 1.0.0 haa9412d_0 gevent 1.2.2 py36h2fe25dc_0 giflib 5.1.4 0 conda-forge glib 2.53.6 h5d9569c_2 glob2 0.6 py36he249c77_0 gmp 6.1.2 h6c8ec71_1 gmpy2 2.0.8 py36hc8893dd_2 graphite2 1.3.10 hf63cedd_1 greenlet 0.4.12 py36h2d503a6_0 gst-plugins-base 1.12.4 h33fb286_0 gstreamer 1.12.4 hb53b477_0 h5py 2.7.1 py36h3585f63_0 harfbuzz 1.7.4 hc5b324e_0 hdf4 4.2.13 0 conda-forge hdf5 1.10.1 h9caa474_1 heapdict 1.0.0 py36_2 html5lib 1.0.1 py36h2f9c1c0_0 hyperlink 17.3.1 py_0 conda-forge icu 58.2 h9c2bf20_1 idna 2.6 py36h82fb2a8_1 imageio 2.2.0 py36he555465_0 imagesize 0.7.1 py36h52d8127_0 incremental 17.5.0 py_0 conda-forge intel-openmp 2018.0.0 hc7b2577_8 ipykernel 4.8.0 py36_0 ipython 6.2.1 py36h88c514a_1 ipython_genutils 0.2.0 py36hb52b0d5_0 ipywidgets 7.1.1 py36_0 isort 4.2.15 py36had401c0_0 itsdangerous 0.24 py36h93cc618_1 jbig 2.1 hdba287a_0 jdcal 1.3 py36h4c697fb_0 jedi 0.11.1 py36_0 jinja2 2.10 py36ha16c418_0 jpeg 9b h024ee3a_2 json-c 0.12.1 0 conda-forge jsonschema 2.6.0 py36h006f8b5_0 jupyter 1.0.0 py36_4 jupyter_client 5.2.2 py36_0 jupyter_console 5.2.0 py36he59e554_1 jupyter_core 4.4.0 py36h7c827e3_0 jupyterlab 0.31.5 
py36_0 jupyterlab_launcher 0.10.2 py36_0 kealib 1.4.7 4 conda-forge krb5 1.14.2 0 conda-forge lazy-object-proxy 1.3.1 py36h10fcdad_0 libcurl 7.58.0 h1ad7b7a_0 ``` (...) Any ideas what may be the problem?
2018/03/18
[ "https://Stackoverflow.com/questions/49352889", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9400561/" ]
I guess this problem comes from conflicts with stuff already installed in the Anaconda distribution. My inelegant workaround is: ``` conda install -c conda-forge geopandas conda remove geopandas fiona pip install geopandas fiona ```
Because I did not want to uninstall geopandas, I solved the issue by upgrading fiona via pip ``` pip install --upgrade fiona ```
12,658
56,378,783
I'm doing a supposedly simple Python challenge a friend gave me involving an elevator and the logic behind its movements. Everything was going well until I got to the point where I had to write how to determine if the elevator could hit a called floor en route to its next queued floor.

```py
def floorCompare(currentFloor,destinationFloor,calledFloor):
    if calledFloor > currentFloor and calledFloor < destinationFloor:
        return(True)
    elif calledFloor < currentFloor and calledFloor > destinationFloor:
        return(True)
    else:
        return(False)

floor = "1"
doors = "closed"
queue = []

def elevator(): # function defines how the elevator should move
    print("The Elevator is on floor: 1. The doors are "+doors+".")
    for x in range(int(input("How many floors need to be visited? "))):
        callFloor = int(input("Floor to call the elevator to: "))
        queue.append(callFloor)
        if callFloor > 10 or callFloor < 1:
            raise Exception(str(callFloor)+" is not a valid floor.")
    if queue[0] == 1:
        del queue[0]
        print("The elevator has arrived on floor 1, and the doors are open.")
    print("The queue of floors to visit is...",queue)
    for x in queue:
        print("The elevator's doors are closed and it's moving to floor:",x)
        floor = str(x)
        print("...")
        print()
        print("The elevator has arrived on floor "+floor+", and the doors are open.")
        queue.remove(x)
        addFloor = int(input("Floor to call the elevator to: "))
        if addFloor > 10 or addFloor < 1:
            raise Exception(str(addFloor)+" is not a valid floor.")
        print(queue)
        if floorCompare(int(floor), int(queue[0]), int(addFloor)) == True:
            print("The elevator can hit this stop en route to its next one.")
        else:
            print("The elevator must hit this stop separately.")
            print("Continuing Queue")

elevator()
```

So in the for loop there is a nested if/else, which I would assume would be iterated along with the rest of the code in the for loop.
However, when I run the code, upon reaching the if/else, it breaks out of the for loop and continues on its merry way, disregarding any further iterations that need to be done in the array. What's going on here? When I run the code with a basic trial set of floors, here's what I get as output:

```
The Elevator is on floor: 1. The doors are closed.
How many floors need to be visited? 4
Floor to call the elevator to: 3
Floor to call the elevator to: 6
Floor to call the elevator to: 9
Floor to call the elevator to: 10
The queue of floors to visit is... [3, 6, 9, 10]
The elevator's doors are closed and it's moving to floor: 3
...
The elevator has arrived on floor 3, and the doors are open.
Floor to call the elevator to: 7
[6, 9, 10]
The elevator must hit this stop separately.
The elevator's doors are closed and it's moving to floor: 9
...
The elevator has arrived on floor 9, and the doors are open.
Floor to call the elevator to: 3
[6, 10]
The elevator must hit this stop separately.

Process finished with exit code 0
```
2019/05/30
[ "https://Stackoverflow.com/questions/56378783", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10760049/" ]
I think your problem lies in the use of a for loop in conjunction with the `queue.remove()` function. The `for x in queue:` loop runs into problems when you edit the list while it is being iterated. I would recommend using `while queue:` instead and setting `x` to the first element.

```
while queue:
    x = queue[0]
    print("The elevator's doors are closed and it's moving to floor:",x)
    floor = str(x)
    print("...")
    print()
    print("The elevator has arrived on floor "+floor+", and the doors are open.")
    queue.remove(x)
    addFloor = int(input("Floor to call the elevator to: "))
    if addFloor > 10 or addFloor < 1:
        raise Exception(str(addFloor)+" is not a valid floor.")
    print(queue)
    # guard with `queue and`: the queue may be empty after the remove above
    if queue and floorCompare(int(floor), int(queue[0]), int(addFloor)):
        print("The elevator can hit this stop en route to its next one.")
    else:
        print("The elevator must hit this stop separately.")
```
The reason it skips floor 6 is that you remove items from the list while it is being iterated.

```
l=[3,6,9,10,14]
for i in l:
    print(i)
```

Output:

```
3
6
9
10
14
```

```
for i in l:
    print(i)
    l.remove(i)
```

Output:

```
3
9
14
```
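A minimal sketch of another common fix: iterate over a shallow copy, so removals can no longer shift the iteration.

```python
queue = [3, 6, 9, 10, 14]
visited = []

for x in queue[:]:       # queue[:] is a shallow copy; list(queue) works too
    visited.append(x)
    queue.remove(x)      # mutates the real queue, not the copy being iterated

print(visited)  # [3, 6, 9, 10, 14]
print(queue)    # []
```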
12,659
69,869,854
Can I get the `__doc__` string of the main script? Here is the starting script, which would be run from the command line as `python a.py`.

module a.py

```py
import b
b.func()
```

module b.py

```py
def func():
    ???.__doc__
```

How can I get the calling module, as an object? I am not asking how to get the file name as a string; I know how to retrieve the file's name from the stack trace. I don't want to retrieve the `__doc__` string by parsing manually. Also, I don't think I can just import it with `m = __import__(a)` because of a circular import loop.
2021/11/07
[ "https://Stackoverflow.com/questions/69869854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/84196/" ]
a.py

```
"""
Foo bar
"""
import b

if __name__ == '__main__':
    b.show_your_docs()
```

b.py

```
def show_your_docs():
    name = caller_name(1)
    print(__import__(name).__doc__)
```

Where `caller_name` is the code from this [gist](https://gist.github.com/techtonik/2151727#gistcomment-2333747). The weakness of this approach, though, is that it gets a string representation of the module name and re-imports it, instead of getting a reference to the module (type).
This is the full solution from @MatthewMartin's accepted answer (with the `sys` and `inspect` imports it needs added at the top):

```
import sys
import inspect

def show_your_docs():
    name = caller_name(1)
    print(__import__(name).__doc__)

def caller_docstring(level=1):
    name = caller_name(level)
    return __import__(name).__doc__

def caller_name(skip=2):
    def stack_(frame):
        framelist = []
        while frame:
            framelist.append(frame)
            frame = frame.f_back
        return framelist

    stack = stack_(sys._getframe(1))
    start = 0 + skip
    if len(stack) < start + 1:
        return ''
    parentframe = stack[start]
    name = []
    module = inspect.getmodule(parentframe)
    if module:
        name.append(module.__name__)
    if 'self' in parentframe.f_locals:
        name.append(parentframe.f_locals['self'].__class__.__name__)
    codename = parentframe.f_code.co_name
    if codename != '<module>':  # top level usually
        name.append(codename)  # function or a method
    del parentframe
    return ".".join(name)
```

```
text = caller_docstring(2)
```
12,662
18,587,208
Can anyone tell me how to simulate a touch on an ImageButton using AndroidViewClient in Python?
2013/09/03
[ "https://Stackoverflow.com/questions/18587208", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2741623/" ]
Replace ``` android:src="res/drawable-hdpi/capture.PNG" ``` with ``` android:src="@drawable/capture" ``` Hope it helps.
Change

```
android:src="res/drawable-hdpi/capture.PNG"
```

to

```
android:src="@drawable/capture"
```
12,663
28,180,511
I wrote a simple triple quote print statement. See below. For the OVER lineart, it gets truncated into two different lines (when you copy paste this into the interpreter.) But, if I insert a space or any at the end of each of the lines, then it prints fine. Any idea why this behavior in python. I am inclined to think this is due to \ and / at the end of the lines, but I cannot find a concrete reason. I tried removing them and and have some observations but would like a clear reasoning.. ``` print( """ _____ ____ __ __ ______ / ____| / _ | / | /| | ____| | | / / | | / /| /| | | |___ | | _ / /__| | / / |_/| | | ___| | |__| | / / | | / / | | | |____ \_____/ /_/ |_| /_/ |_| |______| ______ _ _ ______ _____ / __ \ | | / / | ____| | _ \ | | | | | | / / | |___ | |_| | | | | | | | / / | ___| | _ / | |__| | | |_/ / | |____ | | \ \ \______/ |____/ |______| |_| \_\ """ ) ```
2015/01/27
[ "https://Stackoverflow.com/questions/28180511", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4500493/" ]
You have `\` backslash escapes in your string, one each on the last two lines as well as on the first line spelling *over*, all three part of the letter *R*. These signal to Python that you wanted to *ignore* the newline right after it. Either use a space right after each `\` backslash at the end of a line, *double* the backslashes to escape the escape, or use a *raw* string by prefixing the triple quote with a `r`: ``` print( r""" _____ ____ __ __ ______ / ____| / _ | / | /| | ____| | | / / | | / /| /| | | |___ | | _ / /__| | / / |_/| | | ___| | |__| | / / | | / / | | | |____ \_____/ /_/ |_| /_/ |_| |______| ______ _ _ ______ _____ / __ \ | | / / | ____| | _ \ | | | | | | / / | |___ | |_| | | | | | | | / / | ___| | _ / | |__| | | |_/ / | |____ | | \ \ \______/ |____/ |______| |_| \_\ """ ) ``` Raw strings don't support backslash escapes, except for escaped quotes (`\"` and `\'`) which would be included *with the backslash*.
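Both mechanisms (the trailing-backslash line continuation and the raw-string prefix) are easy to check in isolation, independent of the art:

```python
# 1. A backslash at the end of a source line escapes the newline:
plain = "one \
two"
assert plain == "one two"          # the line break is gone

# 2. A raw string keeps every backslash literally:
art = r"|_| \_\ "
assert art.count("\\") == 2        # both backslashes survive

# 3. Doubling also works inside a normal string:
doubled = "a\\b"
assert len(doubled) == 3           # a, backslash, b
```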
The problem is with `\` at the end of line so you need to escape them. For that i use another backslash. ``` print( """ _____ ____ __ __ ______ / ____| / _ | / | /| | ____| | | / / | | / /| /| | | |___ | | _ / /__| | / / |_/| | | ___| | |__| | / / | | / / | | | |____ \_____/ /_/ |_| /_/ |_| |______| ______ _ _ ______ _____ / __ \ | | / / | ____| | _ \\ | | | | | | / / | |___ | |_| | | | | | | | / / | ___| | _ / | |__| | | |_/ / | |____ | | \ \\ \______/ |____/ |______| |_| \_\\ """ ) ```
12,667
54,914,306
This code fails when it runs:

```
import datetime
import subprocess

startdate = datetime.datetime(2010,4,9)
for i in range(1):
    startdate += datetime.timedelta(days=1)

enddate = datetime.datetime(2010,4,10)
for i in range(1):
    enddate += datetime.timedelta(days=1)

subprocess.call("sudo mam-list-usagerecords -s \"" + str(startdate) + "\" -e \"" + str(enddate) + " --format csv --full")
```

The program has these errors when it runs:

```
File "QuestCommand.py", line 12, in <module>
    subprocess.call("sudo mam-list-usagerecords -s \"" + str(startdate) + "\" -e \"" + str(enddate) + " --format csv --full")
File "/usr/lib64/python2.7/subprocess.py", line 524, in call
    return Popen(*popenargs, **kwargs).wait()
File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
```

I have run this code multiple times in other ways, changing quotes and whatnot. I am fairly new to system calls and utilizing an HPC allocation database. I am stuck, and if anyone can help me with resolving this issue that would be very helpful. Thank you!
2019/02/27
[ "https://Stackoverflow.com/questions/54914306", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11035382/" ]
When possible, pass a *list* containing your command name and its arguments. ``` subprocess.call(["sudo", "mam-list-usagerecords", "-s", str(startdate), "-e", str(enddate), "--format", "csv", "--full"]) ``` This avoids the need to even know how the shell will process a command line.
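A quick sketch of the point (Python 3 shown, although the traceback above is 2.7, where `subprocess.check_output` plays a similar role; `echo` stands in for `mam-list-usagerecords`): an argument containing spaces needs no quoting at all when it is its own list element.

```python
import subprocess

# One list element per argument: no shell, no quoting rules to get wrong.
result = subprocess.run(
    ["echo", "2010-04-10 00:00:00"],  # the space survives inside one argument
    capture_output=True,
    text=True,
)
print(result.stdout)  # 2010-04-10 00:00:00
```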
When I first started using some of the subprocess methods I ran into some of the same issues. Try running your code like this: ``` import datetime import subprocess import shlex startdate = datetime.datetime(2010, 4, 9) + datetime.timedelta(days=1) enddate = datetime.datetime(2010, 4, 10) + datetime.timedelta(days=1) command = ( "sudo mam-list-usagerecords -s " + str(startdate) + "-e" + str(enddate) + " --format csv --full" ) print(command) print(type(command)) print(shlex.split(command)) subprocess.call(shlex.split(command)) ``` OUTPUT: > > sudo mam-list-usagerecords -s 2010-04-10 00:00:00-e2010-04-11 00:00:00 --format csv --full > > > class 'str' > > > ['sudo', 'mam-list-usagerecords', '-s', '2010-04-10', '00:00:00-e2010-04-11', '00:00:00', '--format', 'csv', '--full'] > > > (Command output redacted.) When the kwarg `shell` is set to `False` which is the default, the command may have to be a collection which is what [shlex.split](https://docs.python.org/3/library/shlex.html#shlex.split) does. > > args should be a sequence of program arguments or else a single string. By default, the program to execute is the first item in args if args is a sequence. If args is a string, the interpretation is platform-dependent and described below. See the shell and executable arguments for additional differences from the default behavior. Unless otherwise stated, it is recommended to pass args as a sequence. > > > [Popen constructor](https://docs.python.org/3/library/subprocess.html#popen-constructor) This issue used to confuse me to no end until I found this in the docs.
12,668
34,999,726
Imagine I have an initializer with optional parameter ``` def __init__(self, seed = ...): ``` Now if parameter is not specified I want to provide a default value. But the seed is hard to calculate, so I have a class method that suggests the seed based on some class variables ``` MyClass.seedFrom(...) ``` Now how can I call it from that place? If I use `self`: ``` def __init__(self, seed = self.__class__.seedFrom(255)): ``` I definitely don't know what is self at that point. If I use the class name directly (which I hate to do): ``` def __init__(self, seed = MyClass.seedFrom(255)): ``` it complains that > > name 'MyClass' is not defined > > > I'm also interested to learn the pythonic way of doing this. And I hope that pythonic way is not hardcoding stuff and making it nil by default and checking it later…
2016/01/25
[ "https://Stackoverflow.com/questions/34999726", "https://Stackoverflow.com", "https://Stackoverflow.com/users/982238/" ]
In case you only need to call `seedFrom` once, you can do so when `__init__` is defined. ``` class MyClass: # Defining seedFrom as a function outside # the class is also an option. Defining it # as a class method is not, since you still # have the problem of not having a class to # pass as the first argument when it is time # to declare __init__ @staticmethod def seedFrom(): ... def __init__(self, seed=seedFrom()): ... ```
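A sketch of why this works, and its main caveat: the default value is evaluated exactly once, when the `def` statement runs, and then reused for every call (names below are illustrative, not from the question).

```python
calls = []

def expensive_seed():
    calls.append("called")
    return 255

def init(seed=expensive_seed()):   # runs now, at definition time
    return seed

assert calls == ["called"]   # already evaluated, before any init() call
assert init() == 255
assert init() == 255
assert calls == ["called"]   # still only once: the value is cached
```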
If you must do this in `__init__` and want the seed method to be on the class, then you can make it a class method, e.g. as follows:

```
class SomeClass(object):
    defaultSeed = 255

    @classmethod
    def seedFrom(cls, seed):
        pass  # some seed

    def __init__(self, seed=None):
        self.seedFrom(seed if seed is not None else self.defaultSeed)
        # etc
```
12,669
4,011,526
I am a nurse and I know python but I am not an expert, just used it to process DNA sequences We got hospital records written in human languages and I am supposed to insert these data into a database or csv file but they are more than 5000 lines and this can be so hard. All the data are written in a consistent format let me show you an example ``` 11/11/2010 - 09:00am : He got nausea, vomiting and died 4 hours later ``` I should get the following data ``` Sex: Male Symptoms: Nausea Vomiting Death: True Death Time: 11/11/2010 - 01:00pm ``` Another example ``` 11/11/2010 - 09:00am : She got heart burn, vomiting of blood and died 1 hours later in the operation room ``` And I get ``` Sex: Female Symptoms: Heart burn Vomiting of blood Death: True Death Time: 11/11/2010 - 10:00am ``` the order is not consistent by when I say in ....... so in is a keyword and all the text after is a place until i find another keyword At the beginnning He or She determine sex, got ........ whatever follows is a group of symptoms that i should split according to the separator which can be a comma, hypen or whatever but it's consistent for the same line died ..... hours later also should get how many hours, sometimes the patient is stil alive and discharged ....etc That's to say we have a lot of conventions and I think if i can tokenize the text with keywords and patterns i can get the job done. So please if you know a useful function/modules/tutorial/tool for doing that preferably in python (if not python so a gui tool would be nice) Some few information: ``` there are a lot of rules to express various medical data but here are few examples - Start with the same date/time format followed by a space followd by a colon followed by a space followed by He/She followed space followed by rules separated by and - Rules: * got <symptoms>,<symptoms>,.... * investigations were done <investigation>,<investigation>,<investigation>,...... * received <drug or procedure>,<drug or procedure>,..... 
* discharged <digit> (hour|hours) later * kept under observation * died <digit> (hour|hours) later * died <digit> (hour|hours) later in <place> other rules do exist but they follow the same idea ```
2010/10/25
[ "https://Stackoverflow.com/questions/4011526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/485991/" ]
This uses [dateutil](http://labix.org/python-dateutil) to parse the date (e.g. '11/11/2010 - 09:00am'), and [parsedatetime](http://code.google.com/p/parsedatetime/) to parse the relative time (e.g. '4 hours later'): ``` import dateutil.parser as dparser import parsedatetime.parsedatetime as pdt import parsedatetime.parsedatetime_consts as pdc import time import datetime import re import pprint pdt_parser = pdt.Calendar(pdc.Constants()) record_time_pat=re.compile(r'^(.+)\s+:') sex_pat=re.compile(r'\b(he|she)\b',re.IGNORECASE) death_time_pat=re.compile(r'died\s+(.+hours later).*$',re.IGNORECASE) symptom_pat=re.compile(r'[,-]') def parse_record(astr): match=record_time_pat.match(astr) if match: record_time=dparser.parse(match.group(1)) astr,_=record_time_pat.subn('',astr,1) else: sys.exit('Can not find record time') match=sex_pat.search(astr) if match: sex=match.group(1) sex='Female' if sex.lower().startswith('s') else 'Male' astr,_=sex_pat.subn('',astr,1) else: sys.exit('Can not find sex') match=death_time_pat.search(astr) if match: death_time,date_type=pdt_parser.parse(match.group(1),record_time) if date_type==2: death_time=datetime.datetime.fromtimestamp( time.mktime(death_time)) astr,_=death_time_pat.subn('',astr,1) is_dead=True else: death_time=None is_dead=False astr=astr.replace('and','') symptoms=[s.strip() for s in symptom_pat.split(astr)] return {'Record Time': record_time, 'Sex': sex, 'Death Time':death_time, 'Symptoms': symptoms, 'Death':is_dead} if __name__=='__main__': tests=[('11/11/2010 - 09:00am : He got nausea, vomiting and died 4 hours later', {'Sex':'Male', 'Symptoms':['got nausea', 'vomiting'], 'Death':True, 'Death Time':datetime.datetime(2010, 11, 11, 13, 0), 'Record Time':datetime.datetime(2010, 11, 11, 9, 0)}), ('11/11/2010 - 09:00am : She got heart burn, vomiting of blood and died 1 hours later in the operation room', {'Sex':'Female', 'Symptoms':['got heart burn', 'vomiting of blood'], 'Death':True, 'Death Time':datetime.datetime(2010, 11, 11, 
10, 0), 'Record Time':datetime.datetime(2010, 11, 11, 9, 0)}) ] for record,answer in tests: result=parse_record(record) pprint.pprint(result) assert result==answer print ``` yields: ``` {'Death': True, 'Death Time': datetime.datetime(2010, 11, 11, 13, 0), 'Record Time': datetime.datetime(2010, 11, 11, 9, 0), 'Sex': 'Male', 'Symptoms': ['got nausea', 'vomiting']} {'Death': True, 'Death Time': datetime.datetime(2010, 11, 11, 10, 0), 'Record Time': datetime.datetime(2010, 11, 11, 9, 0), 'Sex': 'Female', 'Symptoms': ['got heart burn', 'vomiting of blood']} ``` Note: Be careful parsing dates. Does '8/9/2010' mean August 9th, or September 8th? Do all the record keepers use the same convention? If you choose to use dateutil (and I really think that's the best option if the date string is not rigidly structured) be sure to read the section on "Format precedence" in the [dateutil documentation](http://labix.org/python-dateutil) so you can (hopefully) resolve '8/9/2010' properly. If you can't guarantee that all the record keepers use the same convention for specifying dates, then the results of this script would have be checked manually. That might be wise in any case.
Here are some possible ways you can solve this:

1. **Using Regular Expressions** - Define them according to the patterns in your text. Match the expressions, extract the pattern, and repeat for all records. This approach needs a good understanding of the format the data is in & of course regular expressions :)
2. **String Manipulation** - This approach is relatively simpler. Again one needs a good understanding of the format the data is in. This is what I have done below.
3. **Machine Learning** - You could define all your rules & train a model on these rules. After this the model tries to extract data using the rules you provided. This is a lot more generic approach than the first two. Also the toughest to implement.

See if this works for you. It might need some adjustments.

```
new_file = open('parsed_file', 'w')

for rec in open("your_csv_file"):
    tmp = rec.split(' : ')
    date = tmp[0]
    reason = tmp[1]
    if reason[:2] == 'He':
        sex = 'Male'
        symptoms = reason.split(' and ')[0].split('He got ')[1]
    else:
        sex = 'Female'
        symptoms = reason.split(' and ')[0].split('She got ')[1]
    symptoms = [i.strip() for i in symptoms.split(',')]
    symptoms = '\n'.join(symptoms)
    if 'died' in rec:
        died = 'True'
    else:
        died = 'False'
    new_file.write("Sex: %s\nSymptoms: %s\nDeath: %s\nDeath Time: %s\n\n" % (sex, symptoms, died, date))
```

Each record is newline (`\n`) separated & since you did not mention it, I assume one patient record is separated from the next by two newlines (`\n\n`).

**LATER:** @Nurse what did you end up doing? Just curious.
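If the records stay as regular as the examples, each rule maps to a small regular expression. A hedged sketch for the `died <digit> (hour|hours) later [in <place>]` rule (the pattern and names are mine, not from the original system):

```python
import re

# Matches "died N hour(s) later", with an optional trailing "in <place>".
died_rule = re.compile(
    r"died\s+(\d+)\s+hours?\s+later(?:\s+in\s+(?P<place>.+?))?\s*$",
    re.IGNORECASE,
)

m = died_rule.search(
    "She got heart burn, vomiting of blood and died 1 hours later in the operation room"
)
assert m is not None
assert m.group(1) == "1"
assert m.group("place") == "the operation room"

m2 = died_rule.search("He got nausea, vomiting and died 4 hours later")
assert m2.group(1) == "4"
assert m2.group("place") is None   # no place given in this record
```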
12,670
49,100,167
So I'm optimizing a game-playing bot and have run out of optimizations in pure Python. Currently, most of the time is spent translating one game state into the next game state for the alpha-beta search. The current thinking is that I could speed this up by writing the state-transition code in C. My problem comes from trying to convert the Python State into a struct that C can operate on and back again. Currently, states are uniquely represented by a byte string:

```py
import itertools
import struct

BINCODE = struct.Struct('BBBBBBBBBBBBBBb')

class State:
    __slots__ = '_bstring'
    TOP = 1
    BOTTOM = 0

    def __init__(self, seeds=4, *, top=None, bottom=None, turn=0):
        top = top if top else [seeds] * 6 + [0]
        bottom = bottom if bottom else [seeds] * 6 + [0]
        self._bstring = BINCODE.pack(*itertools.chain(bottom, top), turn)

    @property
    def top():
        ...
```

The idea was that the state._bstring, which is conveniently already packed as binary data, could be turned nicely into a C struct similar to this:

```
struct State {
    unsigned int bottom[7];
    unsigned int top[7];
    int turn;
}
```

which my C code could operate on, generating the resulting C State as new binary data to be slotted directly into a new Python `State` object. However, I can't seem to find any information about how to go about this. Almost all the information I can find is about packing and unpacking C data from a file. I've considered using `PyObject_GetBuffer` on the bytes object, but the game logic is pretty complex and I'd prefer to deal with the data as a struct rather than an array. Moreover, I'd like to reduce the amount of copying to a minimum. The other option I looked into was using a `PyCapsule` object defined in C as the new State for Python, but I would lose all of the State class's Python-specific functionality. I'd really rather keep changes to the Python code to an absolute minimum, as many of the previous Python optimizations are dependent on the data format.
Cython doesn't seem to have a way to coerce binary data into a C struct pointer, and re-writing `State` to be numba-compatible would lose crucial functionality like unique hashing. It seems like there should be a fairly straightforward way to do this, but I can't seem to find it. Any and all help would be appreciated.
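For reference, the bytes-to-struct round trip described here can be sketched in pure Python with `ctypes`. The field layout below mirrors the existing `struct.Struct('BBBBBBBBBBBBBBb')` packing and is only an assumption about what the C side would use (the C struct shown uses `unsigned int`, which would change the sizes):

```python
import ctypes

class CState(ctypes.Structure):
    # Mirrors 'BBBBBBBBBBBBBBb': 14 unsigned bytes + 1 signed byte = 15 bytes.
    # This layout is an assumption chosen to match the Python packing, not
    # the unsigned-int struct sketched above.
    _pack_ = 1
    _fields_ = [
        ("bottom", ctypes.c_uint8 * 7),
        ("top", ctypes.c_uint8 * 7),
        ("turn", ctypes.c_int8),
    ]

raw = bytes([4] * 6 + [0] + [4] * 6 + [0] + [0])  # what BINCODE.pack produces
state = CState.from_buffer_copy(raw)              # bytes -> struct view

assert list(state.bottom) == [4, 4, 4, 4, 4, 4, 0]
assert state.turn == 0
assert bytes(state) == raw                        # struct -> bytes, unchanged
```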
2018/03/04
[ "https://Stackoverflow.com/questions/49100167", "https://Stackoverflow.com", "https://Stackoverflow.com/users/409106/" ]
This is not the expected behaviour. Components that do not change should not re-execute the mounted hook. I would search for the problem somewhere near the top of the Vue component hierarchy, because it sounds like some other piece of code might be forcing the re-rendering of the hierarchy.
When the router's path changes, components will mount again. If you want to mount a component only one time, you can try Vue's built-in component **keep-alive**; it will then only trigger its activated and deactivated hooks, and you can do your work in these two hooks.

**The HTML:**

```
<div id="app">
  <router-link to="/link-one">link-one</router-link>
  <router-link to="/link-two">link-two</router-link>
  <keep-alive>
    <router-view>
    </router-view>
  </keep-alive>
</div>
```

---

**The JavaScript:**

```
Vue.use(VueRouter);

function createLink(path) {
  let routePath = '/' + path;
  return {
    path: routePath,
    component: {
      name: path,
      template: '<span>{{path}} mountedTimes:{{mountedTimes}}, activatedTimes: {{activatedTimes}}</span>',
      data() {
        return {
          mountedTimes: 0,
          activatedTimes: 0,
          path: routePath
        }
      },
      mounted() {
        console.log('mounted')
        this.mountedTimes++;
      },
      activated() {
        this.activatedTimes++;
      }
    }
  }
}

const routes = [createLink('link-one'), createLink('link-two')];
const router = new VueRouter({
  routes
})
const app = new Vue({
  el: "#app",
  router
})
```

---

[CodePen](https://codepen.io/wangyi7099/pen/OvVaQr)
12,676
74,009,340
My question is similar to this([Python sum on keys for List of Dictionaries](https://stackoverflow.com/questions/8584504/python-sum-on-keys-for-list-of-dictionaries)), but need to sum up the values based on two or more key-value elements. I have a list of dictionaries as following: ``` list_to_sum= [{'Name': 'A', 'City': 'W','amt':100}, {'Name': 'B', 'City': 'A','amt':200}, {'Name': 'A', 'City': 'W','amt':300}, {'Name': 'C', 'City': 'X','amt':400}, {'Name': 'C', 'City': 'X','amt':500}, {'Name': 'A', 'City': 'W','amt':600}] ``` So based on a combination of Name and City key values, amt should be summed. Please let me know how to solve this. ``` Output: [{'Name': 'A', 'City': 'W','amt':900}, {'Name': 'B', 'City': 'A','amt':200}, {'Name': 'C', 'City': 'X','amt':900}] ```
2022/10/10
[ "https://Stackoverflow.com/questions/74009340", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20188249/" ]
You could create a [`collections.Counter`](https://docs.python.org/3/library/collections.html#collections.Counter). Then you can simply add the values as they appear, using the tuple `(Name, City)` as the key:

```
from collections import Counter

list_to_sum=[
    {'Name': 'A', 'City': 'W','amt':100},
    {'Name': 'B', 'City': 'A','amt':200},
    {'Name': 'A', 'City': 'W','amt':300},
    {'Name': 'C', 'City': 'X','amt':400},
    {'Name': 'C', 'City': 'X','amt':500},
    {'Name': 'A', 'City': 'W','amt':600}
]

totals = Counter()
for d in list_to_sum:
    totals[(d['Name'],d['City'])] += d['amt']

print(totals[('A','W')])   # 1000
print(totals[('B','A')])   # 200
print(totals[('C','X')])   # 900
```

This will produce a dictionary-like `Counter` object:

```
Counter({('A', 'W'): 1000, ('B', 'A'): 200, ('C', 'X'): 900})
```

With this you can convert back to a list of dicts like:

```
sums_list = [{'Name':Name, 'City':City, 'amt':amt} for (Name, City), amt in totals.items()]
```

giving `sums_list`:

```
[{'Name': 'A', 'City': 'W', 'amt': 1000},
 {'Name': 'B', 'City': 'A', 'amt': 200},
 {'Name': 'C', 'City': 'X', 'amt': 900}]
```
``` list_to_sum = [{'Name': 'A', 'City': 'W', 'amt': 100}, {'Name': 'B', 'City': 'A', 'amt': 200}, {'Name': 'A', 'City': 'W', 'amt': 300}, {'Name': 'C', 'City': 'X', 'amt': 400}, {'Name': 'C', 'City': 'X', 'amt': 500}, {'Name': 'A', 'City': 'W', 'amt': 600}] sum_store = {} for entry in list_to_sum: key = (entry['Name'], entry['City']) if key in sum_store: sum_store[key] += entry['amt'] else: sum_store[key] = entry['amt'] print(sum_store) ``` output: ``` {('A', 'W'): 1000, ('B', 'A'): 200, ('C', 'X'): 900} ```
12,677
50,692,816
I am getting the following SSL issue when running pip install: ``` python -m pip install zeep Collecting zeep Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",),)': /simple/zeep/ ```
2018/06/05
[ "https://Stackoverflow.com/questions/50692816", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1045057/" ]
I was able to resolve the issue by using the following: ``` python -m pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org --index-url=https://pypi.org/simple/ zeep ```
If you are using Windows, make sure the three paths below are added to the Windows environment variables:

1. ....\Anaconda\Library\bin
2. ....\Anaconda\Scripts
3. ....\Anaconda

If you are not using Anaconda, use the path where Python is installed in place of the Anaconda path.
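A quick way to sanity-check that setup is a small helper that reports which of the required directories are missing from a PATH string. This is only a sketch — the `C:\Anaconda` root below is an assumed install location; substitute your own:

```python
def missing_dirs(path_value, root=r"C:\Anaconda"):
    """Return which of the three required directories are absent from a
    semicolon-separated Windows PATH string."""
    required = [root, root + r"\Scripts", root + r"\Library\bin"]
    entries = [e.strip().rstrip("\\") for e in path_value.split(";") if e]
    return [d for d in required if d not in entries]

# Example: only two of the three entries are present
print(missing_dirs(r"C:\Anaconda;C:\Anaconda\Scripts"))
# -> ['C:\\Anaconda\\Library\\bin']
```

On the affected machine you could call it with `os.environ.get("PATH", "")` to see what still needs to be added.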
12,679
64,251,311
First post, so be gentle please.

I have a bash script running on a Linux server which does a daily sftp download of an Excel file. The file is moved to a Windows share. An additional requirement has arisen in that I'd like to add the number of rows to the filename, which is also timestamped and so different each day. Ideally at the end, before the xlsx extension.

After doing some research it would seem I may be able to do it all in the same script if I use Python and one of the Excel modules. I'm a complete beginner in Python, but I have done some experimenting and have some working code using the Pandas module. Here's what I have working in a test spreadsheet with a worksheet named mysheet, counting a column named code:

```
>>> excel_file = pd.ExcelFile('B:\PythonTest.xlsx')
>>> df = excel_file.parse('mysheet')
>>> df[['code']].count()
code    10
dtype: int64
>>> mycount = df[['code']].count()
>>> print(mycount)
code    10
dtype: int64
>>>
```

I have two questions please. First, how do I pass today's filename into the Python script to do the count on, and how do I return the result to bash? Also, how do I return just the count value, e.g. 10 in the above example? I don't want the column name or dtype passed back.

Thanks in advance.
2020/10/07
[ "https://Stackoverflow.com/questions/64251311", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14409634/" ]
Assuming we put your python into a separate script file, something like:

```py
# count_script.py
import sys
import pandas as pd

excel_file = pd.ExcelFile(sys.argv[1])
df = excel_file.parse('mysheet')
print(df[['code']].count().iloc[0])
```

We could then easily call that script from within the bash script that invoked it in the first place (the one that downloads the file).

```sh
TODAYS_FILE="PythonTest.xlsx"
# ...
# Download the file
# ...

# Pass the file into your python script (manipulate the file name to include
# the correct path first, if necessary).
# By printing the output in the python script, the bash subshell (invoking a
# command inside the $(...)) will slurp up the output and store it in the COUNT variable.
COUNT=$(python count_script.py "${TODAYS_FILE}")

# This performs a find/replace on $TODAYS_FILE, replacing the ending ".xlsx" with an
# underscore, then the count obtained via pandas, then tacks on a ".xlsx" again at the end.
NEW_FILENAME="${TODAYS_FILE/\.xlsx/_$COUNT}.xlsx"

# Then rename it
mv "${TODAYS_FILE}" "${NEW_FILENAME}"
```
You can pass command-line arguments to python programs by invoking them as such:

```
python3 script.py argument1 argument2 ... argumentn
```

They can then be accessed within the script using `sys.argv`. You must `import sys` before using it. `sys.argv[0]` is the name of the python script, and the rest are the additional command-line arguments.

Alternatively you may pass input via stdin, which can be read in Python using normal standard input functions like `input()`. To pass input on stdin, in bash do this:

```
echo $data_to_pass | python3 script.py
```

To give output you can write to stdout using `print()`. Then redirect the output in bash, to say, a file:

```
echo $data_to_pass | python3 script.py > output.txt
```

To get just the count value within Python, you simply need to add `.iloc[0]` at the end to get the first value; that is:

```
df[["code"]].count().iloc[0]
```

You can then `print()` it to send it to bash.
12,680
7,518,067
I have two folders: In, Out - it is not system folder on disk D: - Windows 7. Out contain "myfile.txt" I run the following command in python: ``` >>> shutil.copyfile( r"d:\Out\myfile.txt", r"D:\In" ) Traceback (most recent call last): File "<pyshell#39>", line 1, in <module> shutil.copyfile( r"d:\Out\myfile.txt", r"D:\In" ) File "C:\Python27\lib\shutil.py", line 82, in copyfile with open(dst, 'wb') as fdst: IOError: [Errno 13] Permission denied: 'D:\\In' ``` What's the problem?
2011/09/22
[ "https://Stackoverflow.com/questions/7518067", "https://Stackoverflow.com", "https://Stackoverflow.com/users/490908/" ]
Use **shutil.copy2** instead of **shutil.copyfile** ``` import shutil shutil.copy2('/src/dir/file.ext','/dst/dir/newname.ext') # file copy to another file shutil.copy2('/src/file.ext', '/dst/dir') # file copy to diff directory ```
Well, the question is old; for newer viewers, with Python 3.6 use

```
shutil.copyfile( "D:\Out\myfile.txt", "D:\In" )
```

instead of

```
shutil.copyfile( r"d:\Out\myfile.txt", r"D:\In" )
```

The `r` argument is passed for reading a file, not for copying.
12,681
41,857,659
My python code works correctly in the below example. My code combines a directory of CSV files and matches the headers. However, I want to take it a step further - how do I add a column that appends the filename of the CSV that was used? ``` import pandas as pd import glob globbed_files = glob.glob("*.csv") #creates a list of all csv files data = [] # pd.concat takes a list of dataframes as an agrument for csv in globbed_files: frame = pd.read_csv(csv) data.append(frame) bigframe = pd.concat(data, ignore_index=True) #dont want pandas to try an align row indexes bigframe.to_csv("Pandas_output2.csv") ```
2017/01/25
[ "https://Stackoverflow.com/questions/41857659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6067066/" ]
This should work: ``` import os for csv in globbed_files: frame = pd.read_csv(csv) frame['filename'] = os.path.basename(csv) data.append(frame) ``` `frame['filename']` creates a new column named `filename` and `os.path.basename()` turns a path like `/a/d/c.txt` into the filename `c.txt`.
Mike's answer above works perfectly. In case any googlers run into the following error:

```
>>> TypeError: cannot concatenate object of type "<type 'str'>"; only pd.Series, pd.DataFrame, and pd.Panel (deprecated) objs are valid
```

It's possibly because the separator is not correct. I was using a custom csv file, so the separator was `^`. Because of that, I needed to include the separator in the `pd.read_csv` call.

```
import os

for csv in globbed_files:
    frame = pd.read_csv(csv, sep='^')
    frame['filename'] = os.path.basename(csv)
    data.append(frame)
```
12,691
34,971,363
I make heavy use of Python threading, and many of my use-cases require that I log separate task executions under different logger names. A typical code example would be:

```
def task(logger=logger):
    global_logger.info('Task executing')
    for item in [subtask(x) for x in range(100000)]:
        # While debugging, one would generally prefer to see these
        global_logger.debug('Task executed item {}'.format(item))
    global_logger.info('Task done')

def thread_task(task_index, logger=logger):
    task(logger=global_logger.getChild('main.task.{}'.format(task_index)))

def main():
    pool = Pool(10)
    for item in pool.map(thread_task, range(128)):
        pass
```

Is there any standard approach in the python logging module that would allow me to set the logging level for all threads at once? In code, what I am trying to do is:

```
# For all threads
logger.getChild('main.task.0').setLevel('INFO')
logger.getChild('main.task.1').setLevel('INFO')
logger.getChild('main.task.2').setLevel('INFO')
```

I should note that I am aware that I could set the levels inside `task_thread`. My question is there to find out if this can be done more easily.
2016/01/24
[ "https://Stackoverflow.com/questions/34971363", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1055356/" ]
Because loggers inherit their parent's level if not explicitly set, you could just do e.g. ``` root_name = global_logger.name logging.getLogger(root_name + '.main.task').setLevel(logging.INFO) ``` and that would mean that all child loggers inherit that level, unless a level were explicitly set for one of them. Note that unless you want to attach different handlers to the different thread loggers, you don't get much benefit from having a logger per thread - you can always use [other methods](https://docs.python.org/2/howto/logging-cookbook.html#adding-contextual-information-to-your-logging-output) to put the `task_index` value in the log.
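That inheritance behaviour is easy to verify in isolation; a minimal sketch (the logger names here are made up for illustration):

```python
import logging

# Set a level once on the shared ancestor...
logging.getLogger("app.main.task").setLevel(logging.INFO)

# ...and every per-thread child created later inherits it, because a logger
# with no level of its own walks up the dotted-name hierarchy.
for task_index in range(3):
    child = logging.getLogger("app.main.task.{}".format(task_index))
    print(child.getEffectiveLevel() == logging.INFO)  # True for each child

# An explicit level on one child overrides the inherited value.
logging.getLogger("app.main.task.0").setLevel(logging.DEBUG)
print(logging.getLogger("app.main.task.0").getEffectiveLevel() == logging.DEBUG)  # True
```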
I was having the same issue suppressing the output from the `BAC0` library. I tried changing the parent logger and that did not update the children (Python 3.9) This answer is based on the accepted answer in this post: [How to list all existing loggers using python.logging module](https://stackoverflow.com/questions/53249304/how-to-list-all-existing-loggers-using-python-logging-module) ```py for name in logging.root.manager.loggerDict: if name.startswith("main.task."): logging.getLogger(name).setLevel(logging.INFO) ``` You could probably replace the `name.startswith("main.task.")` with some regex matching for wild card notation, but this should do the trick for you. What it does is it gets and iterates over every logger which has been configured, and checks if the name matches your search criteria. If it does, it will get the logger by that name, and set the level to `INFO`.
12,693
35,204,703
One script starts automatically when my Raspberry Pi boots up. Within this script there is a motion sensor; if motion is detected, it starts a subprocess, camera.py (which records a video, then converts the video and emails it).

Within the main script that starts on boot there is another if statement: if a button is pressed, stop camera.py and everything in it, and do something else.

I am unable to kill the process by PID because it keeps changing. The only other option is to kill camera.py by its name, but that doesn't work.

main script:

```
p1 = subprocess.Popen("sudo python /home/pi/camera.py", shell=True)
```

this is my camera.py script:

```
import os
os.system("raspivid -n -o /home/pi/viseo.h264 -t 10000")
os.system(.... python script0.py
os.system(.... python script1.py
```

I can do:

```
os.system("sudo killall raspivid")
```

If I try

```
os.system("sudo killall camera.py")
```

it gives me the message "No process found". This only stops the recording, but I also want to kill every other script within camera.py.

Can anyone help please? Thanks.
2016/02/04
[ "https://Stackoverflow.com/questions/35204703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5794219/" ]
Use `pkill`: ``` $ sudo pkill -f camera.py ```
If you make camera.py executable, put it on your $PATH, and make line 1 of the script `#!/usr/bin/python`, then you can execute camera.py without the `python` command in front of it, and your `sudo killall camera.py` command should work.
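As a sketch of that setup (the interpreter path and the script body below are assumptions, just to have something concrete):

```shell
# Give the script a shebang (a direct interpreter path, not "env") and an
# execute bit; when the kernel runs the script directly, the process name
# becomes "camera.py", which is what killall matches.
printf '#!/usr/bin/python3\nimport time\ntime.sleep(600)\n' > camera.py
chmod +x camera.py
head -n 1 camera.py    # the shebang must be the very first line

# Later, from the main script:
#   subprocess.Popen("/home/pi/camera.py")   # no leading "python"
#   os.system("sudo killall camera.py")      # now finds the process
```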
12,694
40,866,883
I use Python 3.4, PyQt5 and Qt Designer (WinPython distribution). I like the idea of making GUIs in Designer and importing them in Python with setupUi. I'm able to show main windows and QDialogs. However, now I would like to set my MainWindow always on top and with the close button only. I know this can be done by setting window flags. I tried it as follows:

```
from PyQt5 import QtCore, QtGui, QtWidgets
import sys

class MainWindow(QtWidgets.QMainWindow,Ui_MainWindow):
    def __init__(self, parent=None):
        super(MainWindow, self).__init__(parent)
        self.setupUi(self)
        self.setWindowFlags(QtCore.Qt.WindowCloseButtonHint | QtCore.Qt.WindowMinimizeButtonHint)
        self.setWindowFlags(QtCore.Qt.WindowStaysOnTopHint)

if __name__ == '__main__':
    app = QtWidgets.QApplication(sys.argv)
    form = MainWindow()
    form.show()
    sys.exit(app.exec_())
```

The MainWindow shows up (without error) but the flags are not applied. I suppose this is because I asked to change the window properties after it was already created.

Now, the questions are: how can I do this without modifying Ui_MainWindow directly? And is it possible to change flags in Qt Designer?

Thanks
2016/11/29
[ "https://Stackoverflow.com/questions/40866883", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4491532/" ]
Every call of `setWindowFlags` will completely override the current settings, so you need to set all the flags at once. Also, you must include the `CustomizeWindowHint` flag, otherwise all the other hints will be ignored. The following will probably work on Windows: ``` self.setWindowFlags( QtCore.Qt.Window | QtCore.Qt.CustomizeWindowHint | QtCore.Qt.WindowTitleHint | QtCore.Qt.WindowCloseButtonHint | QtCore.Qt.WindowStaysOnTopHint ) ``` However, it is highly unlikely this will work on all platforms. "Hint" really does mean just that. Window managers are completely free to ignore these flags and there's no guarantee they will all behave in the same way. PS: It is not possible to set the window flags in Qt Designer.
I would propose a different solution, because it keeps the existing flags. Reason to do this, is to NOT mingle with UI-specific presets (like that a dialog has not by default a "maximize" or "minimize" button). ``` self.setWindowFlags(self.windowFlags() # reuse initial flags & ~QtCore.Qt.WindowContextHelpButtonHint # negate the flag you want to unset ) ```
12,697
47,689,456
I was trying to connect to an Oracle database using Python like below:

```
import cx_Oracle
conn = cx_Oracle.connect('user/password@host:port/database')
```

I faced an error when connecting to Oracle:

DatabaseError: DPI-1047: 64-bit Oracle Client library cannot be loaded: "libclntsh.so: cannot open shared object file: No such file or directory". See <https://oracle.github.io/odpi/doc/installation.html#linux> for help.

I've been struggling to figure it out. I used my user name, password, host, port and database ('orcl'), for example `'admin/admin@10.10.10.10:1010/orcl'`. Why couldn't it connect? By the way, I'm running all the code in Azure notebooks.
2017/12/07
[ "https://Stackoverflow.com/questions/47689456", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3176741/" ]
That error indicates that you are missing a 64-bit Oracle client installation or it hasn't been configured correctly. Take a look at the link mentioned in the error message. It will give instructions on how to perform the Oracle client installation and configuration. [Update on behalf of Anthony: his latest cx\_Oracle release doesn't need Oracle Client libraries so you won't see the DPI-1047 error if you upgrade. The driver got renamed to python-oracledb but the API still supports the Python DB API 2.0 specification. See the [homepage](https://oracle.github.io/python-oracledb/).]
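For reference, the connection string in the question is Oracle's "EZConnect" format — host, port and service name. A tiny helper makes the pieces explicit (the values are just the placeholders from the question, not real credentials):

```python
def make_dsn(host, port, service_name):
    """Build an EZConnect descriptor of the form host:port/service_name."""
    return "{}:{}/{}".format(host, port, service_name)

dsn = make_dsn("10.10.10.10", 1010, "orcl")
print(dsn)  # -> 10.10.10.10:1010/orcl

# Once the client libraries are installed (or with the newer python-oracledb
# thin driver, which needs none), the connection call looks like:
#   conn = cx_Oracle.connect("admin", "admin", dsn)
```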
This seems to be a problem with version 6.x; it did not appear in 5.x. In my case a little workaround worked: I installed on my physical machine, and the only thing I needed to do was reboot the PC (or reopen the terminal), since I had added the client to the PATH environment variable. You could try installing on a physical machine instead of using Azure notebooks.
12,698
53,649,039
I have a Databricks notebook set up that works as follows:

* pyspark connection details to a Blob storage account
* Read file through spark dataframe
* Convert to pandas df
* Data modelling on pandas df
* Convert to spark df
* Write to blob storage as a single file

My problem is that you cannot name the output file, and I need a static csv filename. Is there a way to rename the file in pyspark?

```
## Blob Storage account information
storage_account_name = ""
storage_account_access_key = ""

## File location and File type
file_location = "path/.blob.core.windows.net/Databricks_Files/input"
file_location_new = "path/.blob.core.windows.net/Databricks_Files/out"
file_type = "csv"

## Connection string to connect to blob storage
spark.conf.set(
  "fs.azure.account.key."+storage_account_name+".blob.core.windows.net",
  storage_account_access_key)
```

Followed by outputting the file after the data transformation:

```
dfspark.coalesce(1).write.format('com.databricks.spark.csv') \
  .mode('overwrite').option("header", "true").save(file_location_new)
```

The file is then written as **"part-00000-tid-336943946930983.....csv"**, whereas the goal is to have **"Output.csv"**.

Another approach I looked at was just recreating this in Python, but I have not yet found in the documentation how to output the file back to blob storage. I know the method to retrieve from Blob storage is *.get_blob_to_path* via [microsoft.docs](https://learn.microsoft.com/en-us/azure/machine-learning/team-data-science-process/explore-data-blob#blob-dataexploration).

Any help here is greatly appreciated.
2018/12/06
[ "https://Stackoverflow.com/questions/53649039", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6050134/" ]
> > I know the compiler is supposed to generate an error for templates that are erroneous for any template parameter even if not instantiated. > > > That is not the case, though. If no instantiation can be generated for a template, then the program is ill-formed, **no diagnostic required**(1). So the program is ill-formed regardless of whether you get an error or it compiles "successfully." Looking at it from the other perspective, a compiler must not allow a warning-turned-error to affect SFINAE, as that could change the semantics of a valid program and would thus make the compiler non-conforming. So if a compiler wants to diagnose a warning as an error, it must do this by stopping compilation and not by introducing a substitution failure. In other words, `-Werror` can make the compiler reject a well-formed program (that is its intended purpose, after all), but it would be a compiler bug if it changed the semantics of one. --- (1) Quoting C++17 (N4659), [temp.res] 17.6/8: > > The program is > ill-formed, no diagnostic required, if: > > > * no valid specialization can be generated for a template ... and the template is not instantiated, or > * ... > > >
While it's largely a quality of implementation issue, `-Werror` can indeed (and does) interfere with SFINAE. Here is a more involved example to test it: ``` #include <type_traits> template <typename T> constexpr bool foo() { if (false) { T a; } return false; } template<typename T, typename = void> struct Check {}; template<typename T> struct Check<T, std::enable_if_t<foo<T>()>> {}; int main() { Check<int> c; } ``` The line `T a;` can trigger that warning (and error), even though the branch is dead (it's dead on purpose, so that `foo` is a constexpr function mostly regardless of `T`). Now, according to the standard itself, that is a well-formed program. But because [Clang](http://coliru.stacked-crooked.com/a/fb7678387e33b97b) and [GCC](http://coliru.stacked-crooked.com/a/d75bfb09c9eb92c9) cause an error there, and that error is in the non-immediate context of the `Check` specialization, we get a hard error. Even though according to the standard itself this should just fall back to the primary template due to substitution failure in the immediate context only.
12,703
40,476,046
I'm an amateur Python programmer trying to use the Django framework for an Android app backend. Everything is okay, but my problem is how to pass the image in the FileField to JSON. I have tried using SerializerMethodField as described in the REST framework documentation, but it didn't work. Sorry if this question is off track, but I seriously need help.

This is from my serializer class:

```
class DealSerializer(serializers.ModelSerializer):
    class Meta:
        model = Deal
        image = serializers.SerializerMethodField()
        fields = [
            'title',
            'description',
            'image'
        ]

    def get_image(obj):
        return obj.image.url
```

and this is my view:

```
class DealList(APIView):
    def get(self, request):
        deals= Deal.objects.all()
        serializer = DealSerializer(deals, many=True)
        return Response(serializer)
```
2016/11/07
[ "https://Stackoverflow.com/questions/40476046", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6214350/" ]
If you want to check if two files are equal, you can check the exit code of `diff -q` (or `cmp`). This is faster since it doesn't require finding the exact differences: ``` if diff -q file1 file2 > /dev/null then echo "The files are equal" else echo "The files are different or inaccessible" fi ``` All Unix tools have an exit code, and it's usually faster, easier and more robust to check that than to capture and compare their output in a text-based way.
You can use the logical AND operator (`&&`).

For one command:

```
diff -q file1 file2 > /dev/null && echo "The files are equal"
```

Or for more commands:

```
diff -q file1 file2 > /dev/null && {
    echo "The files are equal";
    echo "Other command"
    echo "More other command"
}
```
12,704