Dataset columns (one value per field, per row):

title           string (length 10 to 172)
question_id     int64 (469 to 40.1M)
question_body   string (length 22 to 48.2k)
question_score  int64 (-44 to 5.52k)
question_date   string (length 20)
answer_id       int64 (497 to 40.1M)
answer_body     string (length 18 to 33.9k)
answer_score    int64 (-38 to 8.38k)
answer_date     string (length 20)
tags            list
AI project in Python -- UnboundLocalError: local variable 'x' referenced before assignment
39,399,444
<pre><code>def iterativeDeepeningSearch(problem): def depthLimitedDFS(node, problem, depth): if depth==0: return if problem.isGoalState(node[-1]): return node for move, acao, c in problem.getSuccessors(node[-1]): if move not in node: ode = depthLimitedDFS(node+[move],problem, depth-1) if x: return x for depth in itertools.count(): node = depthLimitedDFS([problem.getStartState()], problem, depth) if node: return node </code></pre> <p>I am trying to execute this code in the project (Pacman), but it returns an error: UnboundLocalError: local variable 'x' referenced before assignment.</p>
-2
2016-09-08T20:14:33Z
39,399,491
<p>What <strong>Python</strong> is telling you is that you are trying to use <code>x</code> before any assignment to it. That is: you never assigned <code>x</code> <strong>at all</strong>, yet you are trying to inspect a possible value of it (which doesn't make sense).</p> <p>What is <code>x</code> supposed to do in your code? Think about that and you will probably figure out what to do to solve your issue.</p>
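A minimal sketch (unrelated to the Pacman project) of how this error typically arises and how binding the name before reading it avoids the problem:

```python
def first_large(values):
    for v in values:
        if v > 2:
            x = v      # 'x' is only assigned inside this branch
    return x           # UnboundLocalError if that branch never ran

first_large([1, 2])    # raises: local variable 'x' referenced before assignment

def first_large_fixed(values):
    x = None           # bind the name before it is ever read
    for v in values:
        if v > 2:
            x = v
            break
    return x
```

In the question's code the recursive result appears to be assigned to one name while a different name is tested, which is exactly the kind of mismatch that produces this error.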
1
2016-09-08T20:17:29Z
[ "python", "python-2.7", "search", "graph" ]
Django form creates new instance instead of updating an existing one
39,399,547
<p>I've seen this question asked a lot of times here but I can't figure out why it doesn't work in my case. I have the following view code:</p> <pre><code>def edit(request, coffee_id=None): coffee = get_object_or_404(Drink, pk=coffee_id) if coffee_id else Drink() if request.method == 'POST': form = CoffeeForm(request.POST, instance=coffee) if form.is_valid(): form.save() return HttpResponseRedirect(urlresolvers.reverse('coffee:index')) else: form = CoffeeForm(instance=coffee) return render(request, 'edit.html', {'coffee_form': form}) </code></pre> <p>This is supposed to create a new instance of coffee or to update an existing one if the coffee_id given as an argument exists in the database.</p> <p>However, even if coffee_id exists in the database, a new instance of coffee is always created.</p> <p>I also tried to save the coffee instance without saving the form but it does the same.</p> <p>Is there something that I'm doing wrong? Should I set something special in the model to allow updates?</p> <p><strong>Edit</strong></p> <p>This is the Drink form:</p> <pre><code>class CoffeeForm(forms.ModelForm): class Meta: model = Drink fields = ('time', 'location', 'type') def __init__(self, *args, **kwargs): super(forms.ModelForm, self).__init__(*args, **kwargs) coffee_category = Category.objects.get(name='coffee') coffee_drink_types = DrinkType.objects.filter(category=coffee_category.id) self.fields['type'].choices = ((x.id, str(x)) for x in coffee_drink_types) </code></pre> <p>And the Drink model:</p> <pre><code>class Drink(models.Model): time = models.DateTimeField('time', default=datetime.datetime.now) location = models.ForeignKey(Location) type = models.ForeignKey(DrinkType) </code></pre> <p><strong>Edit</strong></p> <p>Here are the urls:</p> <pre><code>urlpatterns = [ url(r'^$', views.index, name='index'), url(r'^edit/$', views.edit, name='edit'), url(r'^edit/(?P&lt;coffee_id&gt;[0-9]*)/$', views.edit, name='edit') ] </code></pre>
0
2016-09-08T20:20:59Z
39,399,849
<p>get_object_or_404 errors out if the Drink with the given id doesn't exist.</p> <p>Are you sure POSTing is done against /edit/1? Because if the form action doesn't have the ID, a new Drink is created.</p> <p>I'd recommend you go with separate create and edit views. Possibly use class based views, but if you have reasons not to use them, something like this could work:</p> <pre><code>def create_coffee(request): if request.method == 'POST': form = CoffeeForm(request.POST) if form.is_valid(): coffee = form.save() return redirect(reverse('edit_coffee', kwargs={'coffee_id': coffee.pk})) else: form = CoffeeForm() return render(request, 'create.html', {'coffee_form': form}) def edit_coffee(request, coffee_id=None): coffee = Drink.objects.filter(pk=coffee_id).first() if not coffee: return redirect(reverse('create_coffee')) else: if request.method == 'POST': form = CoffeeForm(request.POST, instance=coffee) if form.is_valid(): form.save() else: form = CoffeeForm(instance=coffee) return render(request, 'edit.html', {'coffee_form': form}) </code></pre>
0
2016-09-08T20:40:04Z
[ "python", "django" ]
Django form creates new instance instead of updating an existing one
39,399,547
<p>I've seen this question asked a lot of times here but I can't figure out why it doesn't work in my case. I have the following view code:</p> <pre><code>def edit(request, coffee_id=None): coffee = get_object_or_404(Drink, pk=coffee_id) if coffee_id else Drink() if request.method == 'POST': form = CoffeeForm(request.POST, instance=coffee) if form.is_valid(): form.save() return HttpResponseRedirect(urlresolvers.reverse('coffee:index')) else: form = CoffeeForm(instance=coffee) return render(request, 'edit.html', {'coffee_form': form}) </code></pre> <p>This is supposed to create a new instance of coffee or to update an existing one if the coffee_id given as an argument exists in the database.</p> <p>However, even if coffee_id exists in the database, a new instance of coffee is always created.</p> <p>I also tried to save the coffee instance without saving the form but it does the same.</p> <p>Is there something that I'm doing wrong? Should I set something special in the model to allow updates?</p> <p><strong>Edit</strong></p> <p>This is the Drink form:</p> <pre><code>class CoffeeForm(forms.ModelForm): class Meta: model = Drink fields = ('time', 'location', 'type') def __init__(self, *args, **kwargs): super(forms.ModelForm, self).__init__(*args, **kwargs) coffee_category = Category.objects.get(name='coffee') coffee_drink_types = DrinkType.objects.filter(category=coffee_category.id) self.fields['type'].choices = ((x.id, str(x)) for x in coffee_drink_types) </code></pre> <p>And the Drink model:</p> <pre><code>class Drink(models.Model): time = models.DateTimeField('time', default=datetime.datetime.now) location = models.ForeignKey(Location) type = models.ForeignKey(DrinkType) </code></pre> <p><strong>Edit</strong></p> <p>Here are the urls:</p> <pre><code>urlpatterns = [ url(r'^$', views.index, name='index'), url(r'^edit/$', views.edit, name='edit'), url(r'^edit/(?P&lt;coffee_id&gt;[0-9]*)/$', views.edit, name='edit') ] </code></pre>
0
2016-09-08T20:20:59Z
39,399,963
<p>You should use <code>CoffeeForm</code>, not <code>ModelForm</code> when calling <code>super</code>. </p> <pre><code>class CoffeeForm(forms.ModelForm): class Meta: model = Drink fields = ('time', 'location', 'type') def __init__(self, *args, **kwargs): super(CoffeeForm, self).__init__(*args, **kwargs) </code></pre>
1
2016-09-08T20:48:13Z
[ "python", "django" ]
How to access MIFARE 1k memory blocks with Gemalto Prox-SU reader?
39,399,577
<p>I have just recently jumped into smartcard programming.</p> <p>I am using a Gemalto Prox-SU reader and have several blank MIFARE Classic 1k cards available, on an Ubuntu 16.04 machine. I have installed the Gemalto Prox-SU reader and got the reader to detect a card through a script in Python using <a href="https://github.com/LudovicRousseau/pyscard" rel="nofollow">Ludovic Rousseau's pyscard</a>.</p> <p>I have managed to write a script which sends APDUs to the reader/card connection. I can read the ATR, send a GetData command to read the card's serial number, and have been trying to send several APDUs to the card to try and read the card memory blocks. Aside from LoadKey commands, however, everything else is returning "0x6982: security status not satisfied".</p> <p>I know I am supposed to send a General Authentication command before every read and write, as stated in the manual, but even the General Authenticate command is returning "security status not satisfied". From what I have been reading this should be really simple. What am I missing? How do I set up my script so that authentication succeeds and I can read data from the memory blocks?</p>
1
2016-09-08T20:22:56Z
39,460,391
<p>The typical flow for reading MIFARE Classic 1K cards with the Prox-SU reader (see the <a href="http://support.gemalto.com/fileadmin/user_upload/drivers/Prox-DU_and_Prox-SU/DOC118569D_Prox-DUSU_RefMan.pdf" rel="nofollow">manual</a>) is:</p> <ol> <li><p>Load an authentication key. For instance, if your card is readable using key A with the default value ("transport key") <code>FF FF FF FF FF FF</code>, you would use the following LOAD KEY command:</p> <pre> FF 82 00 50 06 FFFFFFFFFFFF ^^ ^^ ^^^^^^^^^^^^ | | \-- Key | | | \------------------ Key slot 80 (0x50) | \--------------------- Key in RAM (0x00) </pre> <p>This stores the key <code>FF FF FF FF FF FF</code> into the first key slot (0x50) in volatile memory (RAM) of the reader.</p></li> <li><p>Authenticate to a sector using the GENERAL AUTHENTICATE command. Even though you authenticate to the whole sector you need to address the sector by a block number (typically the first block of the sector):</p> <pre> FF 86 00 00 05 01 0004 60 50 ^^^^ ^^ ^^ | | \-- Key slot 80 (0x50) | | | \----- Key type (0x60 = Key A, 0x61 = Key B) | \-------- Block number (block 4) </pre></li> <li><p>Finally, you can read a block using the READ BINARY command:</p> <pre> FF B0 0004 10 ^^^^ \-------- Block number (block 4) </pre></li> </ol> <p>If you receive a status code <code>69 82</code> during GENERAL AUTHENTICATE, this most likely indicates that you are trying to authenticate with an incorrect key.</p>
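For reference, a rough pyscard sketch of that load-key / authenticate / read sequence. The reader index, key, and block number are just the example values from the steps above; this is a sketch and has not been tested against a real Prox-SU:

```python
from smartcard.System import readers

reader = readers()[0]             # assumes the Prox-SU is the first PC/SC reader found
conn = reader.createConnection()
conn.connect()

load_key = [0xFF, 0x82, 0x00, 0x50, 0x06] + [0xFF] * 6                # key FF..FF into slot 0x50
auth = [0xFF, 0x86, 0x00, 0x00, 0x05, 0x01, 0x00, 0x04, 0x60, 0x50]   # key A, block 4
read_block = [0xFF, 0xB0, 0x00, 0x04, 0x10]                           # read 16 bytes of block 4

for apdu in (load_key, auth, read_block):
    data, sw1, sw2 = conn.transmit(apdu)
    print("SW=%02X %02X data=%s" % (sw1, sw2, data))
```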
0
2016-09-12T23:59:13Z
[ "python", "mifare", "apdu", "contactless-smartcard", "gemalto" ]
ValueError: unsupported format character '}' when using % string formatting
39,399,600
<p>I'm trying to generate some LaTeX markup by using Python % string formatting. I use named fields in the string and use a dictionary with matching keys for the data. However, I get the error <code>ValueError: unsupported format character '}'</code>. Why isn't this code working?</p> <pre><code>LaTeXentry = '''\\subsection{{%(title)}} \\begin{{itemize}} \\item %(date) \\item %(description) \\item Source:\\cite{{%(title)}} \\item filename(s): %(filename) \\item Contributed by %(name)''' LaTeXcodeToAdd = LaTeXentry % { "time" : Timestamp, "date" : date, "description" : summary, "filename" : filename, "name" : name, "title": title, } </code></pre> <pre><code>Traceback (most recent call last): File "file_directory", line 115, in &lt;module&gt; "title": title, ValueError: unsupported format character '}' (0x7d) at index 21 </code></pre>
1
2016-09-08T20:24:45Z
39,399,693
<p>You have to add <code>s</code> like in standard formatting <code>%s</code> - so you need <code>%(title)s</code>, <code>%(date)s</code>, etc.</p>
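Applied to the template from the question, the corrected version would look roughly like this. Note that with %-style formatting the braces do not need to be doubled; doubling is only required with str.format:

```python
LaTeXentry = '''\\subsection{%(title)s}
\\begin{itemize}
\\item %(date)s
\\item %(description)s
\\item Source: \\cite{%(title)s}
\\item filename(s): %(filename)s
\\item Contributed by %(name)s'''

LaTeXcodeToAdd = LaTeXentry % {
    "date": date,
    "description": summary,
    "filename": filename,
    "name": name,
    "title": title,
}
```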
1
2016-09-08T20:30:41Z
[ "python", "string-formatting" ]
Deploy fabric script to multiple users of the same machine
39,399,620
<p>Is it possible (and how) to run a fab script for multiple users on the same machine? For example, say I have users <code>user1</code> and <code>user2</code> on a remote machine and I want to use a fabfile that runs a bash script that uses the <code>$HOME</code> environment variable. I need to run the bash script twice, first as <code>user1</code> and then as <code>user2</code>.</p>
2
2016-09-08T20:26:19Z
39,400,581
<p>Something like this? You could run multiple commands and add as many users as you need.</p> <pre><code>for i in user1 user2 user3 ; do su -c "/some/directory/Script" ${i} done </code></pre>
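Since the question is tagged fabric, here is a rough Fabric 1.x sketch of the same idea. It is untested, the script path is hypothetical, and depending on the sudo configuration you may need a login shell (or sudo -H) so that $HOME resolves to the target user rather than the connecting one:

```python
from fabric.api import sudo, task

USERS = ['user1', 'user2']

@task
def run_for_all_users():
    for user in USERS:
        # run the same script once per user
        sudo('bash /some/directory/script.sh', user=user)
```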
0
2016-09-08T21:35:43Z
[ "python", "bash", "fabric" ]
interacting python and abaqus
39,399,635
<p>I need an algorithm to update an input file; I found out that I can modify a .py file and run it in Abaqus.</p> <p>But because the process needs to be automated, I'm trying to open a script and run it in Abaqus.</p> <p>I tried this: os.system('abaqus cae script=C:\Users\Samuel\abaqus-1\script1.py')</p> <pre><code>import os import subprocess HERE = os.path.dirname(os.path.abspath(__file__)) def create_script(name): path = os.path.join(HERE, 'abaqus-1', name) return path name = 'script1.py' script_path = create_script(name) print (script_path) args = ['abaqus', 'cae', 'script={0}'.format(script_path)] print (args) p = subprocess.Popen(args) # Success! print(p.communicate()) </code></pre> <p>This works in the Windows cmd console but doesn't work from Python; if anyone can help me I would appreciate it.</p> <p>Error:</p> <pre><code>['abaqus', 'cae', 'script=C:\\Users\\Samuel\\abaqus-1\\script1.py'] Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "C:\Users\Samuel\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile execfile(filename, namespace) File "C:\Users\Samuel\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 89, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/Samuel/prueba control.py", line 28, in &lt;module&gt; p = subprocess.Popen(args) # Success! File "C:\Users\Samuel\Anaconda3\lib\subprocess.py", line 947, in __init__ restore_signals, start_new_session) File "C:\Users\Samuel\Anaconda3\lib\subprocess.py", line 1224, in _execute_child startupinfo) FileNotFoundError: [WinError 2] The system cannot find the file specified </code></pre>
1
2016-09-08T20:27:06Z
40,007,983
<p>Maybe this sentence is incorrect - </p> <pre><code>os.system('abaqus cae script=C:\Users\Samuel\abaqus-1\script1.py') </code></pre> <p>You have to run a python script in Abaqus using the command </p> <pre><code>abaqus cae noGUI=nameOfScript.py </code></pre> <p>So in your case, </p> <pre><code>os.system('abaqus cae noGUI=C:\\Users\\Samuel\\abaqus-1\\script1.py') </code></pre> <p>I am not sure about the '\' since I usually open abaqus in the same folder where I have my python scripts. </p>
1
2016-10-12T20:41:16Z
[ "python", "abaqus" ]
How to get the value from a list + input
39,399,672
<p>So sorry for the vague title but this is my problem. (I just started this study)</p> <h2>For example</h2> <pre><code>list_1 = [a, b, c, d, e, f] input_1 = input('question1') input_2 = input('question2') </code></pre> <p>lets say they chose</p> <pre><code>input_1 = b input_2 = f </code></pre> <p>I need something that prints the letters between the 2 chosen ones. So you would get</p> <pre><code>c d e </code></pre> <p>What I tried is using the for statement with an if statement but it didnt work out very well. I also need something that checks if input2 is further in the list than input1. I tried with</p> <pre><code>if input_2 &gt; input_1: print('yey') else: print('not recognized, automatically f') input_2 = list_1[-1] </code></pre> <p>sorry for the beginner question =(</p>
0
2016-09-08T20:29:35Z
39,399,754
<p>You will probably want to loop through the list and compare characters like you would numbers.</p> <pre><code>for i in range(len(list_1)): if list_1[i] &gt; input_1 and list_1[i] &lt; input_2: print list_1[i] </code></pre>
0
2016-09-08T20:34:56Z
[ "python", "list", "for-loop", "input" ]
How to get the value from a list + input
39,399,672
<p>So sorry for the vague title but this is my problem. (I just started this study)</p> <h2>For example</h2> <pre><code>list_1 = [a, b, c, d, e, f] input_1 = input('question1') input_2 = input('question2') </code></pre> <p>lets say they chose</p> <pre><code>input_1 = b input_2 = f </code></pre> <p>I need something that prints the letters between the 2 chosen ones. So you would get</p> <pre><code>c d e </code></pre> <p>What I tried is using the for statement with an if statement but it didnt work out very well. I also need something that checks if input2 is further in the list than input1. I tried with</p> <pre><code>if input_2 &gt; input_1: print('yey') else: print('not recognized, automatically f') input_2 = list_1[-1] </code></pre> <p>sorry for the beginner question =(</p>
0
2016-09-08T20:29:35Z
39,399,763
<pre><code>if input_1 in list_1 and input_2 in list_1: print(list_1[list_1.index(input_1)+1:list_1.index(input_2)]) else: print("didn't find input_1 or input_2") </code></pre> <p>Something like this?</p> <p>If you need to automatically print the rest of the list from before input_2 or after input_1, you can just add if statements like:</p> <pre><code>if input_1 in list_1 and input_2 in list_1: print(list_1[list_1.index(input_1)+1:list_1.index(input_2)]) elif input_1 in list_1 and not input_2 in list_1: print(list_1[list_1.index(input_1)+1:]) elif not input_1 in list_1 and input_2 in list_1: print(list_1[0:list_1.index(input_2)]) else: print("didn't find input_1 or input_2") </code></pre>
0
2016-09-08T20:35:15Z
[ "python", "list", "for-loop", "input" ]
How to get the value from a list + input
39,399,672
<p>So sorry for the vague title but this is my problem. (I just started this study)</p> <h2>For example</h2> <pre><code>list_1 = [a, b, c, d, e, f] input_1 = input('question1') input_2 = input('question2') </code></pre> <p>lets say they chose</p> <pre><code>input_1 = b input_2 = f </code></pre> <p>I need something that prints the letters between the 2 chosen ones. So you would get</p> <pre><code>c d e </code></pre> <p>What I tried is using the for statement with an if statement but it didnt work out very well. I also need something that checks if input2 is further in the list than input1. I tried with</p> <pre><code>if input_2 &gt; input_1: print('yey') else: print('not recognized, automatically f') input_2 = list_1[-1] </code></pre> <p>sorry for the beginner question =(</p>
0
2016-09-08T20:29:35Z
39,399,783
<p><code>input_1</code> and <code>input_2</code> are elements in the list. </p> <p>You can use a slice in Python to get the elements present between the input elements.</p> <pre><code> index_1 = list_1.index(input_1) index_2 = list_1.index(input_2) new_list = list_1[index_1:index_2] print new_list </code></pre>
0
2016-09-08T20:36:19Z
[ "python", "list", "for-loop", "input" ]
How to get the value from a list + input
39,399,672
<p>So sorry for the vague title but this is my problem. (I just started this study)</p> <h2>For example</h2> <pre><code>list_1 = [a, b, c, d, e, f] input_1 = input('question1') input_2 = input('question2') </code></pre> <p>lets say they chose</p> <pre><code>input_1 = b input_2 = f </code></pre> <p>I need something that prints the letters between the 2 chosen ones. So you would get</p> <pre><code>c d e </code></pre> <p>What I tried is using the for statement with an if statement but it didnt work out very well. I also need something that checks if input2 is further in the list than input1. I tried with</p> <pre><code>if input_2 &gt; input_1: print('yey') else: print('not recognized, automatically f') input_2 = list_1[-1] </code></pre> <p>sorry for the beginner question =(</p>
0
2016-09-08T20:29:35Z
39,399,797
<p>You probably want to use the list.index() method, you can find documentation <a href="https://docs.python.org/2/tutorial/datastructures.html#more-on-lists" rel="nofollow">here</a>.</p> <p>In your case, this will look like </p> <pre><code>i1 = list_1.index(input_1) i2 = list_1.index(input_2) if i2 &gt; i1: print('yey') </code></pre> <p>And then you can use a slice and loop to print the sublist</p> <pre><code>sublist = list_1[i1:i2] for c in sublist: print(c) </code></pre>
0
2016-09-08T20:37:17Z
[ "python", "list", "for-loop", "input" ]
How to get the value from a list + input
39,399,672
<p>So sorry for the vague title but this is my problem. (I just started this study)</p> <h2>For example</h2> <pre><code>list_1 = [a, b, c, d, e, f] input_1 = input('question1') input_2 = input('question2') </code></pre> <p>lets say they chose</p> <pre><code>input_1 = b input_2 = f </code></pre> <p>I need something that prints the letters between the 2 chosen ones. So you would get</p> <pre><code>c d e </code></pre> <p>What I tried is using the for statement with an if statement but it didnt work out very well. I also need something that checks if input2 is further in the list than input1. I tried with</p> <pre><code>if input_2 &gt; input_1: print('yey') else: print('not recognized, automatically f') input_2 = list_1[-1] </code></pre> <p>sorry for the beginner question =(</p>
0
2016-09-08T20:29:35Z
39,399,823
<p>You can loop through the list and take out indices for each of the inputs then test for the conditions you want. Something like this:</p> <pre><code>list1 = ['a', 'b', 'c', 'd', 'f'] input_1 = 'b' input_2 = 'f' input_1_index = -1 input_2_index = -1 for i in range(len(list1)): if list1[i] == input_1: input_1_index = i elif list1[i] == input_2: input_2_index = i if input_1_index == -1 or input_2_index == -1 or input_1_index &gt; input_2_index: print('error') else: print(list1[(input_1_index+1):input_2_index]) </code></pre>
0
2016-09-08T20:39:02Z
[ "python", "list", "for-loop", "input" ]
How to get the value from a list + input
39,399,672
<p>So sorry for the vague title but this is my problem. (I just started this study)</p> <h2>For example</h2> <pre><code>list_1 = [a, b, c, d, e, f] input_1 = input('question1') input_2 = input('question2') </code></pre> <p>lets say they chose</p> <pre><code>input_1 = b input_2 = f </code></pre> <p>I need something that prints the letters between the 2 chosen ones. So you would get</p> <pre><code>c d e </code></pre> <p>What I tried is using the for statement with an if statement but it didnt work out very well. I also need something that checks if input2 is further in the list than input1. I tried with</p> <pre><code>if input_2 &gt; input_1: print('yey') else: print('not recognized, automatically f') input_2 = list_1[-1] </code></pre> <p>sorry for the beginner question =(</p>
0
2016-09-08T20:29:35Z
39,399,950
<p>This is my version of what you're trying to do. Make sure alphabet is the whole list of letters and that you have a way of checking that the user input is a letter from the alphabet.</p> <pre><code>alphabet = ['a', 'b', 'c', 'd'] input_1 = str(input("")) input_2 = str(input("")) Input1Pos = alphabet.index(input_1) Input2Pos = alphabet.index(input_2) if (Input2Pos &gt; Input1Pos): Input1Pos = Input1Pos + 1 print alphabet[Input1Pos:Input2Pos] elif (Input2Pos == Input1Pos): print "There are no characters between these two inputs" else: print "The second input was before the first input" </code></pre>
0
2016-09-08T20:47:22Z
[ "python", "list", "for-loop", "input" ]
Python list return 'None'
39,399,870
<p>Code:</p> <pre><code>puzzle1= [ [7,0,0,0,0,0,2,1,8], [0,4,8,6,2,9,0,0,0], [0,0,3,0,0,1,0,0,0], [0,0,7,0,0,8,0,3,2], [0,0,9,7,0,6,5,0,0], [6,8,0,1,0,0,7,0,0], [0,0,0,2,0,0,4,0,0], [0,0,0,4,1,5,8,7,0], [3,5,4,0,0,0,0,0,6] ] def eliminate_values(puzzle): redo = False for i in range(9): for j in range(9): if puzzle[i][j]==0 or isinstance(puzzle[i][j], list): puzzle[i][j] = [] for num in range(1,10): num_check = True; for x in range(9): if puzzle[i][x]==num: num_check = False if puzzle[x][j]==num: num_check = False if i&lt;3: aa=0 elif i&lt;6 and i&gt;2: aa=3 else: aa=6 if j&lt;3: bb=0 elif j&lt;6 and j&gt;2: bb=3 else: bb=6 for a in range(3): for b in range(3): if puzzle[a+aa][b+bb]==num: num_check = False if num_check: puzzle[i][j].append(num) if len(puzzle[i][j]) == 1: puzzle[i][j] = puzzle[i][j][0] redo = True; if redo: eliminate_values(puzzle) else: print(puzzle) return puzzle puzzle=eliminate_values(puzzle1) print(puzzle) </code></pre> <p>Console: </p> <pre><code>[[7, 9, 6, 3, 5, 4, 2, 1, 8], [1, 4, 8, 6, 2, 9, 3, 5, 7], [5, 2, 3, 8, 7, 1, 9, 6, 4], [4, 1, 7, 5, 9, 8, 6, 3, 2], [2, 3, 9, 7, 4, 6, 5, 8, 1], [6, 8, 5, 1, 3, 2, 7, 4, 9], [8, 7, 1, 2, 6, 3, 4, 9, 5], [9, 6, 2, 4, 1, 5, 8, 7, 3], [3, 5, 4, 9, 8, 7, 1, 2, 6]] None </code></pre> <p>Comments:</p> <p>I am new to python but i dont understand why print IS working within the function and NOT after it is returned to the main program. (expecting it to print twice but only prints once then 'None')</p>
0
2016-09-08T20:41:14Z
39,399,935
<p>If <code>redo</code> is <code>True</code>, you are recursively calling your function, and somewhere down the stack, once <code>redo</code> is <code>False</code>, you print and return the result. However, this result is not propagated up the call stack, thus the outermost function call will return nothing, i.e. <code>None</code>, which is then printed. For this, you also have to <code>return</code> the result of the recursive call:</p> <pre><code> if redo: return eliminate_values(puzzle) # try again and return result else: return puzzle # result found in this try, return it </code></pre> <p>Alternatively, instead of using recursion, you could wrap your function body in a <code>while</code> loop.</p>
1
2016-09-08T20:46:02Z
[ "python", "list", "return" ]
Python list return 'None'
39,399,870
<p>Code:</p> <pre><code>puzzle1= [ [7,0,0,0,0,0,2,1,8], [0,4,8,6,2,9,0,0,0], [0,0,3,0,0,1,0,0,0], [0,0,7,0,0,8,0,3,2], [0,0,9,7,0,6,5,0,0], [6,8,0,1,0,0,7,0,0], [0,0,0,2,0,0,4,0,0], [0,0,0,4,1,5,8,7,0], [3,5,4,0,0,0,0,0,6] ] def eliminate_values(puzzle): redo = False for i in range(9): for j in range(9): if puzzle[i][j]==0 or isinstance(puzzle[i][j], list): puzzle[i][j] = [] for num in range(1,10): num_check = True; for x in range(9): if puzzle[i][x]==num: num_check = False if puzzle[x][j]==num: num_check = False if i&lt;3: aa=0 elif i&lt;6 and i&gt;2: aa=3 else: aa=6 if j&lt;3: bb=0 elif j&lt;6 and j&gt;2: bb=3 else: bb=6 for a in range(3): for b in range(3): if puzzle[a+aa][b+bb]==num: num_check = False if num_check: puzzle[i][j].append(num) if len(puzzle[i][j]) == 1: puzzle[i][j] = puzzle[i][j][0] redo = True; if redo: eliminate_values(puzzle) else: print(puzzle) return puzzle puzzle=eliminate_values(puzzle1) print(puzzle) </code></pre> <p>Console: </p> <pre><code>[[7, 9, 6, 3, 5, 4, 2, 1, 8], [1, 4, 8, 6, 2, 9, 3, 5, 7], [5, 2, 3, 8, 7, 1, 9, 6, 4], [4, 1, 7, 5, 9, 8, 6, 3, 2], [2, 3, 9, 7, 4, 6, 5, 8, 1], [6, 8, 5, 1, 3, 2, 7, 4, 9], [8, 7, 1, 2, 6, 3, 4, 9, 5], [9, 6, 2, 4, 1, 5, 8, 7, 3], [3, 5, 4, 9, 8, 7, 1, 2, 6]] None </code></pre> <p>Comments:</p> <p>I am new to python but i dont understand why print IS working within the function and NOT after it is returned to the main program. (expecting it to print twice but only prints once then 'None')</p>
0
2016-09-08T20:41:14Z
39,400,081
<p>@tobias_k is right. </p> <p>In every recursive function you have a base case and a recursive case. The base case is when you reach the end of your recursion and return the final value from your recursive function. The recursive case is where the function calls itself again.</p> <p><strong>You need to be returning in both cases though.</strong></p> <p>If you don't, then even if you're eventually hitting your base case, the return value of the base case doesn't get passed up the stack.</p> <p>i.e.: </p> <pre><code>def recursiveDecrement(x): if x &gt; 0: print("Recursive case. x = %s" %x) recursiveDecrement(x - 1) print("I should have returned...x = %s" %x) else: print("Base case. x = %s" %x) return x </code></pre> <p>If I call <code>recursiveDecrement(5)</code> my output would be:</p> <pre><code>Recursive case. x = 5 Recursive case. x = 4 Recursive case. x = 3 Recursive case. x = 2 Recursive case. x = 1 Base case. x = 0 I should have returned...x = 1 I should have returned...x = 2 I should have returned...x = 3 I should have returned...x = 4 I should have returned...x = 5 </code></pre> <p>However, once the base case is hit, the method just continues to execute and at the end nothing is returned and x is still equal to 5.</p> <p>Change your if statement to return in both cases and everything should work.</p> <pre><code>if redo: return eliminate_values(puzzle) else: return puzzle </code></pre>
2
2016-09-08T20:57:16Z
[ "python", "list", "return" ]
How do I split an input string into separate usable integers in Python
39,399,962
<p>I'm trying to split an input string xyz into 3 tokens and then separate it into 3 integers called x, y, and z. I want it to do this so that I have to do less input and then be able to use them for the coordinates of <code>mc.setblocks(x1, y1, z1, x, y, z, BlockId)</code>. How do I separate it so that it turns out as 3 different ints and/or split it into tokens to do so? I know how I would do this in Java but I have no clue how to do it in Python. It should look something like this:</p> <pre><code>xyz1 = input("enter first coordinates example: 102 36 74") st = StringTokenizer(xyz1) x = st.nextToken y = st.nextToken z = st.nextToken </code></pre>
0
2016-09-08T20:48:03Z
39,400,016
<p>You can take the string that is in <code>xyz1</code> and split it and turn them into integers like this</p> <pre><code>xyz_list = [int(x) for x in xyz1.split(' ')] </code></pre> <p>If you don't want these integers in a list and would prefer to store them into separate variables, just do this</p> <pre><code>x = xyz_list[0] y = xyz_list[1] z = xyz_list[2] </code></pre>
1
2016-09-08T20:52:15Z
[ "python", "split", "coordinates", "tokenize", "minecraft" ]
How do I split an input string into separate usable integers in Python
39,399,962
<p>I'm trying to split an input string xyz into 3 tokens and then separate it into 3 integers called x, y, and z. I want it to do this so that I have to do less input and then be able to use them for the coordinates of <code>mc.setblocks(x1, y1, z1, x, y, z, BlockId)</code>. How do I separate it so that it turns out as 3 different ints and/or split it into tokens to do so? I know how I would do this in Java but I have no clue how to do it in Python. It should look something like this:</p> <pre><code>xyz1 = input("enter first coordinates example: 102 36 74") st = StringTokenizer(xyz1) x = st.nextToken y = st.nextToken z = st.nextToken </code></pre>
0
2016-09-08T20:48:03Z
39,400,039
<p>You can use the <code>split()</code> method of the <code>string</code> object, which defaults to splitting on whitespace characters. This will give you a list of separate strings. To convert each string to an <code>integer</code>, you could use a comprehension. Assuming the input is in correct form, the following one-liner will do it:</p> <pre><code>x, y, z = ( int(coord) for coord in xyz1.split() ) </code></pre>
1
2016-09-08T20:54:17Z
[ "python", "split", "coordinates", "tokenize", "minecraft" ]
How do I split an input string into separate usable integers in Python
39,399,962
<p>I'm trying to split an input string xyz into 3 tokens and then separate it into 3 integers called x, y, and z. I want it to do this so that I have to do less input and then be able to use them for the coordinates of <code>mc.setblocks(x1, y1, z1, x, y, z, BlockId)</code>. How do I separate it so that it turns out as 3 different ints and/or split it into tokens to do so? I know how I would do this in Java but I have no clue how to do it in Python. It should look something like this:</p> <pre><code>xyz1 = input("enter first coordinates example: 102 36 74") st = StringTokenizer(xyz1) x = st.nextToken y = st.nextToken z = st.nextToken </code></pre>
0
2016-09-08T20:48:03Z
39,400,221
<p>I tried writing this less "pythonically" so you can see what's happening:</p> <pre><code>xyz1 = input("Enter first 3 coordinates (example: 102 36 74): ") tokens = xyz1.split() # returns a list (ArrayList) of tokenized values try: x = int(tokens[0]) # sets x,y,z to the first three tokens y = int(tokens[1]) z = int(tokens[2]) except IndexError: # if 0,1,2 are out of bounds print("You must enter 3 values") except ValueError: # if the entered number was of invalid int format print("Invalid integer format") </code></pre> <p>If you have more than just three coordinates being entered, you would tokenize the input and loop over it, converting each token into an int and appending it to a list:</p> <pre><code>xyz1 = input("Enter first 3 coordinates (example: 102 36 74): ") tokens = xyz1.split() # returns a list (ArrayList) of tokenized values coords = [] # initialise empty list for tkn in tokens: try: coords.append(int(tkn)) # convert token to int except ValueError: # if the entered number was of invalid int format print("Invalid integer format: {}".format(tkn)) raise print(coords) # coords is now a list of integers that were entered </code></pre> <p>Fun fact, you can do the above in a mostly one-liner. This is a more pythonic way so you can contrast this with the above to get a sense of what that means:</p> <pre><code>try: xyz1 = input("Enter first 3 coordinates (example: 102 36 74): ") coords = [int(tkn) for tkn in xyz1.split()] except ValueError: # if the entered number was of invalid int format print("Invalid integer format: {}".format(tkn)) raise </code></pre>
0
2016-09-08T21:07:26Z
[ "python", "split", "coordinates", "tokenize", "minecraft" ]
Kivy- kv language VKeyboard
39,400,003
<p>I'd like to add a VKeyboard widget to my app written using Kivy 1.9.0. I'm using Python 2.7.12. Is there a way to add this widget to the application via the kv language? While trying the method below I get the error 'ValueError: No JSON object could be decoded'.</p> <pre><code> Button: background_color:1,0,0,0.5 text:'Next word' size_hint:.5,.2 font_size:25 pos_hint:{'center_x':.5} on_press:root.word_dict() VKeyboard: layout:'layout.json' </code></pre> <p><strong>layout.json</strong></p> <pre><code> { "title":"KeyboardPinyin", "description":"Keyboard using for writing pinyin characters", "cols":5, "rows":3, "normal_1":[ ["ā","ā","ā",1], ["ē","ē","ē",1], ["ī","ī","ī",1], ["ō","ō","ō",1], ["ū","ū","ū",1] ], "normal_2": [ ["á","á","á",1], ["é","é","é",1], ["í","í","í",1], ["ó","ó","ó",1], ["ú","ú","ú",1] ], "normal_3": [ ["ǎ","ǎ","ǎ",1], ["ě","ě","ě",1], ["ǐ","ǐ","ǐ",1], ["ǒ","ǒ","ǒ",1], ["ǔ","ǔ","ǔ",1] ], "normal_4": [ ["à","à","à",1], ["è","è","è",1], ["ì","ì","ì",1], ["ò","ò","ò",1], ["ù","ù","ù",1] ] } </code></pre>
1
2016-09-08T20:51:17Z
39,400,464
<pre><code>Traceback (most recent call last): File "C:/Users/joran/.PyCharm50/config/scratches/scratch_33", line 3, in &lt;module&gt; data = json.load(open(os.path.expanduser("~/layout1.json"))) File "C:\Python27\lib\json\__init__.py", line 290, in load **kw) File "C:\Python27\lib\json\__init__.py", line 338, in loads return _default_decoder.decode(s) File "C:\Python27\lib\json\decoder.py", line 366, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "C:\Python27\lib\json\decoder.py", line 382, in raw_decode obj, end = self.scan_once(s, idx) UnicodeDecodeError: 'utf8' codec can't decode byte 0xe1 in position 0: unexpected end of data </code></pre> <p>You cannot load it as-is: json tries to decode the input using utf8 ... and you have non-utf8 characters in your json file.</p> <p>You should instead use the unicode representation <code>"\u00e0"</code> etc. (or you must somehow specify the character encoding of the json ... I am not sure how you might do that offhand).</p>
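A small sketch of the suggested workaround: if the layout file uses \uXXXX escapes it stays pure ASCII, so json can decode it regardless of the file's byte encoding. Only two of the pinyin characters are shown here; the rest would be converted the same way:

```python
import json

fragment = '''
{
  "normal_4": [
    ["\\u00e0", "\\u00e0", "\\u00e0", 1],
    ["\\u00e8", "\\u00e8", "\\u00e8", 1]
  ]
}
'''

data = json.loads(fragment)
print(repr(data["normal_4"][0][0]))   # the decoded 'a with grave accent' character
```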
0
2016-09-08T21:26:17Z
[ "python", "widget", "kivy", "virtual-keyboard" ]
Python Pandas Group by date using datetime data
39,400,115
<p>I have a column <code>Date_Time</code> that I wish to group by date without creating a new column. Is this possible? The current code I have does not work.</p> <pre><code>df = pd.groupby(df,by=[df['Date_Time'].date()]) </code></pre>
4
2016-09-08T20:59:53Z
39,400,136
<p>You can use <code>groupby</code> by dates of column <code>Date_Time</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.date.html" rel="nofollow"><code>dt.date</code></a>:</p> <pre><code>df = df.groupby([df['Date_Time'].dt.date]).mean() </code></pre> <p>Sample:</p> <pre><code>df = pd.DataFrame({'Date_Time': pd.date_range('10/1/2001 10:00:00', periods=3, freq='10H'), 'B':[4,5,6]}) print (df) B Date_Time 0 4 2001-10-01 10:00:00 1 5 2001-10-01 20:00:00 2 6 2001-10-02 06:00:00 print (df['Date_Time'].dt.date) 0 2001-10-01 1 2001-10-01 2 2001-10-02 Name: Date_Time, dtype: object df = df.groupby([df['Date_Time'].dt.date])['B'].mean() print(df) Date_Time 2001-10-01 4.5 2001-10-02 6.0 Name: B, dtype: float64 </code></pre> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html" rel="nofollow"><code>resample</code></a>:</p> <pre><code>df = df.set_index('Date_Time').resample('D')['B'].mean() print(df) Date_Time 2001-10-01 4.5 2001-10-02 6.0 Freq: D, Name: B, dtype: float64 </code></pre>
4
2016-09-08T21:01:12Z
[ "python", "date", "datetime", "pandas", "group-by" ]
Python Pandas Group by date using datetime data
39,400,115
<p>I have a column <code>Date_Time</code> that I wish to group by date without creating a new column. Is this possible? The current code I have does not work.</p> <pre><code>df = pd.groupby(df,by=[df['Date_Time'].date()]) </code></pre>
4
2016-09-08T20:59:53Z
39,400,375
<p>Credit to @jezrael for his setup dataframe</p> <hr> <p>You can set the index to be <code>'Date_Time'</code> and use <code>pd.TimeGrouper</code></p> <pre><code>df.set_index('Date_Time').groupby(pd.TimeGrouper('D')).mean().dropna() </code></pre> <p><a href="http://i.stack.imgur.com/JpCfM.png" rel="nofollow"><img src="http://i.stack.imgur.com/JpCfM.png" alt="enter image description here"></a></p>
4
2016-09-08T21:19:02Z
[ "python", "date", "datetime", "pandas", "group-by" ]
Error when saving multiple figures to one multi-page PDF document
39,400,135
<p>I'm trying to save several figures to one multi-page PDF document. My code is as follows:</p> <pre><code>import matplotlib.backends.backend_pdf pdf = matplotlib.backends.backend_pdf.PdfPages('output.pdf') sns.set_style('darkgrid') g = sns.factorplot(data=df, x='Date', y='Product_Count', col='Company', col_wrap=4, sharey=False) g.set_xlabels('') g.set_ylabels('product count') g.set_xticklabels(rotation=45) plt.locator_params(axis = 'x', nbins = 8) f = sns.factorplot(data=df, x='Date', y='Volume_Count', col='Company', col_wrap=4, sharey=False) f.set_xlabels('') f.set_ylabels('volume count') f.set_xticklabels(rotation=45) plt.locator_params(axis = 'x', nbins = 8) figures = [g, f] for figure in figures: pdf.savefig(figure) pdf.close() </code></pre> <p>I'm seeing this error message: </p> <pre><code>ValueError: No such figure: &lt;seaborn.axisgrid.FacetGrid object at 0x237CD5F0&gt; </code></pre> <p>Is there something wrong with the iteration?</p>
1
2016-09-08T21:01:07Z
39,408,904
<p><code>g</code> and <code>f</code> are not <a href="http://matplotlib.org/api/figure_api.html#matplotlib.figure.Figure" rel="nofollow"><code>matplotlib.figure.Figure</code></a> objects, they are <a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.FacetGrid.html" rel="nofollow"><code>seaborn.axisgrid.FacetGrid</code></a> objects (as mentioned by <a href="http://stackoverflow.com/questions/39400135/error-when-saving-multiple-figures-to-a-one-multi-page-pdf-document-python-sea/39408904#comment66127098_39400135">@tcaswell in comments</a>).</p> <p><code>PdfPages</code> needs the <code>Figure</code> instances, and luckily they are easy to extract from the <code>FacetGrid</code> objects, using <code>g.fig</code> and <code>f.fig</code>.</p> <p>So, all you need to do is change one line, from </p> <pre><code>figures = [g, f] </code></pre> <p>to:</p> <pre><code>figures = [g.fig, f.fig] </code></pre>
3
2016-09-09T10:00:13Z
[ "python", "matplotlib", "seaborn", "pdfpages" ]
Sending data from Autodesk Maya to an external application using commandPort (python scripting)
39,400,302
<p>I have implemented a C# application which communicates with Autodesk Maya using a TCP connection. Maya acts as the server and my application acts as the client. </p> <p>The Python script that is executed in Maya is:</p> <pre><code>import socket import maya.cmds as cmds flag = None cmds.commandPort(name = "localhost:7777", stp = "python") def start(): global flag flag = True def stop(): global flag flag = False def close(): cmds.commandPort(name = "localhost:7777", close = True) windowZ = cmds.window(title="Object Navigate", w= 350) cmds.columnLayout(adjustableColumn = True) startbtn= cmds.button(label = "Start", c = "start()") stopbtn= cmds.button(label = "Stop", c = "stop()") closebtn= cmds.button(label = "close", c = "close()") cmds.showWindow(windowZ) </code></pre> <p>I have written a TCPClient C# application (which runs perfectly fine). The data that the application sends looks like this:</p> <pre><code>Connection.sendData(String.Format("if flag:\n" + "\tcmds.dolly(10,20,30)")); </code></pre> <p>The problem with this statement is that the flag variable declared in the Python script is not recognized here. When I just say cmds.dolly(10,20,30) the command gets executed perfectly.</p> <p>Now, my question is how do I make the flag variable visible to my C# application OR is there a way to send the value of flag from Maya to the C# application through the commandPort?</p> <p>Any ideas would be appreciated! </p>
0
2016-09-08T21:13:33Z
39,407,511
<p>I changed some parts on Maya's side:</p> <pre><code>import socket import maya.cmds as cmds from functools import partial FLAG = None class Hermes(object): def __init__(self, **kwargs): self.host_id = kwargs['host_id'] cmds.commandPort(name = "localhost:{0}".format(self.host_id), stp = "python") def delete_win(self, **kwargs): if cmds.window(kwargs["win_id"], q=True, exists=True): cmds.deleteUI(kwargs["win_id"], window=True) def submitter(self, **kwargs): mod = globals()['Hermes']() meth = getattr(mod , kwargs['method']) #your flag FLAG = meth() def start(self, flag = True, *args): return flag def stop(self, flag = False, *args): return flag def close(self, *args): cmds.commandPort(name = "localhost:{0}".format(self.host_id), close = True) def hermes_ui(self, *args): delete_win(win_id = "windowZ") toll_ui = {} toll_ui["windowZ"] = cmds.window("windowZ", title="Object Navigate", w= 350) toll_ui["main_pan"] = cmds.paneLayout( configuration='quad', parent = toll_ui["windowZ"]) toll_ui["start_btn"]= cmds.button(label = "Start", parent = toll_ui["main_pan"], c = partial(self.submitter, method = "start")) toll_ui["stop_btn"]= cmds.button(label = "Stop", parent = toll_ui["main_pan"],c = partial(self.submitter, method = "stop")) toll_ui["close_btn"]= cmds.button(label = "close", parent = toll_ui["main_pan"],c = partial(self.submitter,method = "close")) cmds.showWindow(toll_ui["windowZ"]) def run(*args): hrm = Hermes(host_id = '7777') hrm.hermes_ui() </code></pre> <p>You don't really need the '\n'; the separating is done by semicolons: <code>Connection.sendData(String.Format("if FLAG: cmds.dolly(10,20,30) else cmds.warning('press Start')"));</code> You can also create a node, add a custom attr, and write the flag on start or close; then you read the flag as an attr (you can change that really easily without any UI), like:</p> <pre><code>Connection.sendData(String.Format("if cmds.getAttr('{0}.type'.format('hermes_node')): cmds.dolly(10,20,30)")); </code></pre>
0
2016-09-09T08:49:54Z
[ "c#", "python", "sockets", "tcpclient", "maya" ]
Is making a python wrapper for the Azure-PowerShell SDK a stupid idea?
39,400,319
<p>My opinion of the Azure-Python SDK is not high for Azure RM. What takes 1 line in PowerShell takes 10 in Python. That is the opposite of what python is supposed to do.</p> <p>So, my idea is to create python package which comes with a directory containing a few template .ps1 scripts. You would define a few variables like vmname, resourcegroup, location, etc... inject these into the .ps1 templates, then call commands from the REPL. </p> <p>Right now I'm having trouble using the subprocess module to keep PS open until I tell it to close. As it stands now, I need to include</p> <pre><code>login-azurerm </code></pre> <p>and authenticate before running <strong>any</strong> command. This won't do. I'd like to fix this, but frankly right now I'm wondering whether or not the premise is a good idea to start with.</p> <p>Any input is much appreciated!</p>
0
2016-09-08T21:15:16Z
39,467,065
<p>@RobTruxal, it seems that a feasible way to call PowerShell from Python is using the module <code>subprocess</code>, with the code below as a reference.</p> <pre><code>import subprocess subprocess.call(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", "your-script.ps1", "arguments"]) </code></pre> <p>You need to write your PowerShell script so that it includes the <code>login-azurerm</code> command, and handle its interaction with the cmd window that runs the Python script.</p> <p>Or you can choose the cross-platform Azure CLI in node.js as the command-line tool for managing Azure resources.</p> <p>Hope it helps.</p>
0
2016-09-13T09:51:48Z
[ "python", "powershell", "azure" ]
Is making a python wrapper for the Azure-PowerShell SDK a stupid idea?
39,400,319
<p>My opinion of the Azure-Python SDK is not high for Azure RM. What takes 1 line in PowerShell takes 10 in Python. That is the opposite of what python is supposed to do.</p> <p>So, my idea is to create python package which comes with a directory containing a few template .ps1 scripts. You would define a few variables like vmname, resourcegroup, location, etc... inject these into the .ps1 templates, then call commands from the REPL. </p> <p>Right now I'm having trouble using the subprocess module to keep PS open until I tell it to close. As it stands now, I need to include</p> <pre><code>login-azurerm </code></pre> <p>and authenticate before running <strong>any</strong> command. This won't do. I'd like to fix this, but frankly right now I'm wondering whether or not the premise is a good idea to start with.</p> <p>Any input is much appreciated!</p>
0
2016-09-08T21:15:16Z
39,477,085
<p>@RobTruxal, the new CLI for Azure will be in Python and will be released as a preview soon. You can already try it from the github account: <a href="https://github.com/Azure/azure-cli" rel="nofollow">https://github.com/Azure/azure-cli</a></p> <p>The Azure SDK for Python is not supposed to mimic the Powershell cmdlets, but to be a language SDK (like C#, Java, Ruby, etc.).</p> <p>If you have any suggestion/comments about the Python SDK itself, please do not hesitate to fill an issue on the issue tracker: <a href="https://github.com/Azure/azure-sdk-for-python/issues" rel="nofollow">https://github.com/Azure/azure-sdk-for-python/issues</a></p> <p>(FYI, I'm the owner of the Python SDK repo at MS)</p>
1
2016-09-13T18:51:56Z
[ "python", "powershell", "azure" ]
Comparing two DataFrames by one column and returning three different outputs with Pandas
39,400,332
<p>I am a beginner in Python and coding. I need help comparing two dataframes of different lengths and with different column labels except one. The column that is the same between the two datasets is the column I want to compare the dataframes by. My data looks like this:</p> <pre><code> df: 'fruits' 'trees' 'sports' 'countries' bananas mongolia basketball Spain grapes Oak rugby Thailand oranges Osage Orange baseball Egypt apples Maple golf Chile df2: 'cars' 'flowers' 'countries' 'vegetables' Audi Rose Spain Carrots BMW Tulip Nigeria Celery Honda Dandelion Egypt Onion </code></pre> <p>I would like to compare these two dataframes based on the column 'countries' and create three separate outputs, each in their own dataframe. I have been using Pandas and have used pd.concat to combine df1 and df2 into one. I would also like to keep the rows of the rest of the dataframe even though they don't match. </p> <p>Here are my desired outputs:</p> <p>Output# 1: Values in df NOT in df2:</p> <pre><code> d3: 'fruits' 'trees' 'sports' 'countries' grapes Oak rugby Thailand apples Maple golf Chile </code></pre> <p>Output# 2: Values in df2 NOT in df</p> <pre><code> df4: 'cars' 'flowers' 'countries' 'vegetables' BMW Tulip Nigeria Celery </code></pre> <p>Output# 3: Values in both df AND df2 (with the columns from the different dataframes combined.) </p> <pre><code>df5: 'fruits' 'trees' 'sports' 'cars' 'flowers' 'countries' 'vegetables' bananas mongolia basketball Audi Rose Spain Carrots Oranges Osage Orange baseball Honda Dandelion Egypt Onion </code></pre> <p>Hope this all makes sense. I have tried so many different things (isin, DataFrame.diff and .difference, df-df2, numpy arrays, etc.). I have looked all over and I can't find exactly what I'm looking for. Any help would be greatly appreciated! Thank you! </p>
6
2016-09-08T21:16:03Z
39,400,455
<p><strong><em>Setup Reference</em></strong></p> <pre><code>from StringIO import StringIO import pandas as pd txt1 = """fruits,trees,sports,countries bananas,mongolia,basketball,Spain grapes,Oak,rugby,Thailand oranges,Osage,Orange baseball,Egypt apples,Maple,golf,Chile""" txt2 = """cars,flowers,countries,vegetables Audi,Rose,Spain,Carrots BMW,Tulip,Nigeria,Celery Honda,Dandelion,Egypt,Onion""" df = pd.read_csv(StringIO(txt1)) df2 = pd.read_csv(StringIO(txt2)) </code></pre> <hr> <h3>Solution</h3> <pre><code>def outer_parts(df1, df2): df3 = df1.merge(df2, indicator=True, how='outer') return {n: g.drop('_merge', 1) for n, g in df3.groupby('_merge')} dfs = outer_parts(df, df2) </code></pre> <hr> <h3>Demonstration</h3> <pre><code>dfs['both'] </code></pre> <p><a href="http://i.stack.imgur.com/qsDiA.png" rel="nofollow"><img src="http://i.stack.imgur.com/qsDiA.png" alt="enter image description here"></a></p> <pre><code>dfs['left_only'] </code></pre> <p><a href="http://i.stack.imgur.com/Z7ccG.png" rel="nofollow"><img src="http://i.stack.imgur.com/Z7ccG.png" alt="enter image description here"></a></p> <pre><code>dfs['right_only'] </code></pre> <p><a href="http://i.stack.imgur.com/4Lle9.png" rel="nofollow"><img src="http://i.stack.imgur.com/4Lle9.png" alt="enter image description here"></a></p>
4
2016-09-08T21:25:39Z
[ "python", "pandas", "numpy", "dataframe" ]
Using a list comprehension to label data that is common to two lists
39,400,336
<p>I have two lists, A and B. I want to generate a third list that is 1 if the corresponding entry in A has an entry in the list B at the end of the string and 0 otherwise.</p> <pre><code>A = ['Mary Sue', 'John Doe', 'Alice Stella', 'James May', 'Susie May'] B = ['Smith', 'Stirling', 'Doe'] </code></pre> <p>I want a list comprehension that will give the result</p> <pre><code>[0, 1, 0, 0, 0] </code></pre> <p>Keep in mind that this is a specific case of a more general problem. Elements in A can have arbitrary white space and contain an arbitrary number of words in them. Likewise elements in B can have an arbitrary number of words. For example</p> <pre><code>A = [' Tom Barry Stirling Adam', 'Maddox Smith', 'George Washington Howard Smith'] B = ['Washington Howard Smith', 'Stirling Adam'] </code></pre> <p>should return</p> <pre><code>[1, 0, 1] </code></pre> <p>So far I have the following</p> <pre><code>[1 if y.endswith(x) else 0 for x in B for y in A] </code></pre> <p>However the length of the returned list is not the dimension that I want because it gives a 0 or 1 for every combination of A[i], B[j] elements. I am not interested in solutions using for loops, I need a list comprehension for speed.</p>
4
2016-09-08T21:16:18Z
39,400,405
<pre><code>&gt;&gt;&gt; [[0, 1][name.split()[-1] in set(B)] for name in A] [0, 1, 0, 0, 0] </code></pre> <p>edit: For finer control over the check.</p> <p><a href="https://docs.python.org/2/library/stdtypes.html#str.split" rel="nofollow"><strong><code>str.split</code></strong></a> can take a parameter which is the maximum splits to make. e.g:</p> <pre><code>&gt;&gt;&gt; 'Mary Pat Sue'.split(maxsplit=1) ['Mary', 'Pat Sue'] </code></pre> <p>Then we could compare to see if any name in <code>B</code> is in the second value of split:</p> <pre><code>&gt;&gt;&gt; B = ['Pat', 'Sue'] &gt;&gt;&gt; any(name in 'Pat Sue' for name in B) True </code></pre> <p>So altogether:</p> <pre><code>&gt;&gt;&gt; [[0, 1][any(surname in fullname.split(maxsplit=1)[-1] for surname in B)] for fullname in A] </code></pre>
0
2016-09-08T21:21:41Z
[ "python", "python-3.x", "list-comprehension" ]
Using a list comprehension to label data that is common to two lists
39,400,336
<p>I have two lists, A and B. I want to generate a third list that is 1 if the corresponding entry in A has an entry in the list B at the end of the string and 0 otherwise.</p> <pre><code>A = ['Mary Sue', 'John Doe', 'Alice Stella', 'James May', 'Susie May'] B = ['Smith', 'Stirling', 'Doe'] </code></pre> <p>I want a list comprehension that will give the result</p> <pre><code>[0, 1, 0, 0, 0] </code></pre> <p>Keep in mind that this is a specific case of a more general problem. Elements in A can have arbitrary white space and contain an arbitrary number of words in them. Likewise elements in B can have an arbitrary number of words. For example</p> <pre><code>A = [' Tom Barry Stirling Adam', 'Maddox Smith', 'George Washington Howard Smith'] B = ['Washington Howard Smith', 'Stirling Adam'] </code></pre> <p>should return</p> <pre><code>[1, 0, 1] </code></pre> <p>So far I have the following</p> <pre><code>[1 if y.endswith(x) else 0 for x in B for y in A] </code></pre> <p>However the length of the returned list is not the dimension that I want because it gives a 0 or 1 for every combination of A[i], B[j] elements. I am not interested in solutions using for loops, I need a list comprehension for speed.</p>
4
2016-09-08T21:16:18Z
39,400,411
<p>Your condition needs to check against the whole <code>B</code> list. Your proposed solution will generate a <code>0</code> or <code>1</code> for every pair of (A, B) elements.</p> <pre><code>[1 if any(full.endswith(last) for last in B) else 0 for full in A] </code></pre> <p>But you can also take advantage of <code>bool</code> to <code>int</code> conversion:</p> <pre><code>[int(any(full.endswith(last) for last in B)) for full in A] </code></pre> <p>You can save some time by using a <code>set</code> and the <code>in</code> operator as well:</p> <pre><code>B = {'Smith', 'Stirling', 'Doe'} # set for a more efficient `in` [int(full.split()[-1] in B) for full in A] </code></pre>
2
2016-09-08T21:22:03Z
[ "python", "python-3.x", "list-comprehension" ]
Using a list comprehension to label data that is common to two lists
39,400,336
<p>I have two lists, A and B. I want to generate a third list that is 1 if the corresponding entry in A has an entry in the list B at the end of the string and 0 otherwise.</p> <pre><code>A = ['Mary Sue', 'John Doe', 'Alice Stella', 'James May', 'Susie May'] B = ['Smith', 'Stirling', 'Doe'] </code></pre> <p>I want a list comprehension that will give the result</p> <pre><code>[0, 1, 0, 0, 0] </code></pre> <p>Keep in mind that this is a specific case of a more general problem. Elements in A can have arbitrary white space and contain an arbitrary number of words in them. Likewise elements in B can have an arbitrary number of words. For example</p> <pre><code>A = [' Tom Barry Stirling Adam', 'Maddox Smith', 'George Washington Howard Smith'] B = ['Washington Howard Smith', 'Stirling Adam'] </code></pre> <p>should return</p> <pre><code>[1, 0, 1] </code></pre> <p>So far I have the following</p> <pre><code>[1 if y.endswith(x) else 0 for x in B for y in A] </code></pre> <p>However the length of the returned list is not the dimension that I want because it gives a 0 or 1 for every combination of A[i], B[j] elements. I am not interested in solutions using for loops, I need a list comprehension for speed.</p>
4
2016-09-08T21:16:18Z
39,400,421
<p><code>[1 if a.split(' ')[1] in B else 0 for a in A]</code></p>
0
2016-09-08T21:22:53Z
[ "python", "python-3.x", "list-comprehension" ]
Using a list comprehension to label data that is common to two lists
39,400,336
<p>I have two lists, A and B. I want to generate a third list that is 1 if the corresponding entry in A has an entry in the list B at the end of the string and 0 otherwise.</p> <pre><code>A = ['Mary Sue', 'John Doe', 'Alice Stella', 'James May', 'Susie May'] B = ['Smith', 'Stirling', 'Doe'] </code></pre> <p>I want a list comprehension that will give the result</p> <pre><code>[0, 1, 0, 0, 0] </code></pre> <p>Keep in mind that this is a specific case of a more general problem. Elements in A can have arbitrary white space and contain an arbitrary number of words in them. Likewise elements in B can have an arbitrary number of words. For example</p> <pre><code>A = [' Tom Barry Stirling Adam', 'Maddox Smith', 'George Washington Howard Smith'] B = ['Washington Howard Smith', 'Stirling Adam'] </code></pre> <p>should return</p> <pre><code>[1, 0, 1] </code></pre> <p>So far I have the following</p> <pre><code>[1 if y.endswith(x) else 0 for x in B for y in A] </code></pre> <p>However the length of the returned list is not the dimension that I want because it gives a 0 or 1 for every combination of A[i], B[j] elements. I am not interested in solutions using for loops, I need a list comprehension for speed.</p>
4
2016-09-08T21:16:18Z
39,400,970
<p>How big is B in real life? You could turn it into a regular expression.</p> <pre><code>A = ['Mary Sue', 'John Doe', 'Alice Stella', 'James May', 'Susie May'] B = ['Smith', 'Stirling', 'Doe'] </code></pre> <p>Turn B into <code>".*(?:Smith|Stirling|Doe)$"</code>, then compile it to a regex:</p> <pre><code>import re ends_with_b = re.compile(".*(?:{})$".format("|".join(B))) a_matches = [1 if ends_with_b.match(a) else 0 for a in A] </code></pre> <p>Or, create your own filter function:</p> <pre><code>def my_filter(a): return 1 if any(a.endswith(b) for b in B) else 0 a_matches = [my_filter(a) for a in A] </code></pre>
0
2016-09-08T22:14:13Z
[ "python", "python-3.x", "list-comprehension" ]
Using a list comprehension to label data that is common to two lists
39,400,336
<p>I have two lists, A and B. I want to generate a third list that is 1 if the corresponding entry in A has an entry in the list B at the end of the string and 0 otherwise.</p> <pre><code>A = ['Mary Sue', 'John Doe', 'Alice Stella', 'James May', 'Susie May'] B = ['Smith', 'Stirling', 'Doe'] </code></pre> <p>I want a list comprehension that will give the result</p> <pre><code>[0, 1, 0, 0, 0] </code></pre> <p>Keep in mind that this is a specific case of a more general problem. Elements in A can have arbitrary white space and contain an arbitrary number of words in them. Likewise elements in B can have an arbitrary number of words. For example</p> <pre><code>A = [' Tom Barry Stirling Adam', 'Maddox Smith', 'George Washington Howard Smith'] B = ['Washington Howard Smith', 'Stirling Adam'] </code></pre> <p>should return</p> <pre><code>[1, 0, 1] </code></pre> <p>So far I have the following</p> <pre><code>[1 if y.endswith(x) else 0 for x in B for y in A] </code></pre> <p>However the length of the returned list is not the dimension that I want because it gives a 0 or 1 for every combination of A[i], B[j] elements. I am not interested in solutions using for loops, I need a list comprehension for speed.</p>
4
2016-09-08T21:16:18Z
39,401,250
<p>A much faster way is to pass a tuple to <em>endswith</em>:</p>

<pre><code>In [8]: A = ['Mary Sue', 'John Doe', 'Alice Stella', 'James May', 'Susie May']

In [9]: B = ['Smith', 'Stirling', 'Doe']

In [10]: A *= 1000

In [11]: %%timeit t = tuple(B)
[int(s.endswith(t)) for s in A]
   ....:
100 loops, best of 3: 5.02 ms per loop

In [12]: timeit [int(any(full.endswith(last) for last in B)) for full in A]
100 loops, best of 3: 21.3 ms per loop
</code></pre>

<p>You make <em>one function call</em> per element in <code>A</code>, as opposed to one call for potentially every element of B for each element of <code>A</code>, and without the overhead of the generator used with <em>any</em>.</p>

<p>You can see, using a larger set of words, just how much faster it is, especially if the matches are sparse:</p>

<pre><code>In [2]: from random import sample

In [6]: A = [s.strip() for s in open("/usr/share/dict/american-english")][:20000]

In [7]: B = sample([s.strip() for s in open("/usr/share/dict/british-english")], 2000)

In [8]: %%timeit t = tuple(B)
[int(s.endswith(t)) for s in A]
   ...:
1 loop, best of 3: 2.16 s per loop

In [9]: timeit [int(any(full.endswith(last) for last in B)) for full in A]
1 loop, best of 3: 26.6 s per loop
</code></pre>

<p>You said you don't want loops, but as the lists grow sorting might be a better option: reverse the strings, sort the reversed B entries, then bisect to find a matching suffix with a log n search:</p>

<pre><code>from bisect import bisect_right

def compress(l1, l2):
    # assumes no entry in l2 is itself a suffix of another entry in l2
    srt1 = sorted(s[::-1] for s in l2)
    hi = len(l2)
    for ele in l1:
        rev = ele[::-1]
        ind = bisect_right(srt1, rev, hi=hi)
        # a matching reversed suffix, if any, sorts immediately before rev
        yield int(ind &gt; 0 and rev.startswith(srt1[ind - 1]))

print(list(compress(A, B)))
</code></pre>

<p>The runtime is O(N log N) as opposed to the quadratic approach checking every substring.</p>
2
2016-09-08T22:44:53Z
[ "python", "python-3.x", "list-comprehension" ]
Use dataframe column as arguments in function - iPython
39,400,348
<p>I am new with python and any help would be appreciated!</p> <p>I have a dataframe with one column with coordinates:</p> <pre><code>gridReference (190000, 200000) (560000, 250000) (560000, 250000) (560000, 250000) (560000, 250000) (560000, 250000) (320000, 80000) </code></pre> <p>I also have a function which converts the coordinates to lat long positions <code>toConvert(E,N)</code> - I am trying to work out how to iterate through the gridReference column and input the values into the toConvert function as arguments then produce the new Lat-Lon coordinates in another dataframe.</p> <p>Hope that makes sense - Thanks in advance!</p>
0
2016-09-08T21:17:12Z
39,424,049
<p>To answer your question directly, you can do</p>

<pre><code>f = lambda x: toConvert(x[0], x[1])
df['gridReference'].map(f)
</code></pre>

<p>But actually, why do you need a Pandas DataFrame for that? You can just do the same with a list of tuples or a tuple of tuples and performance should be the same. I believe a column made of tuples is of type object and is not handled by Numpy. Moreover, your function is not vectorized. You could probably vectorize it using numba (see <a href="http://pandas.pydata.org/pandas-docs/stable/enhancingperf.html#vectorize" rel="nofollow">link</a>).</p>

<p>So you can achieve the same without pandas by using the <a href="https://docs.python.org/2/library/functions.html#map" rel="nofollow">built-in function map</a> like so:</p>

<pre><code>map(f, df['gridReference'].values)
</code></pre>
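<p>For a quick end-to-end check you can plug in a stand-in for <code>toConvert</code> (purely illustrative; your real easting/northing conversion goes here):</p>

<pre><code>def toConvert(E, N):
    # stand-in conversion, replace with your actual E/N -&gt; lat/lon logic
    return (N / 100000.0, E / 100000.0)

f = lambda x: toConvert(x[0], x[1])
latlon = df['gridReference'].map(f)   # Series of (lat, lon) tuples
</code></pre>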
0
2016-09-10T08:12:38Z
[ "python", "pandas", "dataframe" ]
pip install returns code 1 error with saga_python. Any ideas?
39,400,423
<p>I am trying to install saga-python (package for SAGA GIS) and cmd python keeps returning the same error: python setup.py egg_info failed with code 1 in C:\Users\MyUser\AppData\Local\Temp\pip-build-7uieglh9\saga-python. Any ideas why it occurs?</p>

<p>Also, I tried a few tips from this question's answers: <a href="http://stackoverflow.com/questions/35991403/python-pip-install-gives-command-python-setup-py-egg-info-failed-with-error-c-">Python pip install gives &quot;Command &quot;python setup.py egg_info&quot; failed with error code 1&quot;</a>, but they are not working either.</p>

<p>The version is 3.5.2 and setuptools and ez_setup are all right. Tried with easy_install as well. No results either; it says that the syntax is invalid.</p>

<p>Also tried with virtualenv and <a href="http://docs.python-guide.org/en/latest/dev/virtualenvs/" rel="nofollow">http://docs.python-guide.org/en/latest/dev/virtualenvs/</a>. The error is caused by two different Python versions: 2.7 and 3.5.2. I cannot uninstall 2.7 (shell for some GIS software), but I need to make it work somehow.</p>
0
2016-09-08T21:23:00Z
39,400,480
<p>This package is only for Python 2.x. See this <a href="https://github.com/radical-cybertools/saga-python/issues/399" rel="nofollow">issue</a>.</p>
1
2016-09-08T21:27:27Z
[ "python", "pip", "python-3.5.2" ]
Python does not start for loop
39,400,466
<pre><code>old = [[0 for x in range(3)] for y in range(10)] count =0 # check if the number has non-repeating digits def different(number): digit_list = [0] * 4 i = 0 while i: digit_list[i] = number%10 number /= 10 i += 1 for x in range(0,3): for y in range(x+1,3): if digit_list[x] == digit_list[y]: return False return True # save the tried numbers, plus and minus values # for prediction of the next number def save(number,plus,minus): global count old[count][0] = number old[count][1] = plus old[count][2] = minus count += 1 return # compare for plus values def get_plus(number1,number2): ret_value = 0 for x in range(0, 3): if number1 % 10 == number2 % 10: ret_value += 1 number1 /= 10 number2 /= 10 return ret_value # compare for minus values def get_minus(number1,number2): temp = [[0]*4 for i in range(2)] ret_value = 0 for x in range(0,3): temp[0][x] = number1 % 10 temp[0][x] = number2 % 10 number1 /= 10 number2 /= 10 for x in range(0,3): for y in range(0,3): if x != y: if temp[0][x] == temp[1][y]: ret_value += 1 return ret_value # compare the number to be asked with the numbers in the array def control(number): for x in range(0,count-1): if get_plus(old[x][0],number) != old[x][1]: return False if get_minus(old[x][0],number) != old[x][2]: return False return True def main(): flag = False print('1023 ??') plus = input('plus ?') minus = input('minus ?') save(1023, plus, minus) print('4567 ??') plus = input('plus ?') minus = input('minus ?') save(4567, plus, minus) for i in range(1024, 9876): if different(i): if control(i): print(i + ' ??') plus = input('plus ?') minus = input('minus ?') save(i, plus, minus) if plus == 4 and minus == 0: print('I WON !!!') flag = True break if not flag: print('False') return main() </code></pre> <p>I am trying to make an AI for mindgame in python. But in this function it doesn't even start the for loop. Can anyone know why ? </p>
0
2016-09-08T21:26:24Z
39,400,556
<p>The while loop in your different() function never runs: <code>while i:</code> with <code>i = 0</code> is falsy, so the loop body is skipped entirely. As a result <code>digit_list</code> stays <code>[0, 0, 0, 0]</code>, the comparison <code>digit_list[x] == digit_list[y]</code> is always true, and <code>different()</code> always returns False. Thus the code inside <code>if different(i):</code> in your main loop is never entered; the for loop itself does start, it just never finds a candidate.</p>

<pre><code>def different(number):
    digit_list = [0] * 4
    i = 0
    while i:                      # i is 0, so this loop body never executes
        digit_list[i] = number%10
        number /= 10
        i += 1
    for x in range(0,3):
        for y in range(x+1,3):
            if digit_list[x] == digit_list[y]:   # always 0 == 0, so False is returned
                return False
    return True
</code></pre>

<p>Try this one:</p>

<pre><code>import random

def different(num):
    digits = []
    while num &gt;= 1:
        cur = num%10
        if cur in digits:
            return False
        digits.append(cur)
        num = (num - cur) // 10   # integer division keeps num an int
    return True

for i in range(0, 10000):
    rand = random.randrange(1000, 10000)
    if not different(rand):
        print(rand)
</code></pre>
0
2016-09-08T21:33:52Z
[ "python", "python-3.x", "for-loop" ]
Pandas: keeping only first row of data in each 60 second bin
39,400,684
<p>What's the best way to keep only the first row of each 60 second bin of data in pandas? i.e. For every row that occurs at increasing time <code>t</code>, I want to delete all rows that occur up to <code>t+60</code> seconds.</p> <p>I know there's some combination of <code>groupby().first()</code> that I can probably use, but the code examples I've seen (e.g. using <code>pandas.Grouper(freq='60s')</code>) will discard the original datetimes in favor of every 60 seconds offset from midnight rather than my original datetimes.</p> <p>For example, the following:</p> <pre><code> time value 0 2016-05-11 13:00:10.841015028 0.215978 1 2016-05-11 13:02:05.760595780 0.155666 2 2016-05-11 13:02:05.760903860 0.155666 3 2016-05-11 13:02:18.325613076 0.157788 4 2016-05-11 13:02:18.486519052 0.157788 5 2016-05-11 13:02:20.243748548 0.157788 6 2016-05-11 13:02:20.533101692 0.157788 7 2016-05-11 13:02:20.646061652 0.157788 8 2016-05-11 13:02:21.121409820 0.157788 9 2016-05-11 13:04:24.660609068 0.211649 10 2016-05-11 13:04:24.660845612 0.211649 11 2016-05-11 13:04:24.660957596 0.211649 12 2016-05-11 13:04:24.661378132 0.211649 13 2016-05-11 13:04:24.661450628 0.211649 14 2016-05-11 13:04:24.661607044 0.211649 </code></pre> <p>should become this:</p> <pre><code> time value 0 2016-05-11 13:00:10.841015028 0.215978 1 2016-05-11 13:02:05.760595780 0.155666 3 2016-05-11 13:04:24.660609068 0.211649 </code></pre>
3
2016-09-08T21:46:40Z
39,400,830
<p>see <a class='doc-link' href="http://stackoverflow.com/documentation/pandas/1751/indexing-and-selecting-data/19351/path-dependent-slicing#t=201609082151350926644">Path Dependent Slicing</a></p> <h3>Solution</h3> <pre><code>def td60(ta): d = np.timedelta64(int(6e10)) tp = ta + d j = 0 yield j for i, tx in enumerate(ta): if tx &gt; tp[j]: yield i j = i def pir(df): slc = list(td60(df.time.values)) return pd.DataFrame(df.values[slc], df.index[slc]) </code></pre> <hr> <p><strong><em>Example usage</em></strong></p> <pre><code>pir(df) </code></pre> <p><a href="http://i.stack.imgur.com/iKaJE.png" rel="nofollow"><img src="http://i.stack.imgur.com/iKaJE.png" alt="enter image description here"></a></p> <hr> <h3>Setup for timing 500,000 rows</h3> <pre><code>pop_n, smp_n = 1000000, 500000 np.random.seed([3,1415]) tidx = pd.date_range('2016-09-08', periods=pop_n, freq='5s') tidx = np.random.choice(tidx, smp_n, False) tidx = pd.to_datetime(tidx).sort_values() df = pd.DataFrame(dict(time=tidx, value=np.random.rand(smp_n))) </code></pre> <hr> <h3>Timing</h3> <p><a href="http://i.stack.imgur.com/oHpxc.png" rel="nofollow"><img src="http://i.stack.imgur.com/oHpxc.png" alt="enter image description here"></a></p> <p><strong><em>Cythonize</em></strong><br> In Jupyter</p> <pre><code>%load_ext Cython </code></pre> <hr> <pre><code>%%cython import numpy as np import pandas as pd def td60(ta): d = np.timedelta64(int(6e10)) tp = ta + d j = 0 yield j for i, tx in enumerate(ta): if tx &gt; tp[j]: yield i j = i def pir(df): slc = list(td60(df.time.values)) return pd.DataFrame(df.values[slc], df.index[slc]) </code></pre> <p><strong><em>After Cythonizing</em></strong><br> Not much different</p> <p><a href="http://i.stack.imgur.com/IQf1x.png" rel="nofollow"><img src="http://i.stack.imgur.com/IQf1x.png" alt="enter image description here"></a></p> <hr> <h3>reference setup for OP example</h3> <pre><code>from StringIO import StringIO import pandas as pd text = """time,value 2016-05-11 13:00:10.841015028,0.215978 2016-05-11 13:02:05.760595780,0.155666 2016-05-11 13:02:05.760903860,0.155666 2016-05-11 13:02:18.325613076,0.157788 2016-05-11 13:02:18.486519052,0.157788 2016-05-11 13:02:20.243748548,0.157788 2016-05-11 13:02:20.533101692,0.157788 2016-05-11 13:02:20.646061652,0.157788 2016-05-11 13:02:21.121409820,0.157788 2016-05-11 13:04:24.660609068,0.211649 2016-05-11 13:04:24.660845612,0.211649 2016-05-11 13:04:24.660957596,0.211649 2016-05-11 13:04:24.661378132,0.211649 2016-05-11 13:04:24.661450628,0.211649 2016-05-11 13:04:24.661607044,0.211649""" df = pd.read_csv(StringIO(text), parse_dates=[0]) </code></pre>
3
2016-09-08T22:00:34Z
[ "python", "pandas", "dataframe" ]
Pandas: keeping only first row of data in each 60 second bin
39,400,684
<p>What's the best way to keep only the first row of each 60 second bin of data in pandas? i.e. For every row that occurs at increasing time <code>t</code>, I want to delete all rows that occur up to <code>t+60</code> seconds.</p> <p>I know there's some combination of <code>groupby().first()</code> that I can probably use, but the code examples I've seen (e.g. using <code>pandas.Grouper(freq='60s')</code>) will discard the original datetimes in favor of every 60 seconds offset from midnight rather than my original datetimes.</p> <p>For example, the following:</p> <pre><code> time value 0 2016-05-11 13:00:10.841015028 0.215978 1 2016-05-11 13:02:05.760595780 0.155666 2 2016-05-11 13:02:05.760903860 0.155666 3 2016-05-11 13:02:18.325613076 0.157788 4 2016-05-11 13:02:18.486519052 0.157788 5 2016-05-11 13:02:20.243748548 0.157788 6 2016-05-11 13:02:20.533101692 0.157788 7 2016-05-11 13:02:20.646061652 0.157788 8 2016-05-11 13:02:21.121409820 0.157788 9 2016-05-11 13:04:24.660609068 0.211649 10 2016-05-11 13:04:24.660845612 0.211649 11 2016-05-11 13:04:24.660957596 0.211649 12 2016-05-11 13:04:24.661378132 0.211649 13 2016-05-11 13:04:24.661450628 0.211649 14 2016-05-11 13:04:24.661607044 0.211649 </code></pre> <p>should become this:</p> <pre><code> time value 0 2016-05-11 13:00:10.841015028 0.215978 1 2016-05-11 13:02:05.760595780 0.155666 3 2016-05-11 13:04:24.660609068 0.211649 </code></pre>
3
2016-09-08T21:46:40Z
39,400,864
<p><strong>UPDATE:</strong> thanks to <a href="http://stackoverflow.com/questions/39400684/pandas-keeping-only-first-row-of-data-in-each-60-second-bin/39400864?noredirect=1#comment66127957_39400864">@piRSquared</a> - he noticed that my previous solution was incorrect. Here is another attempt:</p> <p>data:</p> <pre><code>In [8]: df = pd.DataFrame(dict(time=pd.date_range('2001-01-01', periods=20, freq='9S'), value=np.random.rand(20))) In [9]: df Out[9]: time value 0 2001-01-01 00:00:00 0.440696 1 2001-01-01 00:00:09 0.135540 2 2001-01-01 00:00:18 0.008243 3 2001-01-01 00:00:27 0.389259 4 2001-01-01 00:00:36 0.128253 5 2001-01-01 00:00:45 0.566704 6 2001-01-01 00:00:54 0.386797 7 2001-01-01 00:01:03 0.426411 8 2001-01-01 00:01:12 0.438114 9 2001-01-01 00:01:21 0.918711 10 2001-01-01 00:01:30 0.715565 11 2001-01-01 00:01:39 0.422044 12 2001-01-01 00:01:48 0.199396 13 2001-01-01 00:01:57 0.827872 14 2001-01-01 00:02:06 0.986887 15 2001-01-01 00:02:15 0.305749 16 2001-01-01 00:02:24 0.030092 17 2001-01-01 00:02:33 0.338214 18 2001-01-01 00:02:42 0.773635 19 2001-01-01 00:02:51 0.816478 </code></pre> <p>Solution:</p> <pre><code>In [10]: df.groupby((df.time - df.ix[0, 'time']).dt.total_seconds() // 60, as_index=False).first() Out[10]: time value 0 2001-01-01 00:00:00 0.440696 1 2001-01-01 00:01:03 0.426411 2 2001-01-01 00:02:06 0.986887 </code></pre> <p>Explanation:</p> <pre><code>In [17]: (df.time - df.ix[0, 'time']).dt.total_seconds() Out[17]: 0 0.0 1 9.0 2 18.0 3 27.0 4 36.0 5 45.0 6 54.0 7 63.0 8 72.0 9 81.0 10 90.0 11 99.0 12 108.0 13 117.0 14 126.0 15 135.0 16 144.0 17 153.0 18 162.0 19 171.0 Name: time, dtype: float64 In [18]: (df.time - df.ix[0, 'time']).dt.total_seconds() // 60 Out[18]: 0 -0.0 1 0.0 2 0.0 3 0.0 4 0.0 5 0.0 6 0.0 7 1.0 8 1.0 9 1.0 10 1.0 11 1.0 12 1.0 13 1.0 14 2.0 15 2.0 16 2.0 17 2.0 18 2.0 19 2.0 Name: time, dtype: float64 </code></pre> <p><strong>OLD incorrect answer:</strong></p> <pre><code>In [102]: df[df.time.diff().fillna(pd.Timedelta('60S')) &gt;= pd.Timedelta('60S')] Out[102]: time value 0 2016-05-11 13:00:10.841015028 0.215978 1 2016-05-11 13:02:05.760595780 0.155666 9 2016-05-11 13:04:24.660609068 0.211649 </code></pre> <p>Explanation:</p>
3
2016-09-08T22:03:17Z
[ "python", "pandas", "dataframe" ]
Python - Type Checking Instances of Objects
39,400,708
<p>I'm creating a class that emulates numeric types so as to be able to use basic arithmetic operators, like <code>+</code>, <code>-</code>, etc on instances of this class. However, I want to be able to handle the operation in different ways depending on what types the operands are. For instance, if I'm creating a class <code>foo_c</code> with <code>__add__()</code> as a function, then I want to be able to handle addition cases where one operand is of type <code>foo_c</code> and the other is of type <code>int</code> or <code>float</code> or <code>numpy.ndarray</code> (or <code>foo_c</code>).</p> <p>The solution I want to implement is to have a collection of 'adder' functions to switch between based off of the operand type. The different functions are being assigned to a dictionary, as such:</p> <pre><code>class foo_c: ... self.addOps = { int: self.addScalar, float: self.addScalar, foo_c: self.addFoo } ... self.addScalar(self, sclr): ... self.addFoo(self, foo): ... self.__add__(self, operand): return self.addOps[type(operand)](operand) </code></pre> <p>The problem that I'm having is that I can't get the <code>type()</code> function to return the appropriate value to be used as a key for the dictionary. After creating an instance of the class as <code>foo = foo_c()</code>, the built-in function <code>type(foo)</code> returns <code>instance</code>instead of <code>foo_c</code>. I assume this is because I'm not creating the object in question; rather I am creating an instance of the object. I've used <code>foo.__class__</code> as well, but am getting <code>__main__.foo_c</code> as the returned class, which isn't right either...</p> <p>I don't want to have to use lines of <code>isinstance()</code> checks, so is there a way to get <code>type()</code> to return the class as desired?</p>
0
2016-09-08T21:48:32Z
39,400,778
<p>You forgot to have <code>foo_c</code> inherit from <code>object</code>, so you're getting an old-style class. Make it inherit from <code>object</code>:</p> <pre><code>class foo_c(object): ... </code></pre>
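<p>A quick way to see the difference (Python 2; minimal illustrative check):</p>

<pre><code>class foo_c:                      # old-style
    pass

print type(foo_c())               # &lt;type 'instance'&gt;

class foo_c(object):              # new-style
    pass

print type(foo_c())               # &lt;class '__main__.foo_c'&gt;
print type(foo_c()) is foo_c      # True, so it works as a dict key
</code></pre>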
1
2016-09-08T21:55:12Z
[ "python", "class", "object", "instance", "typechecking" ]
Can I use range of str to get a loop of 12 months?
39,400,722
<p>I would like to use this loop-function for all 12 months. Any idea how to alter the date, tried with a string of values, but I am just lost. Thankful for inputs.</p> <pre><code>y=str('02','03','04') # Months (Can I use Range here in some way?) q=('01.'+(y)+'.'+(year)) # Date in file, where the loop break with open('f.txt','r') as f: for line in f: if z in line: # Work after this line "z" i=0 c=0 temp_y=[] # Create empty list for line in f: temp_1= float((line.split()[i+8])) # Read every 8:th element in file temp_y.append(temp_1) # Adds a new value at the neareast empty spot c += 1 # For average equation print(temp_1) if q in line: # (This is where I am stuck) break </code></pre>
0
2016-09-08T21:49:38Z
39,400,771
<p>Here is one way:</p> <pre><code>for month in range(1, 13): month = '%02d' % month print('01.' + month + '.' + '2016') </code></pre> <p>Here is another:</p> <pre><code>months = ['%02d' % month for month in range(1, 13)] print (months) </code></pre>
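<p>For reference, the second snippet produces the zero-padded month strings directly:</p>

<pre><code>['01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12']
</code></pre>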
0
2016-09-08T21:54:44Z
[ "python", "string", "loops" ]
Can I use range of str to get a loop of 12 months?
39,400,722
<p>I would like to use this loop-function for all 12 months. Any idea how to alter the date, tried with a string of values, but I am just lost. Thankful for inputs.</p> <pre><code>y=str('02','03','04') # Months (Can I use Range here in some way?) q=('01.'+(y)+'.'+(year)) # Date in file, where the loop break with open('f.txt','r') as f: for line in f: if z in line: # Work after this line "z" i=0 c=0 temp_y=[] # Create empty list for line in f: temp_1= float((line.split()[i+8])) # Read every 8:th element in file temp_y.append(temp_1) # Adds a new value at the neareast empty spot c += 1 # For average equation print(temp_1) if q in line: # (This is where I am stuck) break </code></pre>
0
2016-09-08T21:49:38Z
39,400,865
<p>If you want to loop through an array of strings, you don't need range.</p> <pre><code>Months = ['Jan', 'Feb', 'Mar'] for mo in Months: print mo </code></pre> <p>If you are just looking for how to pick out the month field from the date.</p> <pre><code>date='01.11.2016' print date.split('.')[1] '11' </code></pre>
0
2016-09-08T22:03:19Z
[ "python", "string", "loops" ]
How to process Python Pandas data frames in batches?
39,400,851
<p>I have three very long lists of Pandas data frames. For example:</p> <pre><code>list_a = [tablea1, tablea2, tablea3, tablea4] list_b = [tableb1, tableb2, tableb3, tableb4] list_c = [tablec1, tablec2, tablec3, tablec4] </code></pre> <p>I want to do something like this:</p> <pre><code>tablea1 = pd.concat([tablea1, tableb1, tablec1], axis=1) </code></pre> <p>So naively, I wrote such codes:</p> <pre><code>for i in range(len(list_a)): list_a[i] = pd.concat([list_a[i], list_b[i], list_c[i]], axis=1) </code></pre> <p>This code failed to work, b/c list_a[0] is a reference to tablea1 initially, then inside the loop, list_a[0] will be re-assigned to point to </p> <pre><code>pd.concat([tablea1, tableb1, tablec1], axis=1), </code></pre> <p>which is a new object. In the end, tablea1 is not modified. (list_a does contain the desired result. But I do want to modify tablea1.) I have spent hours on this and cannot find out a solution. Any help? Thanks.</p>
0
2016-09-08T22:02:00Z
39,412,726
<p>@qqzj The problem that you will run into is that python doesn't exactly have this feature available. As @Boud <a href="http://stackoverflow.com/a/8989916/624829">mentions</a>, the reference to tablea1, tableb1, tablec1 etc are lost after concatenation. </p> <p>I'll illustrate a quick and dirty example of the workaround (which is very inefficient, but will get the job done). </p> <p>Without your data, I'm basically creating random data frames. </p> <pre><code>tablea1 = pd.DataFrame(np.random.randn(10, 4)) tableb1 = pd.DataFrame(np.random.randn(10, 4)) tablec1 = pd.DataFrame(np.random.randn(10, 4)) tablea2 = pd.DataFrame(np.random.randn(10, 4)) tableb2 = pd.DataFrame(np.random.randn(10, 4)) tablec2 = pd.DataFrame(np.random.randn(10, 4)) </code></pre> <p>Applying your code to iterate over this list</p> <pre><code>list_a = [tablea1, tablea2] list_b = [tableb1, tableb2] list_c = [tablec1, tablec2] for i in range(len(list_a)): list_a[i] = pd.concat([list_a[i], list_b[i], list_c[i]], axis=1) </code></pre> <p>Once you run a compare here, you see the issue that you have highlighted, namely that while <code>list_a[i]</code> has been concatenated with <code>tablea1</code>, <code>tableb1</code>, and <code>tablec1</code>, this hasn't been assigned back to <code>tablea1</code>. </p> <p>As I mentioned in the comment, the answer is to assign <code>tablea1</code> with the list[0]</p> <pre><code>tablea1=list_a[0] </code></pre> <p>You would repeat this for <code>tablea2</code> <code>tablea3</code> etc. </p> <p>Doing the compare, you can see now that tablea1 matches the values in list[0]</p> <pre><code>tablea1==list_a[0] 0 1 2 3 0 1 2 3 0 1 2 3 0 True True True True True True True True True True True True 1 True True True True True True True True True True True True 2 True True True True True True True True True True True True 3 True True True True True True True True True True True True 4 True True True True True True True True True True True True 5 True True True True True True True True True True True True 6 True True True True True True True True True True True True 7 True True True True True True True True True True True True 8 True True True True True True True True True True True True 9 True True True True True True True True True True True True </code></pre> <p>Again this is not the ideal solution, but what you are looking for doesn't seem to be the 'pythonic' way. </p>
0
2016-09-09T13:30:10Z
[ "python", "pandas" ]
How to process Python Pandas data frames in batches?
39,400,851
<p>I have three very long lists of Pandas data frames. For example:</p> <pre><code>list_a = [tablea1, tablea2, tablea3, tablea4] list_b = [tableb1, tableb2, tableb3, tableb4] list_c = [tablec1, tablec2, tablec3, tablec4] </code></pre> <p>I want to do something like this:</p> <pre><code>tablea1 = pd.concat([tablea1, tableb1, tablec1], axis=1) </code></pre> <p>So naively, I wrote such codes:</p> <pre><code>for i in range(len(list_a)): list_a[i] = pd.concat([list_a[i], list_b[i], list_c[i]], axis=1) </code></pre> <p>This code failed to work, b/c list_a[0] is a reference to tablea1 initially, then inside the loop, list_a[0] will be re-assigned to point to </p> <pre><code>pd.concat([tablea1, tableb1, tablec1], axis=1), </code></pre> <p>which is a new object. In the end, tablea1 is not modified. (list_a does contain the desired result. But I do want to modify tablea1.) I have spent hours on this and cannot find out a solution. Any help? Thanks.</p>
0
2016-09-08T22:02:00Z
39,456,370
<p>Thanks for zhqiat's sample codes. Let me expand a bit on it. Here this problem can be solved using exec statement.</p> <pre><code>import pandas as pd import numpy as np tablea1 = pd.DataFrame(np.random.randn(10, 4)) tableb1 = pd.DataFrame(np.random.randn(10, 4)) tablec1 = pd.DataFrame(np.random.randn(10, 4)) tablea2 = pd.DataFrame(np.random.randn(10, 4)) tableb2 = pd.DataFrame(np.random.randn(10, 4)) tablec2 = pd.DataFrame(np.random.randn(10, 4)) list_a = [tablea1, tablea2] list_b = [tableb1, tableb2] list_c = [tablec1, tablec2] for i in range(1, len(list_a)+1): exec 'tablea' + str(i) + ' = pd.concat([tablea' + str(i) + ', ' + 'tableb' + str(i) + ', ' + 'tablec' + str(i) + '], axis=1)' print tablea1 </code></pre> <p>I have been using this approach for a while. But after the code got more complicated. exec started complaining </p> <pre><code>'SyntaxError: unqualified exec is not allowed in function 'function name' it contains a nested function with free variables'. </code></pre> <p>Here is the troubled codes:</p> <pre><code>def overall_function(): def dummy_function(): return True tablea1 = pd.DataFrame(np.random.randn(10, 4)) tableb1 = pd.DataFrame(np.random.randn(10, 4)) tablec1 = pd.DataFrame(np.random.randn(10, 4)) tablea2 = pd.DataFrame(np.random.randn(10, 4)) tableb2 = pd.DataFrame(np.random.randn(10, 4)) tablec2 = pd.DataFrame(np.random.randn(10, 4)) list_a = ['tablea1', 'tablea2'] list_b = ['tableb1', 'tableb2'] list_c = ['tablec1', 'tablec2'] for i, j, k in zip(list_a, list_b, list_c): exec(i + ' = pd.concat([' + i + ',' + j + ',' + k + '], axis=1)') print tablea1 overall_function() </code></pre> <p>This code will generate the error message. The funny thing is that there is no other 'def' statement in my real function at all. So I have no nested function. I am very puzzled why I got such an error message. My question is whether there is a way to ask Python telling me which variable is the culprit, i.e. the free variable that cause the problem? Or, which sub function is the responsible for the failure of my code. Ideally, for this example, I wish I could force python to tell me that dummy_function is the cause.</p>
0
2016-09-12T18:17:15Z
[ "python", "pandas" ]
await for any future asyncio
39,400,885
<p>I'm trying to use asyncio to handle concurrent network I/O. A very large number of functions are to be scheduled at a single point which vary greatly in time it takes for each to complete. Received data is then processed in a separate process for each output.</p> <p>The order in which the data is processed is not relevant, so given the potentially very long waiting period for output I'd like to <code>await</code> for whatever future finishes first instead of a predefined order.</p> <pre><code>def fetch(x): sleep() async def main(): futures = [loop.run_in_executor(None, fetch, x) for x in range(50)] for f in futures: await f loop = asyncio.get_event_loop() loop.run_until_complete(main()) </code></pre> <p>Normally, awaiting in order in which futures were queued is fine:</p> <p><a href="http://i.stack.imgur.com/8UBZP.png" rel="nofollow"><img src="http://i.stack.imgur.com/8UBZP.png" alt="Well behaved functions profiler graph"></a></p> <p>Blue color represents time each task is in executor's queue, i.e. <code>run_in_executor</code> has been called, but the function was not yet executed, as the executor runs only 5 tasks simultaneously; green is time spent on executing the function itself; and the red is the time spent waiting for all previous futures to <code>await</code>.</p> <p><a href="http://i.stack.imgur.com/VKs6f.png" rel="nofollow"><img src="http://i.stack.imgur.com/VKs6f.png" alt="Volatile functions profiler graph"></a></p> <p>In my case where functions vary in time greatly, there is a lot of time lost on waiting for previous futures in queue to await, while I could be locally processing GET output. This makes my system idle for a while only to get overwhelmed when several outputs complete simultaneously, then jumping back to idle waiting for more requests to finish.</p> <p>Is there a way to <code>await</code> whatever future is first completed in the executor?</p>
1
2016-09-08T22:05:10Z
39,407,084
<p>Looks like you are looking for <a href="https://docs.python.org/3/library/asyncio-task.html#asyncio.wait" rel="nofollow">asyncio.wait</a> with <code>return_when=asyncio.FIRST_COMPLETED</code>.</p> <pre><code>def fetch(x): sleep() async def main(): futures = [loop.run_in_executor(None, fetch, x) for x in range(50)] while futures: done, futures = await asyncio.wait(futures, loop=loop, return_when=asyncio.FIRST_COMPLETED) for f in done: await f loop = asyncio.get_event_loop() loop.run_until_complete(main()) </code></pre>
1
2016-09-09T08:26:02Z
[ "python", "python-asyncio", "executor" ]
Return any number of matching groups with re findall in python
39,400,940
<p>I have a relatively complex string that contains a bunch of data. I am trying to extract the relevant pieces of the string using a regex command. The portions I am interested in are contained in square brackets, like this:</p> <pre><code>s = '"data":["value":3.44}] lol haha "data":["value":55.34}] "data":["value":2.44}] lol haha "data":["value":56.34}]' </code></pre> <p>And the regex expression I have built is as follows:</p> <pre><code>l = re.findall(r'\"data\"\:.*(\[.*\])', s) </code></pre> <p>I was expecting this to return</p> <pre><code>['["value":3.44}]', '["value":55.34}]', '["value":2.44}]', '["value":56.34}]'] </code></pre> <p>But instead all I get is the last one, i.e., </p> <pre><code>['["value":56.34}]'] </code></pre> <p>How can I catch 'em all? </p>
0
2016-09-08T22:11:07Z
39,400,983
<p>It's because quantifiers are greedy by default. So <code>.*</code> will match everything between the first <code>"data":</code> and the last <code>[</code>, so there's only one <code>[...]</code> left to match.</p> <p>Use non-greedy quantifiers by adding <code>?</code>.</p> <pre><code>l = re.findall(r'\"data\"\:.*?(\[.*?\])', s) </code></pre>
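<p>With the non-greedy pattern, the sample string from your question gives all four groups:</p>

<pre><code>&gt;&gt;&gt; re.findall(r'\"data\"\:.*?(\[.*?\])', s)
['["value":3.44}]', '["value":55.34}]', '["value":2.44}]', '["value":56.34}]']
</code></pre>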
2
2016-09-08T22:15:15Z
[ "python", "regex" ]
Return any number of matching groups with re findall in python
39,400,940
<p>I have a relatively complex string that contains a bunch of data. I am trying to extract the relevant pieces of the string using a regex command. The portions I am interested in are contained in square brackets, like this:</p> <pre><code>s = '"data":["value":3.44}] lol haha "data":["value":55.34}] "data":["value":2.44}] lol haha "data":["value":56.34}]' </code></pre> <p>And the regex expression I have built is as follows:</p> <pre><code>l = re.findall(r'\"data\"\:.*(\[.*\])', s) </code></pre> <p>I was expecting this to return</p> <pre><code>['["value":3.44}]', '["value":55.34}]', '["value":2.44}]', '["value":56.34}]'] </code></pre> <p>But instead all I get is the last one, i.e., </p> <pre><code>['["value":56.34}]'] </code></pre> <p>How can I catch 'em all? </p>
0
2016-09-08T22:11:07Z
39,401,049
<p>You can also use <a href="https://docs.python.org/2/library/re.html#re.finditer" rel="nofollow"><code>finditer</code></a> to extract the relevant content iteratively:</p> <pre><code>import re s = '"data":["value":3.44}] lol haha "data":["value":55.34}] "data":["value":2.44}] lol haha "data":["value":56.34}]' for m in re.finditer(r'(\[.*?\])', s): print m.group(1) </code></pre> <p><strong>OUTPUT</strong></p> <pre><code>["value":3.44}] ["value":55.34}] ["value":2.44}] ["value":56.34}] </code></pre>
1
2016-09-08T22:21:53Z
[ "python", "regex" ]
pandas iterrows throwing error
39,400,957
<p>I am trying to do a change data capture on two dataframes. The logic is to merge two dataframes and group by one keys and then run a loop for groups having count >1 to see which column 'updated'. I am getting strange error. any help is appreciated. code</p> <pre><code>import pandas as pd import numpy as np pd.set_option('display.height', 1000) pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) pd.set_option('display.width', 1000) print("reading wolverine xlxs") # defining metadata df_header = ['DisplayName','StoreLanguage','Territory','WorkType','EntryType','TitleInternalAlias', 'TitleDisplayUnlimited','LocalizationType','LicenseType','LicenseRightsDescription', 'FormatProfile','Start','End','PriceType','PriceValue','SRP','Description', 'OtherTerms','OtherInstructions','ContentID','ProductID','EncodeID','AvailID', 'Metadata', 'AltID', 'SuppressionLiftDate','SpecialPreOrderFulfillDate','ReleaseYear','ReleaseHistoryOriginal','ReleaseHistoryPhysicalHV', 'ExceptionFlag','RatingSystem','RatingValue','RatingReason','RentalDuration','WatchDuration','CaptionIncluded','CaptionExemption','Any','ContractID', 'ServiceProvider','TotalRunTime','HoldbackLanguage','HoldbackExclusionLanguage'] df_w01 = pd.read_excel("wolverine_1.xlsx", names = df_header) df_w02 = pd.read_excel("wolverine_2.xlsx", names = df_header) df_w01['version'] = 'OLD' df_w02['version'] = 'NEW' #print(df_w01) df_m_d = pd.concat([df_w01, df_w02], ignore_index = True) first_pass = df_m_d[df_m_d.duplicated(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType','FormatProfile'], keep=False)] first_pass_keep_duplicate = df_m_d[df_m_d.duplicated(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType','FormatProfile'], keep='first')] group_by_1 = first_pass.groupby(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType','FormatProfile']) for i,rows in group_by_1.iterrows(): print("rownumber", i) print (rows) print(first_pass) </code></pre> <p>And The error I get :</p> <pre><code>AttributeError: Cannot access callable attribute 'iterrows' of 'DataFrameGroupBy' objects, try using the 'apply' method </code></pre> <p>Any help is much appreciated.</p>
0
2016-09-08T22:12:58Z
39,401,927
<p>Why not do as suggested and use <code>apply</code>? Something like:</p> <pre><code>def print_rows(rows): print rows group_by_1.apply(print_rows) </code></pre>
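<p>The function you pass to <code>apply</code> receives each group as a DataFrame, so you can also return a value per group instead of just printing; a small illustrative example:</p>

<pre><code>def group_size(rows):
    return len(rows)

group_by_1.apply(group_size)   # a Series of row counts, indexed by the group keys
</code></pre>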
0
2016-09-09T00:10:15Z
[ "python", "pandas" ]
pandas iterrows throwing error
39,400,957
<p>I am trying to do a change data capture on two dataframes. The logic is to merge two dataframes and group by one keys and then run a loop for groups having count >1 to see which column 'updated'. I am getting strange error. any help is appreciated. code</p> <pre><code>import pandas as pd import numpy as np pd.set_option('display.height', 1000) pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) pd.set_option('display.width', 1000) print("reading wolverine xlxs") # defining metadata df_header = ['DisplayName','StoreLanguage','Territory','WorkType','EntryType','TitleInternalAlias', 'TitleDisplayUnlimited','LocalizationType','LicenseType','LicenseRightsDescription', 'FormatProfile','Start','End','PriceType','PriceValue','SRP','Description', 'OtherTerms','OtherInstructions','ContentID','ProductID','EncodeID','AvailID', 'Metadata', 'AltID', 'SuppressionLiftDate','SpecialPreOrderFulfillDate','ReleaseYear','ReleaseHistoryOriginal','ReleaseHistoryPhysicalHV', 'ExceptionFlag','RatingSystem','RatingValue','RatingReason','RentalDuration','WatchDuration','CaptionIncluded','CaptionExemption','Any','ContractID', 'ServiceProvider','TotalRunTime','HoldbackLanguage','HoldbackExclusionLanguage'] df_w01 = pd.read_excel("wolverine_1.xlsx", names = df_header) df_w02 = pd.read_excel("wolverine_2.xlsx", names = df_header) df_w01['version'] = 'OLD' df_w02['version'] = 'NEW' #print(df_w01) df_m_d = pd.concat([df_w01, df_w02], ignore_index = True) first_pass = df_m_d[df_m_d.duplicated(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType','FormatProfile'], keep=False)] first_pass_keep_duplicate = df_m_d[df_m_d.duplicated(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType','FormatProfile'], keep='first')] group_by_1 = first_pass.groupby(['StoreLanguage','Territory','TitleInternalAlias','LocalizationType','LicenseType','FormatProfile']) for i,rows in group_by_1.iterrows(): print("rownumber", i) print (rows) print(first_pass) </code></pre> <p>And The error I get :</p> <pre><code>AttributeError: Cannot access callable attribute 'iterrows' of 'DataFrameGroupBy' objects, try using the 'apply' method </code></pre> <p>Any help is much appreciated.</p>
0
2016-09-08T22:12:58Z
39,402,263
<p>Your <code>GroupBy</code> object supports iteration, so instead of</p> <pre><code>for i,rows in group_by_1.iterrows(): print("rownumber", i) print (rows) </code></pre> <p>you need to do something like</p> <pre><code>for name, group in group_by_1: print name print group </code></pre> <p>then you can do what you need to do with each <code>group</code></p> <p>See <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#iterating-through-groups" rel="nofollow">the docs</a></p>
0
2016-09-09T00:54:27Z
[ "python", "pandas" ]
A Python list of Numpy Arrays to CSV?
39,401,012
<p>I have a Python list of Numpy arrays storing X, Y, Z coordinates - like this:</p> <pre><code>[array([-0.22424938, 0.16117005, -0.39249256]) array([-0.22424938, 0.16050598, -0.39249256]) array([-0.22424938, 0.1598419 , -0.39249256]) ..., array([ 0.09214371, -0.26184322, -0.39249256]) array([ 0.09214371, -0.26250729, -0.39249256]) array([ 0.09214371, -0.26317136, -0.39249256])] </code></pre> <p>And I would like to get them into a CSV file so I can plot them in GIS software. I am new to Numpy arrays and I keep getting errors using methods like numpy.ndarray.tofile(). </p> <p>I can iterate the list using </p> <pre><code>for item in list: f.write(str(item)) </code></pre> <p>but it writes the data to the text file as binary data.<br> I just want to have each XYZ value comma separated with each row storing one XYZ value. Any help is appreciated.</p>
0
2016-09-08T22:17:13Z
39,401,055
<p>Use the <a href="https://docs.python.org/2/library/csv.html#module-csv" rel="nofollow"><code>csv</code></a> module with its <a href="https://docs.python.org/2/library/csv.html#csv.csvwriter.writerows" rel="nofollow"><code>writerows</code></a> method:</p> <pre><code>import csv with open('my_data.txt', 'w') as f: csvwriter = csv.writer(f) csvwriter.writerows(list_of_arrays) </code></pre>
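<p>Each array becomes one comma-separated row in the file, so with the arrays from your question the output looks roughly like this:</p>

<pre><code>-0.22424938,0.16117005,-0.39249256
-0.22424938,0.16050598,-0.39249256
-0.22424938,0.1598419,-0.39249256
...
</code></pre>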
2
2016-09-08T22:22:37Z
[ "python", "arrays", "csv", "numpy" ]
A Python list of Numpy Arrays to CSV?
39,401,012
<p>I have a Python list of Numpy arrays storing X, Y, Z coordinates - like this:</p> <pre><code>[array([-0.22424938, 0.16117005, -0.39249256]) array([-0.22424938, 0.16050598, -0.39249256]) array([-0.22424938, 0.1598419 , -0.39249256]) ..., array([ 0.09214371, -0.26184322, -0.39249256]) array([ 0.09214371, -0.26250729, -0.39249256]) array([ 0.09214371, -0.26317136, -0.39249256])] </code></pre> <p>And I would like to get them into a CSV file so I can plot them in GIS software. I am new to Numpy arrays and I keep getting errors using methods like numpy.ndarray.tofile(). </p> <p>I can iterate the list using </p> <pre><code>for item in list: f.write(str(item)) </code></pre> <p>but it writes the data to the text file as binary data.<br> I just want to have each XYZ value comma separated with each row storing one XYZ value. Any help is appreciated.</p>
0
2016-09-08T22:17:13Z
39,401,287
<p><code>np.savetxt</code> will write the list:</p> <pre><code>In [553]: data=[array([-0.22424938, 0.16117005, -0.39249256]), ...: array([-0.22424938, 0.16050598, -0.39249256]), ...: array([-0.22424938, 0.1598419 , -0.39249256]), ...: array([ 0.09214371, -0.26184322, -0.39249256]), ...: array([ 0.09214371, -0.26250729, -0.39249256]), ...: array([ 0.09214371, -0.26317136, -0.39249256]),] In [554]: np.savetxt('test.txt',data, delimiter=', ', fmt='%12.8f') In [555]: cat test.txt -0.22424938, 0.16117005, -0.39249256 -0.22424938, 0.16050598, -0.39249256 -0.22424938, 0.15984190, -0.39249256 0.09214371, -0.26184322, -0.39249256 0.09214371, -0.26250729, -0.39249256 0.09214371, -0.26317136, -0.39249256 </code></pre> <p><code>np.savetxt</code> really saves an array, but converts the list to array if needed:</p> <pre><code>In [556]: np.array(data) Out[556]: array([[-0.22424938, 0.16117005, -0.39249256], [-0.22424938, 0.16050598, -0.39249256], [-0.22424938, 0.1598419 , -0.39249256], [ 0.09214371, -0.26184322, -0.39249256], [ 0.09214371, -0.26250729, -0.39249256], [ 0.09214371, -0.26317136, -0.39249256]]) </code></pre> <p>It then iterates over rows, and writes</p> <pre><code>f.write(fmt % tuple(row)) </code></pre> <p>where <code>fmt</code> is either the full string you provide or one constructed by replicating the shorter <code>fmt</code> I provided.</p> <p>Effectively <code>savetxt</code> is doing:</p> <pre><code>In [558]: fmt='%12.8f, %12.8f, %12.8f' In [559]: for row in data: ...: print(fmt%tuple(row)) ...: -0.22424938, 0.16117005, -0.39249256 -0.22424938, 0.16050598, -0.39249256 -0.22424938, 0.15984190, -0.39249256 0.09214371, -0.26184322, -0.39249256 0.09214371, -0.26250729, -0.39249256 0.09214371, -0.26317136, -0.39249256 </code></pre>
1
2016-09-08T22:49:20Z
[ "python", "arrays", "csv", "numpy" ]
Pandas: Query tool doesn't work if column headers are tuples: TypeError: argument of type 'int' is not iterable
39,401,029
<p>TLDR: The df.query() tool doesn't seem to work if the df's columns are tuples or even tuples converted into strings. How can I work around this to get the slice I'm aiming for?</p> <hr> <p><strong>Long Version:</strong> I have a pandas dataframe that looks like this (although there are a lot more columns and rows...):</p> <pre><code>&gt; dosage_df Score ("A_dose","Super") ("A_dose","Light") ("B_dose","Regular") 28 1 40 130 11 2 40 130 72 3 40 130 67 1 90 130 74 2 90 130 89 3 90 130 43 1 40 700 61 2 40 700 5 3 40 700 </code></pre> <p>Along with my data frame, I also have a python dictionary with the relevant ranges for each feature. The keys are the feature names, and the different values which it can take are the keys:</p> <pre><code># Original Version dosage_df.columns = ['First Score', 'Last Score', ("A_dose","Super"), ("A_dose","Light"), ("B_dose","Regular")] dict_of_dose_ranges = {("A_dose","Super"):[1,2,3], ("A_dose","Light"):[40,70,90], ("B_dose","Regular"):[130,200,500,700]} </code></pre> <p>For my purposes, I need to generate a particular combination (say A_dose = 1, B_dose = 90, and C_dose = 700), and based on those settings take the relevant slice out of my dataframe, and do relevant calculations from that smaller subset, and save the results somewhere.</p> <p>I'm doing this by implementing the following:</p> <pre><code>from itertools import product for dosage_comb in product(*dict_of_dose_ranges.values()): dosage_items = zip(dict_of_dose_ranges.keys(), dosage_comb) query_str = ' &amp; '.join('{} == {}'.format(*x) for x in dosage_items) **sub_df = dosage_df.query(query_str)** ... </code></pre> <p><strong>The problem is that is gets hung up on the query step, as it returns the following error message:</strong></p> <pre><code>TypeError: argument of type 'int' is not iterable </code></pre> <p>In this case, the query generated looks like this:</p> <p>query_str = "("A_dose","Light") == 40 &amp; ("A_dose","Super") == 1 &amp; ("B_dose","Regular") == 130"</p> <p><strong>Troubleshooting Attempts:</strong></p> <p>I've confirmed that indeed that solution should work for a dataframe with just string columns as found <a href="http://stackoverflow.com/questions/39378510/iterating-across-multiple-colums-in-pandas-df-and-slicing-dynamically">here</a>. In addition, I've also tried "tricking" the tool by converting the columns and the dictionary keys into strings by the following code... but that returned the same error.</p> <pre><code># String Version dosage_df.columns = ['First Score', 'Last Score', '("A_dose","Super")', '("A_dose","Light")', '("B_dose","Regular")'] dict_of_dose_ranges = { '("A_dose","Super")':[1,2,3], '("A_dose","Light")':[40,70,90], '("B_dose","Regular")':[130,200,500,700]} </code></pre> <p>Is there an alternate tool in python that can take tuples as inputs or a different way for me to trick it into working? </p>
0
2016-09-08T22:19:27Z
39,401,578
<p>You can build a list of conditions and logically condense them with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.all.html" rel="nofollow"><code>np.all</code></a> instead of using <code>query</code>:</p> <pre><code>for dosage_comb in product(*dict_of_dose_ranges.values()): dosage_items = zip(dict_of_dose_ranges.keys(), dosage_comb) condition = np.all([dosage_df[col] == dose for col, dose in dosage_items], axis=0) sub_df = dosage_df[condition] </code></pre> <p>This method seems to be a bit more flexible than <code>query</code>, but when filtering across many columns I've found that <code>query</code> often performs better. I don't know if this is true in general though.</p>
1
2016-09-08T23:24:15Z
[ "python", "pandas", "typeerror" ]
Callback with Decorator or Parameter in Python?
39,401,066
<p>I'm designing an API. What's preferred using </p> <pre><code>@event1 def event_callback(): print "wooop" </code></pre> <p>or something like:</p> <pre><code>def event_callback(): print "wooop" event1(callback=event_callback) </code></pre>
1
2016-09-08T22:23:33Z
39,401,149
<p>The two alternatives are entirely equivalent. Decorator syntax was introduced because, particularly for long functions, it was easy to miss the decoration call following the definition, so you should use a decorator.</p>
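<p>A minimal sketch of what the decorator form expands to (the <code>registered</code> list here is purely illustrative, not part of your API):</p>

<pre><code>registered = []

def event1(func):
    registered.append(func)   # hook the callback up to the event
    return func

@event1
def event_callback():
    print "wooop"

# the decorated definition above is shorthand for:
#
#   def event_callback():
#       print "wooop"
#   event_callback = event1(event_callback)
</code></pre>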
0
2016-09-08T22:32:05Z
[ "python", "callback" ]
Why repeat(n) does not work with create in Reactive extensions
39,401,088
<pre><code>from rx import Observable, Observer from __future__ import print_function import random def create_observable(observer): while True: observer.on_next(random.randint(1,100)) Observable.create(create_observable).take_while(lambda x: x&gt;50).repeat(6).subscribe(print) </code></pre> <p>gives</p> <blockquote> <p>74 78 94 59 79 76</p> </blockquote> <p>sequence, while I'm expecting each number will be repeated 6 times</p> <p>so "repeat" never works for observables created with create method.</p>
1
2016-09-08T22:25:58Z
40,088,963
<p>The posted code gets 6 times a sequence of random integers in the range [51, 100].</p> <p>Try with </p> <pre><code>(Observable.create(create_observable) .take_while(lambda x: x &gt; 50) .select_many(lambda x: Observable.just(x).repeat(6)) .subscribe(print)) </code></pre> <p>or just </p> <pre><code>(Observable.create(create_observable) .take_while(lambda x: x &gt; 50) .select_many(lambda v: [v] * 6) .subscribe(print)) </code></pre>
0
2016-10-17T14:24:01Z
[ "python", "reactivex", "rx-py" ]
manager.GetIOSettings() -> None in Mac Python FBX SDK bindings
39,401,092
<p>Fresh install of the python bindings for the FBX SDK on a Mac, into site-packages of an anaconda python 2.7.12 installation. Success when importing fbx and FbxCommon. Success creating manager, scene, and importer objects for an fbx file import. here's the code</p> <pre><code>import fbx manager = fbx.FbxManager.Create() iosettings = manager.GetIOSettings() scene = fbx.FbxScene.Create(manager, "") importer = fbx.FbxImporter.Create(manager, "") fname = 'test.fbx' if not importer.Initialize(fname, -1, iosettings): print "INITIALIZE ", importer.GetStatus().GetErrorString() if not importer.Import(scene): print "IMPORT ", importer.GetStatus().GetErrorString() </code></pre> <p>But... manager.GetIOSettings() returns None rather than something usable. I am still able to import some files (others, with errors, are for another question), so maybe this is not a showstopper, but still...</p> <p>Any ideas about iosettings?</p>
0
2016-09-08T22:26:15Z
39,416,792
<p>If the manager does not have an IOSettings, you can create one for it:</p> <pre><code>if not manager.GetIOSettings(): ios = fbx.FbxIOSettings.Create(manager, fbx.IOSROOT) manager.SetIOSettings(ios) </code></pre> <p>(discovered in the FbxCommon.py file from the python SDK bindings)</p>
0
2016-09-09T17:24:12Z
[ "python", "fbx" ]
Add multiple Elements to Set in a Dictionary
39,401,150
<p>I am trying to achieve the following structure.</p> <pre><code>{0: set([1]), 1: set([2]), 2: set([0,3]), 3: set([3])} </code></pre> <p>Following is my code : </p> <pre><code>class Graph(object): """ This is the graph class which will store the information regarding the graph like vertices and edges. """ def __init__(self,num_vertices): self.vertices = num_vertices self.edges = [] self.indi_edges = () def enter_edges(self,source,dest): self.indi_edges = (source, dest) self.edges.append(self.indi_edges) def form_graph_structure(self): temp_dict = {} for idx,value in enumerate(self.edges): if value[0] in temp_dict: print "here" temp_dict[value[0]].update(value[1]) print "there" temp_dict[value[0]] = set() temp_dict[value[0]].add(value[1]) print temp_dict def display(self): print self.edges g = Graph(4) g.enter_edges(2,0) g.enter_edges(2,3) g.enter_edges(0,1) g.enter_edges(1,2) g.enter_edges(3,3) g.form_graph_structure() </code></pre> <p>I am getting the following error</p> <pre><code> File "DFS.py", line 20, in form_graph_structure temp_dict[value[0]].update(value[1]) TypeError: 'int' object is not iterable </code></pre> <p>Can anyone help? </p>
1
2016-09-08T22:32:11Z
39,401,190
<p><code>set.update()</code> expects an <em>iterable</em> of values. Use <a href="https://docs.python.org/2/library/stdtypes.html#set.add" rel="nofollow"><code>set.add()</code></a> to add <em>one</em> value:</p> <pre><code>if value[0] in temp_dict: temp_dict[value[0]].add(value[1]) </code></pre> <p>Rather than test for <code>value[0]</code> each time, use <a href="https://docs.python.org/2/library/stdtypes.html#dict.setdefault" rel="nofollow"><code>dict.setdefault()</code></a> to set an empty set if the key is missing:</p> <pre><code>def form_graph_structure(self): temp_dict = {} for source, dest in self.edges: temp_dict.setdefault(source, set()).add(dest) return temp_dict </code></pre>
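<p>With the edges from your question this produces exactly the structure you were after (set and key ordering may differ):</p>

<pre><code>&gt;&gt;&gt; g.form_graph_structure()
{0: set([1]), 1: set([2]), 2: set([0, 3]), 3: set([3])}
</code></pre>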
3
2016-09-08T22:36:12Z
[ "python", "graph", "set" ]
Add multiple Elements to Set in a Dictionary
39,401,150
<p>I am trying to achieve the following structure.</p> <pre><code>{0: set([1]), 1: set([2]), 2: set([0,3]), 3: set([3])} </code></pre> <p>Following is my code : </p> <pre><code>class Graph(object): """ This is the graph class which will store the information regarding the graph like vertices and edges. """ def __init__(self,num_vertices): self.vertices = num_vertices self.edges = [] self.indi_edges = () def enter_edges(self,source,dest): self.indi_edges = (source, dest) self.edges.append(self.indi_edges) def form_graph_structure(self): temp_dict = {} for idx,value in enumerate(self.edges): if value[0] in temp_dict: print "here" temp_dict[value[0]].update(value[1]) print "there" temp_dict[value[0]] = set() temp_dict[value[0]].add(value[1]) print temp_dict def display(self): print self.edges g = Graph(4) g.enter_edges(2,0) g.enter_edges(2,3) g.enter_edges(0,1) g.enter_edges(1,2) g.enter_edges(3,3) g.form_graph_structure() </code></pre> <p>I am getting the following error</p> <pre><code> File "DFS.py", line 20, in form_graph_structure temp_dict[value[0]].update(value[1]) TypeError: 'int' object is not iterable </code></pre> <p>Can anyone help? </p>
1
2016-09-08T22:32:11Z
39,401,391
<p>You can use <a href="https://docs.python.org/2/library/collections.html#defaultdict-objects" rel="nofollow"><code>defaultdict</code></a> using <code>set</code> as its default value.</p> <pre><code>from collections import defaultdict class Graph(object): """ This is the graph class which will store the information regarding the graph like vertices and edges. """ def __init__(self, num_vertices): self.vertices = num_vertices self.edges = [] def enter_edges(self, source, dest): self.edges.extend([(source, dest)]) def form_graph_structure(self): temp_dict = defaultdict(set) for pair in self.edges: source, dest = pair temp_dict[source].add(dest) print temp_dict def display(self): print dict(self.edges) g = Graph(4) g.enter_edges(2,0) g.enter_edges(2,3) g.enter_edges(0,1) g.enter_edges(1,2) g.enter_edges(3,3) &gt;&gt;&gt; g.form_graph_structure() {0: set([1]), 1: set([2]), 2: set([0, 3]), 3: set([3])} </code></pre> <p>You could also update the graph structure after entering each pair of edges:</p> <pre><code>class Graph(object): """ This is the graph class which will store the information regarding the graph like vertices and edges. """ def __init__(self): self.graph = defaultdict(set) def enter_edges(self, source, dest): self.graph[source].add(dest) def display(self): print dict(self.graph) g = Graph() edges = [(2, 0), (2, 3), (0, 1), (1, 2), (3, 3)] for pair in edges: g.enter_edges(*pair) g.display() {0: set([1]), 1: set([2]), 2: set([0, 3]), 3: set([3])} </code></pre>
1
2016-09-08T23:00:39Z
[ "python", "graph", "set" ]
Celery equivalent of a JoinableQueue
39,401,374
<p>What would be Celery's equivalent of a <a href="https://docs.python.org/2/library/multiprocessing.html#multiprocessing.JoinableQueue"><code>multiprocessing.JoinableQueue</code></a> (or <a href="http://www.gevent.org/gevent.queue.html#gevent.queue.JoinableQueue"><code>gevent.queue.JoinableQueue</code></a>)?</p> <p>The functionality I'm looking for is the ability to <code>.join()</code> a Celery task queue from a publisher, waiting for the all tasks in the queue to be done.</p> <p>Waiting for an initial <code>AsyncResult</code> or <code>GroupResult</code> isn't going to be sufficient, as the queue dynamically fills up by the workers themselves.</p>
7
2016-09-08T22:58:40Z
39,687,121
<p>It might not be perfect, but this is what I came up with eventually.</p> <p>It's basically a <code>JoinableQueue</code> wrapper on top of an existing Celery queue, based on a shared Redis counter and a list listener. It requires the queue name to be the same as it's routing key (due to internal implementation details of the <code>before_task_publish</code> and <code>task_postrun</code> signals).</p> <p><strong>joinableceleryqueue.py</strong>:</p> <pre><code>from celery.signals import before_task_publish, task_postrun from redis import Redis import settings memdb = Redis.from_url(settings.REDIS_URL) class JoinableCeleryQueue(object): def __init__(self, queue): self.queue = queue self.register_queue_hooks() def begin(self): memdb.set(self.count_prop, 0) @property def count_prop(self): return "jqueue:%s:count" % self.queue @property def finished_prop(self): return "jqueue:%s:finished" % self.queue def task_add(self, routing_key, **kw): if routing_key != self.queue: return memdb.incr(self.count_prop) def task_done(self, task, **kw): if task.queue != self.queue: return memdb.decr(self.count_prop) if memdb.get(self.count_prop) == "0": memdb.rpush(self.finished_prop, 1) def register_queue_hooks(self): before_task_publish.connect(self.task_add) task_postrun.connect(self.task_done) def join(self): memdb.brpop(self.finished_prop) </code></pre> <p>I've chosen to use <code>BRPOP</code> instead of a pub/sub as I only need one listener listening to the "all task finished" event (the publisher).</p> <p>Using a <code>JoinableCeleryQueue</code> is pretty simple - <code>begin()</code> before adding any tasks to the queue, add tasks using regular Celery API, <code>.join()</code> to wait for all the tasks to be done.</p>
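<p>A minimal usage sketch (the queue name, <code>export_task</code> and <code>items_to_export</code> are illustrative, not part of the code above):</p>

<pre><code>from joinableceleryqueue import JoinableCeleryQueue

jq = JoinableCeleryQueue("exports")
jq.begin()

# publish work to the same queue / routing key the wrapper watches
for item_id in items_to_export:
    export_task.apply_async(args=[item_id], queue="exports", routing_key="exports")

jq.join()   # blocks until every task routed to "exports" has finished
</code></pre>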
0
2016-09-25T13:00:40Z
[ "python", "queue", "celery", "python-multiprocessing", "gevent" ]
python & pandas - Split large dataframe into multiple dataframes and plot diagrams
39,401,473
<p>I'm under the similar condition with <a href="http://stackoverflow.com/questions/19790790/splitting-dataframe-into-multiple-dataframes">this case</a>. I'm working on a project which has a large dataframe with about half-million of rows. And about 2000 of users are involving in this.( I get this number by <code>value_counts()</code> counting a column called <code>NoUsager</code>).</p> <p>I'd like to split the dataframe into several array/dataframe for plotting after. (Several means an array/dataframe for each user) I gott the list of users like:</p> <pre><code>df.sort_values(by='NoUsager',inplace=True) df.set_index(keys=['NoUsager'],drop=False,inplace=True) users = df['NoUsager'].unique().tolist() </code></pre> <p>I know what's after is a loop to generate the smaller dataframes but I have no idea how to make it happen. And I combined the code above and tried the one in <a href="http://stackoverflow.com/questions/19790790/splitting-dataframe-into-multiple-dataframes">the case</a> but there was no solution for it.</p> <p>What should I do with it?</p> <hr> <p><strong>EDIT</strong></p> <p>I want both histogram and boxplot of the dataframe. With the answer provided, I already have a boxplot of all <code>NoUsager</code>. But with large amount of data, the boxplot is too small to read. So I'd like to split the dataframe by <code>NoUsager</code> and plot them separately. Diagrams that I'd like to have:</p> <ol> <li>boxplot, column=<code>DureeService</code>, by=<code>NoUsager</code> </li> <li>boxplot, column=<code>DureeService</code>, by='Weekday`</li> <li>histogram, for every <code>Weekday</code>,by=<code>DureeService</code></li> </ol> <p>I hope this time is well explained.</p> <p>DataType:</p> <pre><code> Weekday NoUsager Periods Sens DureeService DataType string string string string datetime.time </code></pre> <p>Sample of DataFrame:</p> <pre><code>Weekday NoUsager Periods Sens DureeService Lun 000001 Matin + 00:00:05 Lun 000001 Matin + 00:00:04 Mer 000001 Matin + 00:00:07 Dim 000001 Soir - 00:00:02 Lun 000001 Matin + 00:00:07 Jeu 000001 Soir - 00:00:04 Lun 000001 Matin + 00:00:07 Lun 000001 Soir - 00:00:04 Dim 000001 Matin + 00:00:05 Lun 000001 Matin + 00:00:03 Mer 000001 Matin + 00:00:04 Ven 000001 Soir - 00:00:03 Mar 000001 Matin + 00:00:03 Lun 000001 Soir - 00:00:04 Lun 000001 Matin + 00:00:04 Mer 000002 Soir - 00:00:04 Jeu 000003 Matin + 00:00:50 Mer 000003 Soir - 00:06:51 Mer 000003 Soir - 00:00:08 Mer 000003 Soir - 00:00:10 Jeu 000003 Matin + 00:12:35 Lun 000004 Matin + 00:00:05 Dim 000004 Matin + 00:00:05 Lun 000004 Matin + 00:00:05 Lun 000004 Matin + 00:00:05 </code></pre> <hr> <p>And what bothers me is that none of these data is number, so each time they have to be converted.</p> <p>Thanks in advance!</p>
0
2016-09-08T23:10:00Z
39,401,617
<p>No need to sort first. You may try this with your original DataFrame:</p> <pre><code># import third-party libraries import pandas as pd import numpy as np # Define a function that takes the DataFrame and returns a dictionary def splitting_dataframe(df): d = {} # Define an empty dictionary nousager = np.unique(df.NoUsager.values) # Get the unique NoUsager values for NU in nousager: # Loop over the NoUsager list d[NU] = df[df.NoUsager == NU] # I guess this line is what you want most return d # Return the dictionary dictionary = splitting_dataframe(df) # Calling the function </code></pre> <p>After this, you can get the DataFrame for a specific NoUsager with:</p> <pre><code>dictionary[target_NoUsager] </code></pre> <p>Hope this helps.</p> <hr> <h2>EDIT</h2> <p>If you want to do a box plot, have you tried:</p> <pre><code>df.boxplot(column='DureeService', by='NoUsager') </code></pre> <p>directly? More information here: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.boxplot.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.boxplot.html</a></p> <hr> <h2>EDIT</h2> <p>If you want a boxplot for several selected 'NoUsager':</p> <pre><code>targets = [some selected NoUsagers] mask = np.sum([df.NoUsager.values == targets[i] for i in xrange(len(targets))], dtype=bool, axis=0) df[mask].boxplot(column='DureeService', by='NoUsager') </code></pre> <p>If you want a histogram for a selected 'NoUsager':</p> <pre><code>dictionary[target_NoUsager].hist(column='DureeService') </code></pre> <p>If you still need to separate them, @Psidom's first line is good enough.</p>
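<p>If it helps, one possible way to plot each user separately from that dictionary is a simple loop — a rough, untested sketch, assuming <code>DureeService</code> has already been converted to a numeric value such as seconds (the file names are just placeholders):</p> <pre><code>import matplotlib.pyplot as plt

for nu, sub_df in dictionary.items():
    sub_df.boxplot(column='DureeService', by='Weekday')
    plt.title('DureeService for NoUsager %s' % nu)
    plt.savefig('boxplot_%s.png' % nu)   # one readable figure per user
    plt.close('all')
</code></pre>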
0
2016-09-08T23:28:43Z
[ "python", "pandas", "matplotlib", "dataframe", "split" ]
python & pandas - Split large dataframe into multiple dataframes and plot diagrams
39,401,473
<p>I'm under the similar condition with <a href="http://stackoverflow.com/questions/19790790/splitting-dataframe-into-multiple-dataframes">this case</a>. I'm working on a project which has a large dataframe with about half-million of rows. And about 2000 of users are involving in this.( I get this number by <code>value_counts()</code> counting a column called <code>NoUsager</code>).</p> <p>I'd like to split the dataframe into several array/dataframe for plotting after. (Several means an array/dataframe for each user) I gott the list of users like:</p> <pre><code>df.sort_values(by='NoUsager',inplace=True) df.set_index(keys=['NoUsager'],drop=False,inplace=True) users = df['NoUsager'].unique().tolist() </code></pre> <p>I know what's after is a loop to generate the smaller dataframes but I have no idea how to make it happen. And I combined the code above and tried the one in <a href="http://stackoverflow.com/questions/19790790/splitting-dataframe-into-multiple-dataframes">the case</a> but there was no solution for it.</p> <p>What should I do with it?</p> <hr> <p><strong>EDIT</strong></p> <p>I want both histogram and boxplot of the dataframe. With the answer provided, I already have a boxplot of all <code>NoUsager</code>. But with large amount of data, the boxplot is too small to read. So I'd like to split the dataframe by <code>NoUsager</code> and plot them separately. Diagrams that I'd like to have:</p> <ol> <li>boxplot, column=<code>DureeService</code>, by=<code>NoUsager</code> </li> <li>boxplot, column=<code>DureeService</code>, by='Weekday`</li> <li>histogram, for every <code>Weekday</code>,by=<code>DureeService</code></li> </ol> <p>I hope this time is well explained.</p> <p>DataType:</p> <pre><code> Weekday NoUsager Periods Sens DureeService DataType string string string string datetime.time </code></pre> <p>Sample of DataFrame:</p> <pre><code>Weekday NoUsager Periods Sens DureeService Lun 000001 Matin + 00:00:05 Lun 000001 Matin + 00:00:04 Mer 000001 Matin + 00:00:07 Dim 000001 Soir - 00:00:02 Lun 000001 Matin + 00:00:07 Jeu 000001 Soir - 00:00:04 Lun 000001 Matin + 00:00:07 Lun 000001 Soir - 00:00:04 Dim 000001 Matin + 00:00:05 Lun 000001 Matin + 00:00:03 Mer 000001 Matin + 00:00:04 Ven 000001 Soir - 00:00:03 Mar 000001 Matin + 00:00:03 Lun 000001 Soir - 00:00:04 Lun 000001 Matin + 00:00:04 Mer 000002 Soir - 00:00:04 Jeu 000003 Matin + 00:00:50 Mer 000003 Soir - 00:06:51 Mer 000003 Soir - 00:00:08 Mer 000003 Soir - 00:00:10 Jeu 000003 Matin + 00:12:35 Lun 000004 Matin + 00:00:05 Dim 000004 Matin + 00:00:05 Lun 000004 Matin + 00:00:05 Lun 000004 Matin + 00:00:05 </code></pre> <hr> <p>And what bothers me is that none of these data is number, so each time they have to be converted.</p> <p>Thanks in advance!</p>
0
2016-09-08T23:10:00Z
39,401,701
<p><code>[g for _, g in df.groupby('NoUsager')]</code> gives you a list of data frames where each dataframe contains one unique <code>NoUsager</code>. But I think what you need is something like:</p> <pre><code>for k, g in df.groupby('NoUsager'): g.plot(kind = ..., x = ..., y = ...) etc.. </code></pre>
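<p>As a concrete (untested) variant of the loop above, saving one histogram per user might look like this — it assumes <code>DureeService</code> has first been converted to a numeric value such as seconds:</p> <pre><code>import matplotlib.pyplot as plt

for k, g in df.groupby('NoUsager'):
    g['DureeService'].plot(kind='hist', title='NoUsager %s' % k)
    plt.savefig('hist_%s.png' % k)   # one figure per user
    plt.close('all')
</code></pre>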
3
2016-09-08T23:41:18Z
[ "python", "pandas", "matplotlib", "dataframe", "split" ]
How to add data labels to a bar chart in Bokeh?
39,401,481
<p>In the Bokeh guide there are examples of various bar charts that can be created. <a href="http://bokeh.pydata.org/en/0.10.0/docs/user_guide/charts.html#id4" rel="nofollow">http://bokeh.pydata.org/en/0.10.0/docs/user_guide/charts.html#id4</a></p> <p>This code will create one:</p> <pre><code>from bokeh.charts import Bar, output_file, show from bokeh.sampledata.autompg import autompg as df p = Bar(df, 'cyl', values='mpg', title="Total MPG by CYL") output_file("bar.html") show(p) </code></pre> <p>My question is if it's possible to add data labels to each individual bar of the chart? I searched online but could not find a clear answer.</p>
1
2016-09-08T23:11:31Z
39,542,303
<p>Yes, you can add labels to each bar of the chart. There are a few ways to do this. By default, your labels are tied to your data. But you can change what is displayed. Here are a few ways to do that using your example:</p> <pre><code>from bokeh.charts import Bar, output_file, show from bokeh.sampledata.autompg import autompg as df from bokeh.layouts import gridplot from pandas import DataFrame from bokeh.plotting import figure, ColumnDataSource from bokeh.models import Range1d, HoverTool # output_file("bar.html") """ Adding some sample labels a few different ways. Play with the sample data and code to get an idea what does what. See below for output. """ </code></pre> <p><strong>Sample data (new labels):</strong></p> <p>I used some logic to determine the new dataframe column. Of course you could use another column already in <code>df</code> (it all depends on what data you're working). All you really need here is to supply a new column to the dataframe.</p> <pre><code># One method labels = [] for number in df['cyl']: if number == 3: labels.append("three") if number == 4: labels.append("four") if number == 5: labels.append("five") if number == 6: labels.append("six") if number == 8: labels.append("eight") df['labels'] = labels </code></pre> <p>Another way to get a new dataframe column. Again, we just need to supply <code>df</code> a new column to use on our bar plot.</p> <pre><code># Another method def new_labels(x): if x % 2 != 0 or x == 6: y = "Inline" elif x % 2 == 0: y = "V" else: y = "nan" return y df["more_labels"] = df["cyl"].map(new_labels) </code></pre> <p><strong>Now the bar chart:</strong></p> <p>I've done it two ways. p1 just specifies the new labels. Note that because I used strings it put them in alphabetical order on the chart. p2 uses the original labels, plus adds my new labels on the same bar.</p> <pre><code># Specifying your labels p1 = Bar(df, label='labels', values='mpg', title="Total MPG by CYL, remapped labels, p1", width=400, height=400, legend="top_right") p2 = Bar(df, label=['cyl', 'more_labels'], values='mpg', title="Total MPG by CYL, multiple labels, p2", width=400, height=400, legend="top_right") </code></pre> <p><strong>Another way:</strong></p> <p>Bokeh has three main "interface levels". High level <code>charts</code> provides quick easy access but limited functionality; <code>plotting</code> which gives more options; <code>models</code> gives even more options.</p> <p>Here I'm using the plotting interface and the <code>Figure</code> class that contains a <code>rect</code> method. This gives you more detailed control of your chart. </p> <pre><code># Plot with "intermediate-level" bokeh.plotting interface new_df = DataFrame(df.groupby(['cyl'])['mpg'].sum()) factors = ["three", "four", "five", "six", "eight"] ordinate = new_df['mpg'].tolist() mpg = [x * 0.5 for x in ordinate] p3 = figure(x_range=factors, width=400, height=400, title="Total MPG by CYL, using 'rect' instead of 'bar', p3") p3.rect(factors, y=mpg, width=0.75, height=ordinate) p3.y_range = Range1d(0, 6000) p3.xaxis.axis_label = "x axis name" p3.yaxis.axis_label = "Sum(Mpg)" </code></pre> <p><strong>A fourth way to add specific labels:</strong></p> <p>Here I'm using the <code>hover</code> plot tool. 
Hover over each bar to display your specified label.</p> <pre><code># With HoverTool, using 'quad' instead of 'rect' top = [int(x) for x in ordinate] bottom = [0] * len(top) left = [] [left.append(x-0.2) for x in range(1, len(top)+1)] right = [] [right.append(x+0.2) for x in range(1, len(top)+1)] cyl = ["three", "four", "five", "six", "eight"] source = ColumnDataSource( data=dict( top=[int(x) for x in ordinate], bottom=[0] * len(top), left=left, right=right, cyl=["three", "four", "five", "six", "eight"], ) ) hover = HoverTool( tooltips=[ ("cyl", "@cyl"), ("sum", "@top") ] ) p4 = figure(width=400, height=400, title="Total MPG by CYL, with HoverTool and 'quad', p4") p4.add_tools(hover) p4.quad(top=[int(x) for x in ordinate], bottom=[0] * len(top), left=left, right=right, color="green", source=source) p4.xaxis.axis_label = "x axis name" </code></pre> <p>Show all four charts in a grid:</p> <pre><code>grid = gridplot([[p1, p2], [p3, p4]]) show(grid) </code></pre> <p>These are the ways I am aware of. There may be others. Change whatever you like to fit your needs. Here is what running all of this will output (you'll have to run it or serve it to get the hovertool):</p> <p><a href="http://i.stack.imgur.com/rydJP.png" rel="nofollow"><img src="http://i.stack.imgur.com/rydJP.png" alt="bokeh bar specify labels"></a> <a href="http://i.stack.imgur.com/y78Fv.png" rel="nofollow"><img src="http://i.stack.imgur.com/y78Fv.png" alt="bokeh bar specify labels"></a></p>
3
2016-09-17T02:20:42Z
[ "python", "bokeh" ]
How to add data labels to a bar chart in Bokeh?
39,401,481
<p>In the Bokeh guide there are examples of various bar charts that can be created. <a href="http://bokeh.pydata.org/en/0.10.0/docs/user_guide/charts.html#id4" rel="nofollow">http://bokeh.pydata.org/en/0.10.0/docs/user_guide/charts.html#id4</a></p> <p>This code will create one:</p> <pre><code>from bokeh.charts import Bar, output_file, show from bokeh.sampledata.autompg import autompg as df p = Bar(df, 'cyl', values='mpg', title="Total MPG by CYL") output_file("bar.html") show(p) </code></pre> <p>My question is if it's possible to add data labels to each individual bar of the chart? I searched online but could not find a clear answer.</p>
1
2016-09-08T23:11:31Z
39,656,707
<p>Ben Love's excellent reply to this question worked like a charm for me. I had been struggling to plot a bar chart where the data was fetched into a pandas dataframe. Below is the code block I used to plot the bar chart with <code>x_range</code> as labels.</p> <p>The labels are stored inside the pandas dataframe. The key point here is that the same x values are passed both to the <code>rect</code> method and to <code>x_range</code>.</p> <pre><code>label = df['promotion_type'].tolist() plot = Figure(x_range=label,plot_width=500, plot_height=500, toolbar_location="above",y_range=yr) plot.rect(x='promotion_type',y='valuebytwo',source=source, width =0.5, height = 'value', legend=title) </code></pre>
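<p>The snippet above relies on a few names (<code>df</code>, <code>yr</code>, <code>source</code>, <code>title</code>) defined elsewhere in my project. A self-contained sketch of the same idea could look like the following — the column names and values are made up, and details may need adapting to your Bokeh version:</p> <pre><code>from bokeh.plotting import figure, show
from bokeh.models import ColumnDataSource, Range1d

# made-up sample data standing in for the dataframe columns
labels = ['promo_a', 'promo_b', 'promo_c']
values = [10, 25, 15]

source = ColumnDataSource(data=dict(
    promotion_type=labels,
    value=values,
    valuebytwo=[v / 2.0 for v in values],   # rect is centered, so y = height / 2
))

plot = figure(x_range=labels, plot_width=500, plot_height=500,
              toolbar_location="above", y_range=Range1d(0, max(values) + 5))
plot.rect(x='promotion_type', y='valuebytwo', source=source,
          width=0.5, height='value')
show(plot)
</code></pre>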
0
2016-09-23T08:57:01Z
[ "python", "bokeh" ]
How do I get all the rows of data for a specific month,or days; over a range of many years using pandas DataFrame?
39,401,487
<p>I am attempting to find seasonal trends in the stock market. I want to have the ability to see how an asset has preformed i.e. what is the average return for appl(apple computer) in the month of May, since 1990? Also I would like to see i.e how has aapl performed between September and December since 1990. Finally I would also like the ability to see what days were the most profitable i.e. what was the average return for appl on Monday, Tuesday, Wednesday, Thursday, and Friday since 1990? </p> <p>I am using pandas dataframes and am loading my data from a csv file loaded from yahoo finance. No matter what try I can't get this to work right, any help or input would be greatly appreciated. Also I am not using apple's stock for my code but ticker CDE</p> <p>In addition, when I run my code I get only the end and the beginning of my data, how do I get it to where it will display all 6000+ rows? </p> <pre><code>from pandas_datareader import data as dreader import pandas as pd df = pd.read_csv("cde_data.csv",index_col='Date') print(df['1900-05':'2016-05']) </code></pre> <p>I am trying to get the return for the month of may but I get I range instead</p> <pre><code> Open High Low Close Volume Adj Close Date 1990-04-12 26.875 26.875 26.625 26.625 6100 250.576036 1990-04-16 26.500 26.750 26.375 26.750 500 251.752449 1990-04-17 26.750 26.875 26.750 26.875 2300 252.928863 1990-04-18 26.875 26.875 26.500 26.625 3500 250.576036 1990-04-19 26.500 26.750 26.500 26.750 700 251.752449 1990-04-20 26.750 26.875 26.750 26.875 2100 252.928863 1990-04-23 26.875 26.875 26.750 26.875 700 252.928863 1990-04-24 27.000 27.000 26.000 26.000 2400 244.693970 1990-04-25 25.250 25.250 24.875 25.125 9300 236.459076 1990-04-26 25.000 25.250 24.750 25.000 1200 235.282663 1990-04-27 25.000 25.250 25.000 25.250 1100 237.635490 1990-04-30 25.125 25.250 25.000 25.125 3500 236.459076 1990-05-01 25.375 25.500 25.250 25.250 1100 237.635490 1990-05-02 25.125 25.125 24.000 24.250 1800 228.224183 1990-05-03 25.000 25.000 24.625 24.750 9100 232.929836 1990-05-04 24.625 24.875 24.375 24.750 500 232.929836 1990-05-07 25.000 25.000 24.625 24.625 900 231.753423 1990-05-08 24.875 25.250 24.875 25.125 400 236.459076 1990-05-09 25.375 25.875 25.250 25.875 6900 243.517556 1990-05-10 26.000 26.750 26.000 26.750 5500 251.752449 1990-05-11 27.000 27.000 26.875 27.000 1800 254.105276 1990-05-14 27.000 27.250 26.750 27.000 6800 254.105276 1990-05-15 27.000 27.125 26.625 26.750 3300 251.752449 1990-05-16 26.625 26.625 25.875 25.875 2600 243.517556 1990-05-17 26.125 26.500 26.000 26.375 500 248.223210 1990-05-18 26.250 26.875 26.250 26.875 1000 252.928863 1990-05-21 27.375 27.375 26.875 27.375 2700 257.634516 1990-05-22 27.625 28.250 27.500 27.875 2000 262.340169 1990-05-23 27.375 28.500 27.125 28.000 4000 263.516583 1990-05-24 28.250 28.375 27.625 27.875 1100 262.340169 ... ... ... ... ... ... ... 
2016-03-18 5.490 5.750 5.390 5.590 9415600 5.590000 2016-03-21 5.560 5.940 5.550 5.760 4018800 5.760000 2016-03-22 5.810 5.890 5.680 5.800 3429600 5.800000 2016-03-23 5.330 5.570 5.200 5.250 4445500 5.250000 2016-03-24 5.260 5.400 5.150 5.280 2668800 5.280000 2016-03-28 5.320 5.480 5.210 5.440 2093700 5.440000 2016-03-29 5.400 5.850 5.380 5.710 3709800 5.710000 2016-03-30 5.640 5.780 5.490 5.650 2444900 5.650000 2016-03-31 5.800 5.860 5.570 5.620 2319800 5.620000 2016-04-01 5.410 5.650 5.210 5.640 2922400 5.640000 2016-04-04 5.620 5.690 5.430 5.550 2561200 5.550000 2016-04-05 5.620 5.770 5.440 5.730 2294900 5.730000 2016-04-06 5.630 5.880 5.610 5.820 2108400 5.820000 2016-04-07 5.900 6.110 5.870 5.940 2963100 5.940000 2016-04-08 5.790 6.030 5.750 6.010 3583700 6.010000 2016-04-11 6.160 6.500 6.110 6.490 5140100 6.490000 2016-04-12 6.580 6.730 6.330 6.720 4015000 6.720000 2016-04-13 6.640 6.990 6.600 6.700 3972300 6.700000 2016-04-14 6.660 6.750 6.220 6.380 4125700 6.380000 2016-04-15 6.410 6.750 6.370 6.670 2907800 6.670000 2016-04-18 6.700 6.830 6.530 6.790 2452900 6.790000 2016-04-19 7.110 7.450 6.970 7.380 6057600 7.380000 2016-04-20 7.410 7.680 6.820 7.000 6494400 7.000000 2016-04-21 7.300 7.530 6.940 7.140 4394000 7.140000 2016-04-22 7.080 7.380 6.730 6.890 3838700 6.890000 2016-04-25 6.850 7.040 6.720 6.870 2905400 6.870000 2016-04-26 6.900 7.190 6.700 7.100 2743900 7.100000 2016-04-27 7.160 7.280 6.870 7.180 3558900 7.180000 2016-04-28 7.350 7.960 7.080 7.440 6516000 7.440000 2016-04-29 7.650 8.140 7.650 8.100 6457000 8.100000 [6564 rows x 6 columns] Press any key to continue . . . </code></pre>
1
2016-09-08T23:12:15Z
39,401,591
<p>Use the standard <a href="https://docs.python.org/2/library/datetime.html" rel="nofollow"><strong><code>datetime</code></strong></a> library and write helper functions to do the conversions you want. Then make a new column by applying the helper function to the Date column.</p> <pre><code>from datetime import datetime import dateutil.parser def hour(x): return(x.hour) def dow(x): return(x.isoweekday()) def month(x): return(x.month) df = df.reset_index() df.Date = df.Date.apply(dateutil.parser.parse) df["hour"] = df.Date.apply(hour) df["dow"] = df.Date.apply(dow) df["month"] = df.Date.apply(month) </code></pre> <p>Now you can group by or slice on the columns you have created. To get all the Fridays in January slice like this:</p> <pre><code>df[(df.dow == 5) &amp; (df.month == 1)] </code></pre> <p>To print more rows you can change the settings by including the following line below your imports:</p> <pre><code>pd.options.display.max_rows = 6000 </code></pre>
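<p>Once those columns exist, the averages you describe are one <code>groupby</code> away. A rough, untested sketch — it assumes you first derive a daily return from the <code>Adj Close</code> column, and the column names are only illustrative:</p> <pre><code># daily percentage return from the adjusted close
df["ret"] = df["Adj Close"].pct_change()

# average daily return for each calendar month across all years (5 = May)
monthly_avg = df.groupby("month")["ret"].mean()

# average daily return for each weekday (isoweekday: 1 = Monday ... 5 = Friday)
weekday_avg = df.groupby("dow")["ret"].mean()

print(monthly_avg.loc[5])   # average daily return in May
print(weekday_avg)
</code></pre>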
1
2016-09-08T23:25:52Z
[ "python", "pandas", "dataframe", "time-series" ]
Python: "IndexError: string index out of range" Beginner
39,401,571
<p>I know, I know, this question has been asked plenty of times before, but I can't figure out how to fix it in this particular instance. When I subtract 2, which is what was recommended, I still get the same error inside the if statement. The code should (at least in theory) take a string <code>s</code>, measure it against the alphabet <code>order</code>, and then output the longest substring of <code>s</code> which is in alphabetical order. Thanks.</p> <pre><code>order = "abcdefghijklmnopqrstuvwxyz" s = 'abcbcdabc' match = "" for i in range(len(s)): for j in range(len(order)): if (((i + j ) - 2) &lt; len(order) and order[i] == s[j]): match += s[i] print("Longest substring in alphabetical order is: " + match) </code></pre>
0
2016-09-08T23:23:34Z
39,401,685
<p>That is because you are using index <code>j</code> of the <code>order</code> string to access the <code>s</code> string. It is possible for <code>j</code> to be greater than or equal to <code>len(s)</code>, hence the <code>IndexError</code>.</p> <p>I don't know exactly what you are trying to achieve with the code, but in any case here's what you can change to make it work: <code>match += s[i]</code> OR <code>match += order[j]</code></p>
2
2016-09-08T23:38:22Z
[ "python", "for-loop", "indexing" ]
Convert single quotes to double quotes for Dictionary key/value pair
39,401,579
<p>I have a dictionary with a key/value pair in single quotes as follows:</p> <pre><code>filename = 'sub-310621_task-EMOTION_acq-LR_bold.nii.gz' intended_for ={"IntendedFor", filename} </code></pre> <p>As I want to write this dictionary to a JSON file, I need the filename to end up between double quotes, e.g.: <code>"sub-310621_task-EMOTION_acq-LR_bold.nii.gz"</code></p> <p>So the output should look like:</p> <pre><code>intended_for ={"IntendedFor", "sub-310621_task-EMOTION_acq-LR_bold.nii.gz"} </code></pre> <p>This output will be written to a test.json file which should look like:</p> <pre><code>{ "IntendedFor": "sub-310621_task-EMOTION_acq-LR_bold.nii.gz" } </code></pre> <p>How can I do this in Python?</p>
0
2016-09-08T23:24:27Z
39,401,748
<p>The apostrophes or quotation marks on the ends of a string literal are not included in the string - <code>'asdf'</code> and <code>"asdf"</code> represent identical strings. Neither the key nor the value in your dict actually include the character <code>'</code> or <code>"</code>, so you don't need to perform a conversion.</p> <p>When you dump the dict to JSON, your JSON dumper will automatically surround strings with <code>"</code> characters, among other necessary escaping. For example, if you're using the <code>json</code> module, you can just do</p> <pre><code>json_string = json.dumps(intended_for) </code></pre> <p>to produce a correctly-formatted JSON string. (If this does not work, you have some other bug you're not showing us.)</p>
1
2016-09-08T23:47:43Z
[ "python", "json", "dictionary" ]
Convert single quotes to double quotes for Dictionary key/value pair
39,401,579
<p>I have a dictionary with a key/value pair in single quotes as follows:</p> <pre><code>filename = 'sub-310621_task-EMOTION_acq-LR_bold.nii.gz' intended_for ={"IntendedFor", filename} </code></pre> <p>As I want to write this dictionary to a JSON file, I need the filename to end up between double quotes, e.g.: <code>"sub-310621_task-EMOTION_acq-LR_bold.nii.gz"</code></p> <p>So the output should look like:</p> <pre><code>intended_for ={"IntendedFor", "sub-310621_task-EMOTION_acq-LR_bold.nii.gz"} </code></pre> <p>This output will be written to a test.json file which should look like:</p> <pre><code>{ "IntendedFor": "sub-310621_task-EMOTION_acq-LR_bold.nii.gz" } </code></pre> <p>How can I do this in Python?</p>
0
2016-09-08T23:24:27Z
39,401,959
<p>Rather than trying to build the JSON string yourself you should use the <a href="https://docs.python.org/3.5/library/json.html" rel="nofollow">json</a> module to do the encoding.</p> <p>The <code>json.dumps()</code> method takes an object such as a dictionary with key value pairs and converts it into a JSON compliant string which can then be written to a file.</p> <p>Instead of a dictionary, you created a set by using a comma <code>,</code> instead of a colon <code>:</code></p> <pre><code>intended_for = {"IntendedFor", filename} </code></pre> <p>The correct code for your input would be</p> <pre><code>filename = 'sub-310621_task-EMOTION_acq-LR_bold.nii.gz' intended_for ={"IntendedFor": filename} </code></pre> <p>Then you can encode</p> <pre><code>import json json_string = json.dumps(intended_for) </code></pre>
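<p>If the end goal is specifically the test.json file described in the question, the <code>json</code> module can handle the file writing as well — a minimal sketch (the file path is just a placeholder):</p> <pre><code>import json

filename = 'sub-310621_task-EMOTION_acq-LR_bold.nii.gz'
intended_for = {"IntendedFor": filename}

with open('test.json', 'w') as f:
    json.dump(intended_for, f, indent=4)
</code></pre>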
0
2016-09-09T00:15:41Z
[ "python", "json", "dictionary" ]
I don't understand this sentence
39,401,603
<p>I'm new to learning Python and I'm making a lot of questions these days. I tried to make a Bulls and Cows game, but I failed and then I searched on the internet for the code. I found this sentence and I don't know what it does:</p> <pre><code>while True: guess = raw_input('\nNext guess [%i]: ' % guesses).strip() if len(guess) == size and \ all(char in digits for char in guess) \ and len(set(guess)) == size: break print "Problem, try again. You need to enter %i unique digits from 1 to 9" % size </code></pre> <p>I don't understand the <code>\</code>, what exactly evaluates the Boolean and what does <code>char</code> mean in <code>all()</code> also there is one more <code>\</code>. I'm a bit confused.</p> <p>I will leave the rest of the code here:</p> <pre><code>import random digits = '123456789' size = 4 chosen = ''.join(random.sample(digits,size)) #print chosen # Debug print '''I have chosen a number from %s unique digits from 1 to 9 arranged in a random order. You need to input a %i digit, unique digit number as a guess at what I have chosen''' % (size, size) guesses = 0 while True: guesses += 1 while True: # get a good guess guess = raw_input('\nNext guess [%i]: ' % guesses).strip() if len(guess) == size and \ all(char in digits for char in guess) \ and len(set(guess)) == size: break print "Problem, try again. You need to enter %i unique digits from 1 to 9" % size if guess == chosen: print '\nCongratulations you guessed correctly in',guesses,'attempts' break bulls = cows = 0 for i in range(size): if guess[i] == chosen[i]: bulls += 1 elif guess[i] in chosen: cows += 1 print ' %i Bulls\n %i Cows' % (bulls, cows) </code></pre>
0
2016-09-08T23:27:38Z
39,401,627
<p>Typically, a statement in Python needs to complete itself on one line. If you would instead like to break an expression across several lines (the most obvious reason being readability), you can insert a <code>\</code> at the end of the line.</p> <p>This tells Python to treat the next line as if it were part of the existing line.</p>
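<p>A tiny illustration (a made-up condition, not from your program):</p> <pre><code>total = 10
size = 4

# these two if-statements are equivalent; the backslash only joins the lines
if total &gt; 5 and \
   size == 4:
    print("matched")

# wrapping the condition in parentheses is a common alternative, because an
# open bracket also lets the expression continue on the following line
if (total &gt; 5 and
        size == 4):
    print("matched")
</code></pre>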
1
2016-09-08T23:30:40Z
[ "python" ]
Python re.search function regex issue
39,401,604
<p>I have the following code:</p> <pre><code>#!/usr/bin/python import time, uuid, hmac, hashlib, base64, json import urllib3 import certifi import datetime import requests import re from datetime import datetime http = urllib3.PoolManager( cert_reqs='CERT_REQUIRED', # Force certificate check. ca_certs=certifi.where(), # Path to the Certifi bundle. ) #Get the status response from pritunl api BASE_URL = 'https://www.vpn.trimble.cloud:443' API_TOKEN = 'gvwrfQZQPryTbX3l03AQMwTyaE0aFywE' API_SECRET = 'B0vZp5dDyOrshW1pmFFjAnIUyeGtFy9y' LOG_PATH = '/var/log/developer_vpn/' def auth_request(method, path, headers=None, data=None): auth_timestamp = str(int(time.time())) auth_nonce = uuid.uuid4().hex auth_string = '&amp;'.join([API_TOKEN, auth_timestamp, auth_nonce, method.upper(), path] + ([data] if data else [])) auth_signature = base64.b64encode(hmac.new( API_SECRET, auth_string, hashlib.sha256).digest()) auth_headers = { 'Auth-Token': API_TOKEN, 'Auth-Timestamp': auth_timestamp, 'Auth-Nonce': auth_nonce, 'Auth-Signature': auth_signature, } if headers: auth_headers.update(headers) return http.request(method, BASE_URL + path, headers=auth_headers, body=data) response1 = auth_request('GET', '/server', ) if response1.status == 200: pritunlServResponse = (json.loads(response1.data)) #print pritunlServResponse #print response1.data Name = [y['name'] for y in pritunlServResponse] Server_id = [x['id'] for x in pritunlServResponse] for srv_name, srv_id in zip(Name, Server_id): response2 = auth_request('GET', '/server/' + srv_id + '/output', ) pritunlServResponse2 = (json.loads(response2.data)) py_pritunlServResponse2 = pritunlServResponse2['output'] print("value of srv_id: ", srv_id, "\n") print("value of srv_name: ", srv_name, "\n") logfile = open(LOG_PATH + srv_name +'_vpn_out.log', 'w') for log in py_pritunlServResponse2: if re.search(r'(?!52\.39\.62\.8)', log): logfile.write("%s\n" % log) logfile.close() else: raise SystemExit </code></pre> <p>This code visits a website using authentication (the address has been redacted), grabs some text formatted in JSON, and parses two values from the output: "srv_name" and "srv_id". This code then uses the "srv_id" to construct additional HTTP requests to get log files from the server. It then grabs the log files - one for each "srv_id" and names them with the values obtained from "srv_name" and saves them on the local system.</p> <p>I want to do some additional grep-style processing before the files are written to the local system. Specifically I'd like to exclude any text exactly containing "52.39.62.8" from being written. When I run the code above, it looks like the regex is not being processed as I still see "52.39.62.8" in my output files.</p>
0
2016-09-08T23:27:40Z
39,401,682
<pre><code>re.search(r'(?!52\.39\.62\.8)', log) </code></pre> <p>You're matching <em>any</em> empty string that is not followed by the ip address - every string will match, as this will match the end of any string.</p> <p>reverse your logic and output the line to the log only if <code>re.search</code> for the ip address comes back as <code>None</code>.</p> <pre><code>if re.search(r'(?&lt;!\d)52\.39\.62\.8(?!\d)', log) is None: logfile.write("%s\n" % log) </code></pre> <p>note that this also includes it's own negative look-behind and look-ahead assertions to ensure no digits precede or follow the ip address.</p>
0
2016-09-08T23:38:03Z
[ "python", "regex" ]
Python re.search function regex issue
39,401,604
<p>I have the following code:</p> <pre><code>#!/usr/bin/python import time, uuid, hmac, hashlib, base64, json import urllib3 import certifi import datetime import requests import re from datetime import datetime http = urllib3.PoolManager( cert_reqs='CERT_REQUIRED', # Force certificate check. ca_certs=certifi.where(), # Path to the Certifi bundle. ) #Get the status response from pritunl api BASE_URL = 'https://www.vpn.trimble.cloud:443' API_TOKEN = 'gvwrfQZQPryTbX3l03AQMwTyaE0aFywE' API_SECRET = 'B0vZp5dDyOrshW1pmFFjAnIUyeGtFy9y' LOG_PATH = '/var/log/developer_vpn/' def auth_request(method, path, headers=None, data=None): auth_timestamp = str(int(time.time())) auth_nonce = uuid.uuid4().hex auth_string = '&amp;'.join([API_TOKEN, auth_timestamp, auth_nonce, method.upper(), path] + ([data] if data else [])) auth_signature = base64.b64encode(hmac.new( API_SECRET, auth_string, hashlib.sha256).digest()) auth_headers = { 'Auth-Token': API_TOKEN, 'Auth-Timestamp': auth_timestamp, 'Auth-Nonce': auth_nonce, 'Auth-Signature': auth_signature, } if headers: auth_headers.update(headers) return http.request(method, BASE_URL + path, headers=auth_headers, body=data) response1 = auth_request('GET', '/server', ) if response1.status == 200: pritunlServResponse = (json.loads(response1.data)) #print pritunlServResponse #print response1.data Name = [y['name'] for y in pritunlServResponse] Server_id = [x['id'] for x in pritunlServResponse] for srv_name, srv_id in zip(Name, Server_id): response2 = auth_request('GET', '/server/' + srv_id + '/output', ) pritunlServResponse2 = (json.loads(response2.data)) py_pritunlServResponse2 = pritunlServResponse2['output'] print("value of srv_id: ", srv_id, "\n") print("value of srv_name: ", srv_name, "\n") logfile = open(LOG_PATH + srv_name +'_vpn_out.log', 'w') for log in py_pritunlServResponse2: if re.search(r'(?!52\.39\.62\.8)', log): logfile.write("%s\n" % log) logfile.close() else: raise SystemExit </code></pre> <p>This code visits a website using authentication (the address has been redacted), grabs some text formatted in JSON, and parses two values from the output: "srv_name" and "srv_id". This code then uses the "srv_id" to construct additional HTTP requests to get log files from the server. It then grabs the log files - one for each "srv_id" and names them with the values obtained from "srv_name" and saves them on the local system.</p> <p>I want to do some additional grep-style processing before the files are written to the local system. Specifically I'd like to exclude any text exactly containing "52.39.62.8" from being written. When I run the code above, it looks like the regex is not being processed as I still see "52.39.62.8" in my output files.</p>
0
2016-09-08T23:27:40Z
39,404,008
<p>If the IP address is always flanked by specific characters, e.g.: <code>(52.39.62.8):</code>, you can use <code>in</code> for exact contains:</p> <pre><code>if '(52.39.62.8):' not in log: logfile.write(log + '\n') </code></pre>
1
2016-09-09T04:44:35Z
[ "python", "regex" ]
Prepare my bigdata with Spark via Python
39,401,690
<p>My 100m in size, quantized data:</p> <pre><code>(1424411938', [3885, 7898]) (3333333333', [3885, 7898]) </code></pre> <p>Desired result:</p> <pre><code>(3885, [3333333333, 1424411938]) (7898, [3333333333, 1424411938]) </code></pre> <p>So what I want, is to transform the data so that I group 3885 (for example) with all the <code>data[0]</code> that have it). Here is what I did in <a href="/questions/tagged/python" class="post-tag" title="show questions tagged &#39;python&#39;" rel="tag">python</a>:</p> <pre><code>def prepare(data): result = [] for point_id, cluster in data: for index, c in enumerate(cluster): found = 0 for res in result: if c == res[0]: found = 1 if(found == 0): result.append((c, [])) for res in result: if c == res[0]: res[1].append(point_id) return result </code></pre> <p>but when I <code>mapPartitions()</code>'ed <code>data</code> RDD with <code>prepare()</code>, it seem to do what I want only in the current partition, thus return a bigger result than the desired.</p> <p>For example, if the 1st record in the start was in the 1st partition and the 2nd in the 2nd, then I would get as a result:</p> <pre><code>(3885, [3333333333]) (7898, [3333333333]) (3885, [1424411938]) (7898, [1424411938]) </code></pre> <p>How to modify my <code>prepare()</code> to get the desired effect? Alternatively, how to process the result that <code>prepare()</code> produces, so that I can get the desired result?</p> <hr> <p>As you may already have noticed from the code, I do not care about speed at all.</p> <p>Here is a way to create the data:</p> <pre><code>data = [] from random import randint for i in xrange(0, 10): data.append((randint(0, 100000000), (randint(0, 16000), randint(0, 16000)))) data = sc.parallelize(data) </code></pre>
0
2016-09-08T23:39:38Z
39,401,827
<p>You can use a bunch of basic pyspark transformations to achieve this.</p> <pre><code>&gt;&gt;&gt; rdd = sc.parallelize([(1424411938, [3885, 7898]),(3333333333, [3885, 7898])]) &gt;&gt;&gt; r = rdd.flatMap(lambda x: ((a,x[0]) for a in x[1])) </code></pre> <p>We used <code>flatMap</code> to produce a key/value pair for every item in <code>x[1]</code>, changing each record to the format <code>(a, x[0])</code>, where <code>a</code> is each item of <code>x[1]</code>. To understand <code>flatMap</code> better you can look at the documentation.</p> <pre><code>&gt;&gt;&gt; r2 = r.groupByKey().map(lambda x: (x[0],tuple(x[1]))) </code></pre> <p>We just grouped all key/value pairs by their keys and used the <code>tuple</code> function to convert the iterable to a tuple.</p> <pre><code>&gt;&gt;&gt; r2.collect() [(3885, (1424411938, 3333333333)), (7898, (1424411938, 3333333333))] </code></pre> <p>As you said, you can use [:150] to take the first 150 elements, so I guess this would be the proper usage:</p> <p><code>r2 = r.groupByKey().map(lambda x: (x[0],tuple(x[1])[:150]))</code></p> <p>I tried to be as explanatory as possible. I hope this helps.</p>
1
2016-09-08T23:56:51Z
[ "python", "algorithm", "apache-spark", "bigdata", "distributed-computing" ]
Python list or pandas dataframe arbitrary indexing and slicing
39,401,709
<p>I have used both R and Python extensively in my work, and at times I get the syntax between them confused.</p> <p>In R, if I wanted to create a model from only <strong><em>some</em></strong> features of my data set, I can do something like this:</p> <pre><code>subset = df[1:1000, c(1,5,14:18,24)] </code></pre> <p>This would take the first 1000 rows (yes, R starts on index 1), and it would take the 1st, 5th, 14th <strong><em>through</em></strong> 18th, and 24th columns.</p> <p>I have tried to do any combination of <code>slice</code>, <code>range</code>, and similar sorts of functions, and have not been able to duplicate this sort of flexibility. In the end, I just enumerated all of the values.</p> <p>How can this be done in Python?</p> <blockquote> <p>Pick an arbitrary subset of elements from a list, some of which are selected individually (as in the commas shown above) and some selected sequentially (as in the colons shown above)?</p> </blockquote>
0
2016-09-08T23:42:23Z
39,401,764
<p>You can use <code>iloc</code> for integer indexing in pandas:</p> <pre><code>df.iloc[0:1000, [0, 4] + range(13,18) + [23]] </code></pre> <p>As commented by @root, in Python 3 you need to explicitly convert <code>range()</code> to a list: <code>df.iloc[0:1000, [0, 4] + list(range(13,18)) + [23]]</code></p>
2
2016-09-08T23:49:28Z
[ "python", "pandas", "numpy", "slice" ]
Python list or pandas dataframe arbitrary indexing and slicing
39,401,709
<p>I have used both R and Python extensively in my work, and at times I get the syntax between them confused.</p> <p>In R, if I wanted to create a model from only <strong><em>some</em></strong> features of my data set, I can do something like this:</p> <pre><code>subset = df[1:1000, c(1,5,14:18,24)] </code></pre> <p>This would take the first 1000 rows (yes, R starts on index 1), and it would take the 1st, 5th, 14th <strong><em>through</em></strong> 18th, and 24th columns.</p> <p>I have tried to do any combination of <code>slice</code>, <code>range</code>, and similar sorts of functions, and have not been able to duplicate this sort of flexibility. In the end, I just enumerated all of the values.</p> <p>How can this be done in Python?</p> <blockquote> <p>Pick an arbitrary subset of elements from a list, some of which are selected individually (as in the commas shown above) and some selected sequentially (as in the colons shown above)?</p> </blockquote>
0
2016-09-08T23:42:23Z
39,402,012
<p>Try this. The first set of square brackets filters the columns; the second set slices the rows.</p> <pre><code>df[[0,4] + range(13,18) + [23]][:1000] </code></pre>
1
2016-09-09T00:22:20Z
[ "python", "pandas", "numpy", "slice" ]
Python list or pandas dataframe arbitrary indexing and slicing
39,401,709
<p>I have used both R and Python extensively in my work, and at times I get the syntax between them confused.</p> <p>In R, if I wanted to create a model from only <strong><em>some</em></strong> features of my data set, I can do something like this:</p> <pre><code>subset = df[1:1000, c(1,5,14:18,24)] </code></pre> <p>This would take the first 1000 rows (yes, R starts on index 1), and it would take the 1st, 5th, 14th <strong><em>through</em></strong> 18th, and 24th columns.</p> <p>I have tried to do any combination of <code>slice</code>, <code>range</code>, and similar sorts of functions, and have not been able to duplicate this sort of flexibility. In the end, I just enumerated all of the values.</p> <p>How can this be done in Python?</p> <blockquote> <p>Pick an arbitrary subset of elements from a list, some of which are selected individually (as in the commas shown above) and some selected sequentially (as in the colons shown above)?</p> </blockquote>
0
2016-09-08T23:42:23Z
39,402,051
<p>In a file of <code>index_tricks</code>, <code>numpy</code> defines a class instance that converts scalars and slices into one concatenated array, using the <code>r_</code> method:</p> <pre><code>In [560]: np.r_[1,5,14:18,24] Out[560]: array([ 1, 5, 14, 15, 16, 17, 24]) </code></pre> <p>It's an instance with a <code>__getitem__</code> method, so it uses the indexing syntax. It expands <code>14:18</code> into <code>np.arange(14,18)</code>. It can also expand values with <code>linspace</code>.</p> <p>So I think you'd rewrite</p> <pre><code>subset = df[1:1000, c(1,5,14:18,24)] </code></pre> <p>as</p> <pre><code>df.iloc[:1000, np.r_[0,4,13:18,23]] </code></pre> <p>(Note that the slice endpoint is exclusive in Python, so the 0-based equivalent of R's inclusive <code>14:18</code> is <code>13:18</code>, i.e. columns 13 through 17.)</p>
3
2016-09-09T00:28:57Z
[ "python", "pandas", "numpy", "slice" ]
plot Latitude longitude points from dataframe on folium map - iPython
39,401,729
<p>I have a dataframe with lat/lon coordinates </p> <pre><code>latlon (51.249443914705175, -0.13878830247011467) (51.249443914705175, -0.13878830247011467) (51.249768239976866, -2.8610415615063034) ... </code></pre> <p>I would like to plot these on to a Folium map but I'm not sure of how to iterate through each of the rows.</p> <p>any help would be appreciated, Thanks in advance!</p>
0
2016-09-08T23:45:10Z
39,401,823
<p>This can solve your issue</p> <pre><code>import folium mapit = None latlon = [ (51.249443914705175, -0.13878830247011467), (51.249443914705175, -0.13878830247011467), (51.249768239976866, -2.8610415615063034)] for coord in latlon: mapit = folium.Map( location=[ coord[0], coord[1] ] ) mapit.save( 'map.html') </code></pre> <h3>Edit (using marker)</h3> <pre><code>import folium latlon = [ (51.249443914705175, -0.13878830247011467), (51.249443914705175, -0.13878830247011467), (51.249768239976866, -2.8610415615063034)] mapit = folium.Map( location=[52.667989, -1.464582], zoom_start=6 ) for coord in latlon: folium.Marker( location=[ coord[0], coord[1] ], fill_color='#43d9de', radius=8 ).add_to( mapit ) mapit.save( 'map.html') </code></pre> <p>It'd be great if you use this reference: <a href="https://github.com/python-visualization/folium" rel="nofollow">https://github.com/python-visualization/folium</a></p>
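<p>Since the question starts from a dataframe, a rough sketch of doing the same thing directly from pandas — assuming the <code>latlon</code> column holds <code>(lat, lon)</code> tuples as shown — could be:</p> <pre><code>import folium

m = folium.Map(location=[52.667989, -1.464582], zoom_start=6)

for _, row in df.iterrows():
    lat, lon = row['latlon']              # unpack the (lat, lon) tuple
    folium.Marker(location=[lat, lon]).add_to(m)

m.save('map.html')
</code></pre>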
0
2016-09-08T23:56:11Z
[ "python", "dataframe", "ipython", "folium" ]
Python is not printing my input in while loop
39,401,791
<p>I'm having trouble with my python code as it is not printing anything after I input something into the compiler. </p> <p>I'm trying to write code that uses a while loop and allows the user to enter an album's artist and title. Then I should be able to call <code>make_album</code> with the user's input and print the dictionary that's created. </p> <p>However, after I enter the artist and title, nothing prints.</p> <p>Here is my python code:</p> <pre><code>def make_album(artist_name, album_title, num_tracks = ''): """Return artist and album title name.""" CD1 = {'sonic': artist_name, 'his world': album_title} CD2 = {'shadow': artist_name, 'all hail shadow': album_title} CD3 = {'silver': artist_name, 'dream of an absolution': album_title} if num_tracks: CD = artist_name + ' ' + album_title + ' ' + num_tracks else: CD = artist_name + ' ' + album_title return CD.title() while True: print("\nEnter album's artist and title: ") print("\nEnter 'q' at anytime to quit") a_name = input("Artist name: ") if a_name == 'q': break a_title = input("Album title: ") if a_title == 'q': break make_album(a_name, a_title) formatted_album = make_album(a_name, a_title) print(formatted_album) </code></pre> <p>Does anyone have any idea as to what I may be doing wrong? Any feedback would be greatly appreciated. </p> <p>Thank you for your time.</p>
0
2016-09-08T23:52:21Z
39,401,900
<p>I suggest printing out the type of your "a_name" and seeing what it says.</p> <p>My guess is that your usage of <code>input</code> does not yield what you expect. Try <code>raw_input</code> instead.</p> <p>The python course has the following to say on <a href="http://www.python-course.eu/input.php" rel="nofollow"><code>input</code></a>:</p> <blockquote> <p>The input of the user will be interpreted. If the user e.g. puts in an integer value, the input function returns this integer value. If the user on the other hand inputs a list, the function will return a list.</p> </blockquote> <p>That means that if you want a string, e.g. <code>'q'</code>, you have to enter, literally, <code>'q'</code>.</p> <p>You may want your input to be taken as a string, no matter what you type, and leave interpretation up to your implementation. In that case, call <code>raw_input</code> instead.</p>
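<p>Concretely, on Python 2 that means changing only the input calls, for example:</p> <pre><code>a_name = raw_input("Artist name: ")
if a_name == 'q':
    break
a_title = raw_input("Album title: ")
</code></pre> <p>(This applies to Python 2 only; on Python 3 there is no <code>raw_input</code> and <code>input</code> already returns a string.)</p>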
0
2016-09-09T00:06:04Z
[ "python" ]
Python is not printing my input in while loop
39,401,791
<p>I'm having trouble with my python code as it is not printing anything after I input something into the compiler. </p> <p>I'm trying to write code that uses a while loop and allows the user to enter an album's artist and title. Then I should be able to call <code>make_album</code> with the user's input and print the dictionary that's created. </p> <p>However, after I enter the artist and title, nothing prints.</p> <p>Here is my python code:</p> <pre><code>def make_album(artist_name, album_title, num_tracks = ''): """Return artist and album title name.""" CD1 = {'sonic': artist_name, 'his world': album_title} CD2 = {'shadow': artist_name, 'all hail shadow': album_title} CD3 = {'silver': artist_name, 'dream of an absolution': album_title} if num_tracks: CD = artist_name + ' ' + album_title + ' ' + num_tracks else: CD = artist_name + ' ' + album_title return CD.title() while True: print("\nEnter album's artist and title: ") print("\nEnter 'q' at anytime to quit") a_name = input("Artist name: ") if a_name == 'q': break a_title = input("Album title: ") if a_title == 'q': break make_album(a_name, a_title) formatted_album = make_album(a_name, a_title) print(formatted_album) </code></pre> <p>Does anyone have any idea as to what I may be doing wrong? Any feedback would be greatly appreciated. </p> <p>Thank you for your time.</p>
0
2016-09-08T23:52:21Z
39,402,010
<p>If you call your <code>make_album</code> method at the bottom of your code, it will only show your three CD variables (or constants?) once, and only if you pass it something to print.</p> <p>You're returning <code>CD.title()</code> — is that really what you want to show? If you want to see your three CD dictionaries, try replacing it with <code>return CD1, CD2, CD3</code>; that way you will at least see them when you press 'q' to exit the prompt. Even then it will only happen once, because the call to <code>make_album</code>, the <code>formatted_album</code> assignment and the <code>print</code> are not inside your <code>while</code> block. Give them the necessary indentation to be inside of it, and they will run every time you enter a new artist name and/or album title.</p> <p>And what about your CD names — are they constants or variables? If they are dictionaries and you're adding <code>num_tracks</code> values to them, can they still be considered constants?</p> <pre><code>def make_album(artist_name, album_title, num_tracks = ''): """Return artist and album title name.""" CD1 = {'sonic': artist_name, 'his world': album_title} CD2 = {'shadow': artist_name, 'all hail shadow': album_title} CD3 = {'silver': artist_name, 'dream of an absolution': album_title} if num_tracks: CD = artist_name + ' ' + album_title + ' ' + num_tracks else: CD = artist_name + ' ' + album_title return CD1, CD2, CD3 while True: print("\nEnter album's artist and title: ") print("\nEnter 'q' at anytime to quit") a_name = input("Artist name: ") if a_name == 'q': break a_title = input("Album title: ") if a_title == 'q': break make_album(a_name, a_title) formatted_album = make_album(a_name, a_title) print(formatted_album) </code></pre> <p>I hope to have helped you, cheers!</p>
0
2016-09-09T00:21:51Z
[ "python" ]
Pandas group_by date and resample
39,401,821
<p>I have some data frame that looks like this:</p> <pre><code> A B C date 0 J Y 2 2013-02-01 14:21:02.070030 1 X X 0 2013-02-01 15:49:33.110849 2 Y D 9 2013-02-01 06:47:19.369514 3 Y C 17 2013-02-01 08:56:11.751781 4 3 J 21 2013-02-01 14:19:12.017232 </code></pre> <p>I'd like to group by date and then count, but omit the information about the hours, minutes, seconds, etc.</p> <p>It seems like something like this works:</p> <pre><code>df.set_index('date').resample('D').count() </code></pre> <p>Two questions:</p> <ol> <li>Why does that work? Is that the right way?</li> <li>Why doesn't something like <code>df.group_by('date').resample('D').count()</code> work?</li> </ol>
2
2016-09-08T23:55:54Z
39,402,120
<p><code>resample</code> is in some sense just a special case of groupby - rather than grouping on distinct values, which is what <code>groupby('date')</code> would do, it groups on a time-based transformation of the index, which is why you need to set the index. Alternatively, you could do:</p> <pre><code>df.groupby(pd.Grouper(key='date', freq='D')).count() </code></pre> <p>In the upcoming version <code>0.19.0</code> you'll be able to write the above like this:</p> <pre><code>df.resample('D', on='date').count() </code></pre>
4
2016-09-09T00:36:04Z
[ "python", "pandas" ]
How can I show realtime text analysis of a Django form Textarea (models.TextField)?
39,401,867
<p>OK, so I have a model class <code>Recipe</code> and a form class <code>AddRecipeForm</code> that is based on <code>Recipe</code> (<code>via forms.ModelForm</code>).</p> <p>The form shows up in my html template and works, but I want to implement a special type of form validation. Basically, I want to do some text processing on some big text input fields as a user types his / her ingredients / directions.</p> <p>I have code that figures out where all the amounts (numbers) in the ingredient text are, and identifies and classifies all the units (e.g. 'lbs.', 'gr', etc...). --> I'd love to be able to basically have a text-box right by the side of the form in <code>add_recipe.html</code> that performs this text processing (via highlighting, bold, etc) in real time.</p> <p>However, I'm really not sure how to do it. I've been reading about doing real-time form validation via <code>AJAX</code>, <code>jquery</code>, 'django channels', or maybe just <code>django</code>'s <code>ModelForm.clean()</code> method, but I'm not sure which would be best or where to start. Any pointers / suggestions would be awesome!</p> <p>Here's my (simplified) code:</p> <h2>models.py</h2> <pre><code>class Recipe(models.Model): recipe_name = models.CharField(max_length=128, default='') description = models.CharField(max_length=1024) ingredients_text = models.TextField(max_length=2048*2) instructions_text = models.TextField(max_length=2048*4) </code></pre> <h2>forms.py</h2> <pre><code>class AddRecipeForm(forms.ModelForm): class Meta: model = Recipe fields = '__all__' </code></pre> <h2>views.py</h2> <pre><code>def add_recipe(request): if request.method == "POST": add_recipe_form = AddRecipeForm(request.POST) if add_recipe_form.is_valid(): recipe = add_recipe_form.save() recipe.save() return HttpResponseRedirect('/recipes/detail/{}/'.format(recipe.id)) else: return HttpResponse('Invalid values :( Try again?') else: add_recipe_form = AddRecipeForm() context = { 'add_recipe_form': add_recipe_form, } return render(request, 'home/add_recipe.html', context) </code></pre> <h2>home/templates/home/add_recipe.py (html template)</h2> <p>sidenote: I'm using the <a href="http://materializecss.com/" rel="nofollow" title="materialize">materialize</a> framework, so I'm doing forms with a little helper so the CSS doesn't conflict with django - I can stop using it if need be though...</p> <pre><code>{% extends "home/index.html" %} {% block content %} {% load materialize %} &lt;h2&gt; Add a Recipe &lt;/h2&gt; &lt;form method="post" enctype="multipart/form-data"&gt; {% csrf_token %} &lt;div class="row"&gt; &lt;p&gt;{{add_recipe_form.ingredients_text|as_material:"s12 m6"}}&lt;/p&gt; &lt;div class="col s12 m6 textbox"&gt; TODO (realtime validation textbox) &lt;/div&gt; &lt;/div&gt; &lt;div class="row"&gt; &lt;p&gt;{{add_recipe_form.instructions_text|as_material:"s12 m6"}}&lt;/p&gt; &lt;div class="col s12 m6 textbox"&gt; TODO (realtime validation textbox) &lt;/div&gt; &lt;/div&gt; &lt;input type="submit" value="Submit" class="waves-effect waves-light btn"/&gt; &lt;/form&gt; {% endblock %} </code></pre>
0
2016-09-09T00:01:45Z
39,431,749
<p>Don't get Django's Form class involved and don't worry about django-channels. You just need to setup a simple API that takes a JSON of whatever is in your text field and returns a result of processed text. JavaScript would be responsible for calling the API via AJAX and doing something with the response.</p> <p>It's not 100% clear what you're doing with the processed text but if you're replacing the words "two and a half" with "2 1/2", I could see the server responding with a key value pair of {"two and a half": "2 1/2"} and allowing JavaScript to find and replace that text.</p>
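<p>To make that concrete, a rough, untested sketch of such an endpoint might look like this — the view name and payload keys are made up for illustration, and you would still need to add URL routing and CSRF handling:</p> <pre><code># views.py -- hypothetical endpoint, not part of the existing code
import json

from django.http import JsonResponse
from django.views.decorators.http import require_POST


@require_POST
def analyze_ingredients(request):
    text = json.loads(request.body.decode('utf-8')).get('text', '')
    # run the amount/unit detection on the text here; this dict is a placeholder result
    replacements = {'two and a half lbs.': '2 1/2 lbs.'}
    return JsonResponse({'replacements': replacements})
</code></pre> <p>On the page, a keyup handler on the textarea would POST the current text to that URL (debounced, so the server is not hit on every keystroke) and update the preview box from the JSON response.</p>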
1
2016-09-11T00:20:25Z
[ "jquery", "python", "django", "forms" ]
Maximize quadratic objective with linear constraints in R: Rsolnp or Auglag
39,401,978
<p>I am trying to find a solution to the following optimization problem using either auglag or Rsolnp. </p> <pre><code>Max t(w1 - w2) * Kf * Sf * t(Kf) * (w1 - w2) subject to Kc * w1 = Kc * w2 and sum(w1) = 1 and sum(w2) = 1 and w1,w2 &gt;= 0 Sc and Sf are variance covariance matrices at the coarse and fine level respectively. Kc and Kf are exposure matrices as the coarse and fine level respectively. Nc and Nf are nodes at which exposure nodes at the coarse and fine level. </code></pre> <p>This is effectively trying to find the wts of two portfolios w1 and w2 that would maximize the TEV at finer exposure level, subject to sum of wts = 1 and all wts > 0. There is another equality constraint too (This effectively means exposure at the coarse level are identical for the two portfolios). Rsolnp fails to maximize and gives back a solution where the objective function is 0 and auglag completely blows up and does not meet constraints with a bunch of warnings as well. </p> <p>Can anyone please help me understand where am I going wrong?</p> <pre><code> seqFineNodes &lt;- c(1, 2, 3, 4, 5, 6) Nc &lt;- c(2, 3, 5) Kc &lt;- matrix(c(0.2481316799436,0.495478766935844,0,0,0,0,0,0,0.743360061619584,0.497321712603124,0,0,0,0,0,0.497321712603124,1.23913608908603,1.48240730986596), nrow=length(seqFineNodes), ncol=length(Nc)) dimnames(Kc) &lt;- list(as.character(seqFineNodes), as.character(Nc)) Sc &lt;- matrix(c(619.806079280659,627.832850585004,549.805085990891,627.832850585004,668.726833059322,624.524848194842,549.805085990891,624.524848194842,696.498483673357), nrow=length(Nc), ncol=length(Nc)) dimnames(Sc) &lt;- list(as.character(Nc), as.character(Nc)) Nf &lt;- c(2, 3, 4, 5) Kf &lt;- matrix(c(0.2481316799436,0.495478766935844,0,0,0,0,0,0,0.743360061619584,0,0,0,0,0,0,0.994643425206249,0,0,0,0,0,0,1.23913608908603,1.48240730986596), nrow=length(seqFineNodes), ncol=length(Nf)) dimnames(Kf) &lt;- list(as.character(seqFineNodes), as.character(Nf)) Sf &lt;- matrix(c(619.806079280659,627.832850585004,602.504944834256,549.805085990891,627.832850585004,668.726833059322,666.196728425214,624.524848194842,602.504944834256,666.196728425214,696.688027074344,681.064062606848,549.805085990891,624.524848194842,681.064062606848,696.498483673357), nrow=length(Nf), ncol=length(Nf)) dimnames(Sf) &lt;- list(as.character(Nf), as.character(Nf)) KRD_fine &lt;- Kf KRD_coarse &lt;- Kc VC_fine &lt;- Sf VC_coarse &lt;- Sc countw &lt;- length(seqFineNodes) t1 &lt;- diag(x = 1, nrow = countw, ncol = countw) t2 &lt;- diag(x = -1, nrow = countw, ncol = countw) tr &lt;- cbind(t1,t2) D_fine &lt;- t(tr) %*% KRD_fine %*% VC_fine %*% t(KRD_fine) %*% tr #round(eigen(Dmat)$values, 4) D_fine &lt;- as.matrix(nearPD(D_fine)$mat) #round(eigen(Dmat)$values, 4) eq_coarse_krd_A &lt;- t(KRD_coarse) %*% tr eq_coarse_krd_b &lt;- rep(0, nrow(VC_coarse)) # Equality constraints eq_A1 &lt;- c(rep(1, countw), rep(0,countw)) eq_A2 &lt;- c(rep(0, countw), rep(1,countw)) eq_b &lt;- c(1 , 1) # Constraint wts greater than zero ineq_A &lt;- diag(x = 1, nrow = 2 * countw, ncol = 2 * countw) ineq_b &lt;- rep(0, 2 * countw) # Combine constraints heq &lt;- rbind(eq_coarse_krd_A, eq_A1, eq_A2) beq &lt;- c(eq_coarse_krd_b, eq_b) hin &lt;- ineq_A theta &lt;- c(1, rep(0, countw - 1), 1, rep(0, countw - 1)) krdsol &lt;- solnp(par = theta, fun = function(x) -c(t(x) %*% D_fine %*% x), ineqfun = function(x) c(hin %*% x), ineqLB = rep(0, 2 * countw), ineqUB = rep(1, 2 * countw), eqfun = function(x) c(heq %*% x), eqB = beq) krdFine &lt;- auglag(par = theta, fn = function(x) 
c(t(x) %*% D_fine %*% x), hin = function(x) c(hin %*% x), heq = function(x) c(heq %*% x) - beq, control.outer = list(method = "nlminb"), control.optim=list(fnscale=-1)) </code></pre>
3
2016-09-09T00:17:35Z
39,404,661
<p>I solved your problem with <code>solnp</code>. <code>?solnp</code> says that <code>fun</code>, <code>ineqfun</code> and <code>eqfun</code> must each return a <code>vector</code>, but yours return a <code>matrix</code>, so I wrapped them in <code>c(...)</code>.</p> <pre><code>library(Rsolnp) krdsol &lt;- solnp(par = theta, fun = function(x) c(-t(x) %*% D_fine %*% x), ineqfun = function(x) c(hin %*% x), ineqLB = rep(0, 2 * countw), ineqUB = rep(1, 2 * countw), eqfun = function(x) c(heq %*% x), eqB = beq) </code></pre> <p>[Edited] The elements that <code>auglag()</code> accepts in <code>control.optim=list(...)</code> are listed in <code>?nlminb()</code> (and see <code>?auglag()</code>).</p>
0
2016-09-09T05:50:17Z
[ "python", "matlab", "mathematical-optimization" ]
Split function splitting words with punctuations. Want to prevent. After splitting how to Alphabetize?
39,402,069
<p>I have two issues. Here goes the code: </p> <pre><code>Read =open("C:\Users\Moondra\Desktop/test1.txt",'r') text =Read.read() words =text.split() print(words) print(words.sort()) ##counts=dict() ##for word in words: ## counts[word] = counts.get(word,0)+1 ## ## ##print counts </code></pre> <p>And the text that I am trying to read:</p> <p><strong>test1.txt</strong></p> <p>Hello Hello Hello.</p> <p>How is everything. What is going on? Where are you? Hello!!</p> <p>Hope to see you soon.</p> <p>When are you coming by?</p> <p>What should I make for dinner?</p> <p>The end!</p> <p><strong>End of text from txt file</strong></p> <p>My two questions are the following:</p> <ol> <li><p>I'm trying to implement a count-each-word code where I count the number of times each word appears in a document. However when I split the words using the above code, the word "Hello" will appear as "Hello!," or even "Hello." separately. How can I avoid this? </p></li> <li><p>Next, I tried to sort the elements of the list, alphabetically, but all I get in return after running the <code>sort()</code> method is <code>none</code> which is really confusing me. </p></li> </ol> <p>Thanks!</p>
0
2016-09-09T00:30:54Z
39,402,490
<p>This code should work for what you described:</p> <pre><code>import re with open("C:\Users\Moondra\Desktop/test1.txt", 'r') as file: file = file.read() words_list = re.findall(r"[\w]+", file) words_list = sorted(words_list, key=str.lower) patterns = ["Hello"] counter = 0 for word in words_list: for pattern in patterns: if word == pattern: counter+=1 print("The word Hello occurred {0} times".format(counter)) # prints the number of times 'Hello' was found print(words_list) # prints your list alphabetically </code></pre> <p>There are a few things you should note however:</p> <ul> <li>I used the <code>re</code> module instead of <code>split()</code>. This is because using the <a href="https://docs.python.org/2/library/re.html" rel="nofollow">regular expression</a> engine in the re module is much less complex than splitting the strings with the <code>split()</code> function and then stripping the punctuation yourself.</li> <li>I renamed some of your variables to follow the <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">PEP8</a> guide and naming convention for Python. Feel free to rename to your liking.</li> <li>the reason that <strong><code>sort()</code></strong> returned <code>None</code> is because the <code>sort()</code> method of a list does not return a new list, but changes the old one. That is, the <code>sort()</code> method of a list sorts in place, so printing its return value gives you the data type <code>None</code>. You need to use the builtin Python function <strong><code>sorted()</code></strong> instead. The <code>sorted()</code> function returns a new <code>list</code>.</li> </ul>
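<p>If the goal is to count every word (not just "Hello"), <code>collections.Counter</code> built on the same regex output keeps it short. This is just a sketch of one possible approach, reusing the file path from the question:</p> <pre><code>import re
from collections import Counter

with open(r"C:\Users\Moondra\Desktop\test1.txt", 'r') as f:
    text = f.read()

# \w+ drops the punctuation, so "Hello!" and "Hello." both count as "Hello"
words = re.findall(r"\w+", text)

counts = Counter(words)
print(counts.most_common(5))          # the five most frequent words and their counts
print(sorted(counts, key=str.lower))  # the distinct words, alphabetically
</code></pre>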
2
2016-09-09T01:28:16Z
[ "python", "text", "split" ]
Generating points on a circle
39,402,109
<pre><code>import random import math import matplotlib.pyplot as plt def circle(): x = [] y = [] for i in range(0,1000): angle = random.uniform(0,1)*(math.pi*2) x.append(math.cos(angle)); y.append(math.sin(angle)); plt.scatter(x,y) plt.show() circle() </code></pre> <p>I've written the above code to draw 1000 points randomly on a unit circle. However, when I run this code, it draws an oval for some reason. Why is this?</p> <p><a href="http://i.stack.imgur.com/vxsbK.png" rel="nofollow"><img src="http://i.stack.imgur.com/vxsbK.png" alt="enter image description here"></a></p>
1
2016-09-09T00:35:11Z
39,402,126
<p>It is a circle -- the problem is that the aspect ratio of your axes is not 1, so it looks like an oval when you plot it. To get an aspect ratio of 1, you can use:</p> <pre><code>plt.axes().set_aspect('equal', 'datalim') # before `plt.show()` </code></pre> <p>This is highlighted in a <a href="http://matplotlib.org/examples/pylab_examples/equal_aspect_ratio.html" rel="nofollow">demo</a>.</p>
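<p>Putting it together, a minimal sketch of the original function with the aspect ratio fixed could look like this:</p> <pre><code>import random
import math
import matplotlib.pyplot as plt

def circle(n=1000):
    angles = [random.uniform(0, 1) * (math.pi * 2) for _ in range(n)]
    x = [math.cos(a) for a in angles]
    y = [math.sin(a) for a in angles]
    plt.scatter(x, y)
    plt.axes().set_aspect('equal', 'datalim')  # equal x/y scales, so the circle looks round
    plt.show()

circle()
</code></pre>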
2
2016-09-09T00:36:49Z
[ "python", "matplotlib" ]
Replacing a certain part of a string in a Pandas Data Frame with Regex
39,402,154
<p>My data frame has a date column (that currently are strings). I am trying to fix a problem with the column.</p> <pre><code>df[:15] Date Customer ID 0 01/25/2016 104064596300 1 02/28/2015 102077474472 2 11/17/2016 106430081724 3 02/24/2016 107770391692 4 10/05/2016 106523680888 5 02/24/2016 107057691592 6 11/24/2015 102472820188 7 10/12/2016 107195498128 8 01/05/2016 104796266660 9 09/30/2016 107812562924 10 10/13/2015 102809057000 11 11/21/2016 107379017712 12 11/08/2015 106642145040 13 02/26/2015 107862343816 14 10/16/2016 107383084928 </code></pre> <p>My data is supposed to be within the date range of: Sept 2015 to Feb 2016.</p> <p>Some of the data has their years mixed up (see row 2 above for example - its November 17, 2016!)</p> <p>What I am trying to do is change the years for the observations with incorrect dates.</p> <p>I have played around the replace() command in Pandas but cannot come to a command that works:</p> <pre><code>df.Date.str.replace(('^(09|10|11|12)\/\d\d\/2016$'), '2015') 0 01/25/2016 1 02/28/2015 2 2015 3 02/24/2016 4 2015 5 02/24/2016 6 11/24/2015 7 2015 8 01/05/2016 9 2015 10 10/13/2015 11 2015 12 11/08/2015 13 02/26/2015 14 2015 15 12/17/2015 16 01/05/2015 17 01/21/2015 18 2015 19 2015 20 02/06/2016 21 10/06/2015 22 02/18/2016 </code></pre> <p>To be specific, I am simply trying to change the last 4 digits (the year) of each row depending on some conditions:</p> <ol> <li><p>If the month is within September to December (09 to 12) and has year 2016, change the year for this observation to 2015</p></li> <li><p>If the month is January or February (01 or 02) and has year 2015, change the year for this observation to 2016</p></li> </ol> <p>The command I wrote above identifies the correct observations for scenario 1) but I am having trouble replacing the last 4 digits and inputting the results back into the original data frame.</p> <p><strong>One final note:</strong> You might be thinking why don't I simply change the column to a datetime type and then add or subtract a year based on my needs? If I attempt to do that, I will run into an error as some observations have a date of: 2/29/2015 -> you will run into an error as there was no Feb. 29 during 2015!</p>
2
2016-09-09T00:39:56Z
39,402,337
<p>Do not treat dates as strings. You can first convert the date strings to timestamps, then slice by date range.</p> <pre><code>import pandas as pd df.loc[:, 'Date'] = pd.DatetimeIndex(df['Date'], name='Date') df = df.set_index('Date') df['2015-09': '2016-02'] </code></pre> <p>Update:</p> <pre><code>df.loc[:, 'year_month'] = df.Date.map(lambda s: int(s[-4:]+s[:2])) df.query('201509&lt;=year_month&lt;=201602').drop('year_month', axis=1) </code></pre> <p>Sorry, I misunderstood your question. To rewrite the years in place instead:</p> <pre><code>def transform(date_string): year = date_string[-4:] month = date_string[:2] day = date_string[3:5] if year == '2016' and month in ['09', '10', '11', '12']: return month + '/' + day + '/' + str(int(year)-1) elif year == '2015' and month in ['01', '02', '03']: return month + '/' + day + '/' + str(int(year)+1) else: return date_string df.loc[:, 'Date'] = df.Date.map(transform) </code></pre>
2
2016-09-09T01:03:45Z
[ "python", "regex", "pandas", "numpy", "substring" ]
Raise an error and after that return a value in Python
39,402,158
<p>I want to raise an error and then return a value.</p> <pre><code>raise ValueError('Max Iter Reached') return x </code></pre> <p>And make both lines work.</p> <p>Now I found this question is stupid and I decided to print an error message.</p> <pre><code>print 'Max Iter Reached' return x </code></pre>
0
2016-09-09T00:40:47Z
39,402,183
<p>It does not make sense, but in case you need it: raise the error inside a <code>try</code>, catch it immediately, and return the value from the <code>except</code> block.</p> <pre><code>def your_method(self): ...... try: raise ValueError('Max Iter Reached') except ValueError as e: return x </code></pre>
1
2016-09-09T00:45:22Z
[ "python" ]
Raise an error and after that return a value in Python
39,402,158
<p>I want to raise an error and then return a value.</p> <pre><code>raise ValueError('Max Iter Reached') return x </code></pre> <p>And make both lines work.</p> <p>Now I found this question is stupid and I decided to print an error message.</p> <pre><code>print 'Max Iter Reached' return x </code></pre>
0
2016-09-09T00:40:47Z
39,402,228
<p>This is impossible. A function can <em>either</em> return a value <em>or</em> raise an exception. It <em>cannot</em> do both. Executing <code>return</code> or <code>raise</code> immediately terminates the function.</p> <p>You could encode a return value inside the exception message, like this:</p> <pre><code>raise SomeException('my value is 5') </code></pre> <p>Or you could return a tuple of an exception <em>and</em> a value:</p> <pre><code>return (SomeException('hello'), 5) </code></pre>
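<p>If the point is for the caller to recover a partial result when the iteration limit is hit, a common pattern is to attach the value to a custom exception and read it at the call site. A small sketch (the exception class and attribute names here are made up for illustration):</p> <pre><code>class MaxIterReached(Exception):
    def __init__(self, best_so_far):
        super(MaxIterReached, self).__init__('Max Iter Reached')
        self.best_so_far = best_so_far  # carry the partial result along with the error

def solve():
    x = 42  # stand-in for whatever the loop computed before giving up
    raise MaxIterReached(x)

try:
    result = solve()
except MaxIterReached as e:
    result = e.best_so_far  # the caller decides what to do with the partial value
print(result)
</code></pre>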
3
2016-09-09T00:51:23Z
[ "python" ]
How to print function on if statement?
39,402,210
<p>How do I print the function 'play' in the if statement on line 5?</p> <pre><code>print("Deal 'Em\nby Imago Games") print("1. Play\n2. Credits") select = input() if select == 1: def play() elif select == 1: print("Bye") def play(): print("Welcome to Deal 'Em\nLet me teach you the basics!") </code></pre>
0
2016-09-09T00:48:53Z
39,402,240
<p>You can either put just <code>play()</code> on line 5 (remove the <code>def</code>) or change the <code>play</code> function to return the string instead of printing it. <code>def</code> is only used when you define the function, not when you call it.</p>
0
2016-09-09T00:52:18Z
[ "python" ]
How to print function on if statement?
39,402,210
<p>How do I print the function 'play' in the if statement on line 5?</p> <pre><code>print("Deal 'Em\nby Imago Games") print("1. Play\n2. Credits") select = input() if select == 1: def play() elif select == 1: print("Bye") def play(): print("Welcome to Deal 'Em\nLet me teach you the basics!") </code></pre>
0
2016-09-09T00:48:53Z
39,402,247
<p>Just leave it as <code>play()</code>; the print is already inside the function.</p>
0
2016-09-09T00:52:51Z
[ "python" ]
How to print function on if statement?
39,402,210
<p>How do I print the function 'play' in the if statement on line 5?</p> <pre><code>print("Deal 'Em\nby Imago Games") print("1. Play\n2. Credits") select = input() if select == 1: def play() elif select == 1: print("Bye") def play(): print("Welcome to Deal 'Em\nLet me teach you the basics!") </code></pre>
0
2016-09-09T00:48:53Z
39,402,261
<p>Instead of <code>def play()</code> on line 5, use <code>play()</code> to simply call the function. Also, you should work on the structure of your code and place all the function definitions before calling them.</p>
0
2016-09-09T00:54:08Z
[ "python" ]
How to print function on if statement?
39,402,210
<p>How do I print the function 'play' in the if statement on line 5?</p> <pre><code>print("Deal 'Em\nby Imago Games") print("1. Play\n2. Credits") select = input() if select == 1: def play() elif select == 1: print("Bye") def play(): print("Welcome to Deal 'Em\nLet me teach you the basics!") </code></pre>
0
2016-09-09T00:48:53Z
39,402,355
<p>There are three major problems in your code:</p> <ul> <li><code>play()</code> is used before it is defined. You have to define the function earlier in your code.</li> <li>to call the function, use <code>play()</code> without <code>def</code></li> <li>your <code>elif</code> has the same condition as the <code>if</code> and will never be executed</li> </ul>
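<p>A minimal sketch of the corrected script, with the definition moved to the top and the function simply called. Note also that <code>input()</code> returns a string in Python 3, so the comparison has to be against <code>"1"</code> and <code>"2"</code>:</p> <pre><code>def play():
    print("Welcome to Deal 'Em\nLet me teach you the basics!")

print("Deal 'Em\nby Imago Games")
print("1. Play\n2. Credits")
select = input()

if select == "1":
    play()          # call the function; no def here
elif select == "2":
    print("Bye")
</code></pre>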
3
2016-09-09T01:05:50Z
[ "python" ]
Django search as input is entered into form
39,402,314
<p>I currently have a working search form in my project that passes through form data to the GET request. Pretty standard. What I'm wanting to do is search as data is entered into the search form, so that results will display in real time with search data. This is much like what Google does with the instant desktop results. Is this something that's possible with Django? Below is my current (simple) search</p> <pre><code>#views.py def ProductView(request): title = 'Products' all_products = Product.objects.all().order_by("product_Name") query = request.GET.get("q") if query: products = all_products.filter( Q(product_Name__contains=query) | Q(manufacturer__contains=query) ).distinct() return render(request, 'mycollection/details.html', { 'all_products' : products }) </code></pre> <p>-</p> <pre><code>&lt;!-- HTML --&gt; &lt;!-- SEARCH BAR --&gt; &lt;form class="navbar-form navbar-left" role="search" method="get" action="{% url 'mycollection:products' %}"&gt; &lt;div class="form-group"&gt; &lt;input type="text" class="form-control" name="q" value="{{ request.GET.q }}"&gt; &lt;/div&gt; &lt;button type="submit" class="btn btn-default"&gt;Search&lt;/button&gt; &lt;/form&gt; </code></pre>
0
2016-09-09T01:01:25Z
39,404,039
<p>You can save the search query from the request into the session; then, if the session holds a previous search, you can put it back into the value of the search box.</p> <pre><code>request.session['search'] = request.GET.get('q','') </code></pre> <p>Template:</p> <pre><code>{% if request.session.search %} {{request.session.search}} {% endif %} </code></pre>
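<p>For completeness, here is a sketch of where that session line could sit inside the view from the question (the filtering logic is unchanged, just trimmed):</p> <pre><code>def ProductView(request):
    query = request.GET.get("q", "")
    request.session['search'] = query  # remembered so the template can re-fill the box
    products = Product.objects.all().order_by("product_Name")
    if query:
        products = products.filter(
            Q(product_Name__contains=query) | Q(manufacturer__contains=query)
        ).distinct()
    return render(request, 'mycollection/details.html', {'all_products': products})
</code></pre>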
0
2016-09-09T04:47:57Z
[ "python", "django", "search" ]
Scrapy: TypeError: 'Request' object is not iterable
39,402,361
<p>I'm making a spider using Scrapy (1.1.2) to scrap products. I managed to get it to work and scrape enough data, but now, I want for each element to make new request to the <code>product page</code> and scrap, for example the product description.</p> <p>First, here's my last working code</p> <p><strong>spider.py</strong> (except)</p> <pre><code>class ProductScrapSpider(Spider): name = "dmoz" allowed_domains = ["example.com"] start_urls = [ "http://www.example.com/index.php?id_category=24" # ... ] def parse(self, response): for sel in response.xpath("a long string"): mainloader = ProductLoader(selector=sel) mainloader.add_value('category', 'Category Name') mainloader.add_value('meta', self.get_meta(sel)) # more data yield mainloader.load_item() # Follows the pagination next_page = response.css("li#pagination_next a::attr('href')") if next_page: url = response.urljoin(next_page[0].extract()) yield scrapy.Request(url, self.parse) def get_meta(self, response): metaloader = ProductMetaLoader(selector=response) metaloader.add_value('store', "Store name") # more data yield metaloader.load_item() </code></pre> <p><strong>Output</strong></p> <pre><code>[ { "category": "Category Name", "price": 220000, "meta": { "baseURL": "", "name": "", "store": "Store Name" }, "reference": "100XXX100" }, ... ] </code></pre> <p>After reading the documentation and some answers here, I've altered the <code>get_meta</code> method and added a callback for the request <code>get_product_page</code>:</p> <p><strong>new_spider.py</strong> (except)</p> <pre><code>def get_meta(self, response): metaloader = ProductMetaLoader(selector=response) metaloader.add_value('store', "Store name") # more data items = metaloader.load_item() new_request = scrapy.Request(items['url'], callback=self.get_product_page) # Passing the metadata new_request.meta['item'] = items # The source of the problem yield new_request def get_product_page(self, response): sel = response.selector.css('.product_description') items = response.meta['item'] new_meta = items new_meta.update({'product_page': sel[0].extract()}) return new_meta </code></pre> <p><strong>Expected output</strong></p> <pre><code>[ { "category": "Category Name", "price": 220000, "meta": { "baseURL": "", "name": "", "store": "Store Name", "product_page": "&lt;div&gt; [...] &lt;/div&gt;" }, "reference": "100XXX100" }, ... ] </code></pre> <p><strong>Error</strong></p> <pre><code>TypeError: 'Request' object is not iterable </code></pre> <p><a href="http://pastebin.com/napscirN" rel="nofollow">full output</a></p> <p>I couldn't find anything about this error, so please help me fix it.</p> <p>Thanks a lot.</p>
0
2016-09-09T01:07:01Z
39,403,418
<p>The error you experience (<code>TypeError: 'Request' object is not iterable</code>) happens because a <code>Request</code> instance is being put into a field of the item (in the updated <code>get_meta</code> method), and the feed exporter cannot serialize it.</p> <p>You would need to yield the get-meta request back to Scrapy and use its <code>meta</code> dict to pass the half-parsed item along. Here's an example of the updated <code>parse</code> method and a new <code>parse_get_meta</code> method:</p> <pre><code>def parse(self, response): for sel in response.xpath("a long string"): mainloader = ProductLoader(selector=sel) mainloader.add_value('category', 'Category Name') #mainloader.add_value('meta', self.get_meta(sel)) # more data item = mainloader.load_item() get_meta_req = self.get_meta(sel) get_meta_req.meta['item'] = item yield get_meta_req.replace(callback=self.parse_get_meta) def parse_get_meta(self, response): """Parses a get meta response""" item = response.meta['item'] # Parse the response and load the data here, e.g. item['foo'] = bar pass # Finally return the item return item </code></pre> <p>See also: <a href="http://doc.scrapy.org/en/latest/topics/request-response.html#topics-request-response-ref-request-callback-arguments" rel="nofollow">http://doc.scrapy.org/en/latest/topics/request-response.html#topics-request-response-ref-request-callback-arguments</a></p>
1
2016-09-09T03:31:31Z
[ "python", "python-2.7", "scrapy", "scrapy-spider" ]
Process as string for each item in dataframe or list in Python
39,402,412
<p>I'm trying something very simple on python. </p> <p><code>zips = sempmme['Zip code'].unique()</code></p> <p>I want to apply <code>zipcode.isequal('12345')</code> for each <code>zips</code> but I'm not sure how to do it in pythonic efficient way. </p> <p>I tried 'zipcode.isequal(lambda x: x in zips)' and even for loop but I can't seem to get it. </p> <pre><code>for i in range(0, len(zips)): #print(zips[i]) cities[i] = zipcode.isequal("" + zips[i]) </code></pre> <p>It shows 'isequal() can only take string'. Needless to say, this is the first time I'm coding in Python. And figured the best way to learn is to take a project and figure it out.</p> <p>EDIt:</p> <p>output of <code>repr(zips)</code>:</p> <pre><code>"array([u'25404', u'265056555', u'251772049', u'25177', u'26508', u'25262',\n u'26554', u'265053816', u'154741359', u'15461', u'26250',\n u'262413392', u'25443', u'26505', u'258809366', u'217331141',\n u'26757', u'26201', u'25419', u'25427', u'25401', u'26003',\n u'25428', u'26150', u'268479803', u'24426', u' ', u'25813',\n u'253099769', u'22603', u'25174', u'25984', u'25430', u'25438',\n u'268360008', u'254356541', u'26170', u'25971', u'24622', u'24986',\n u'26847', u'24957', u'25963', u'25064', u'260039425', u'25526',\n u'25523', u'26452', u'25143', u'26301', u'25285', u'26104',\n u'25951', u'25206', u'24740', u'252137436', u'25420', u'26330',\n u'24701', u'25309', u'25304', u'26408', u'25564', u'26753',\n u'15349', u'45767', u'25213', u'25168', u'25302', u'24931',\n u'26623', u'25704', u'26362', u'24966', u'250641730', u'26415',\n u'25130', u'26134', u'25413', u'26101', u'25193', u'26354',\n u'260031309', u'26651', u'24954', u'26180', u'256700145', u'26033',\n u'26444', u'25661', u'26555', u'264521704', u'25111', u'25043',\n u'26278', u'25560', u'25181', u'25854', u'259210233', u'24874',\n u'26181', u'24963', u'254381574', u'25557', u'26203', u'26836',\n u'255109768', u'25035', u'25214', u'26726', u'25132', u'25411',\n u'24853', u'26750', u'25071', u'25913', u'26374', u'25110',\n u'24901', u'25843', u'25880', u'26610', u'26456', u'41514',\n u'26684', u'25541', u'25311', u'26431', u'26241', u'26541',\n u'25162', u'25312', u'24801', u'26159', u'25239', u'255269325',\n u'26293', u'249460055', u'25149', u'26743', u'261871112', u'25315',\n u'25570', u'25123', u'254300341', u'25705', u'25421', u'24747',\n u'261709789', u'26438', u'26448', u'263011836', u'26041', u'25248',\n u'24739', u'25125', u'25510', u'26531', u'251860464', u'263690126',\n u'26205', u'25678', u'251238805', u'25320', u'249707005', u'25414',\n u'26133', u'263850384', u'26501', u'25405', u'25882', u'25244',\n u'25504', u'25635', u'24868', u'26143', u'25313', u'45769',\n u'24870', u'25508', u'26323', u'24832', u'25202', u'26451',\n u'25637', u'26288', u'26656', u'25670', u'25550', u'25059',\n u'456197853', u'249011225', u'25303', u'45680', u'26155', u'25002',\n u'25387', u'251771047', u'263230278', u'256250601', u'246051700',\n u'25045', u'25085', u'25011', u'25136', u'26405', u'25241',\n u'26070', u'25075', u'259181310', u'26105', u'25253', u'25275',\n u'24811', u'26287', u'25669', u'25159', u'26833', u'26378',\n u'24850', u'45760', u'26519', u'22802', u'25039', u'25403',\n u'26425', u'25625', u'254254109', u'253099281', u'258821226',\n u'255609701', u'252761627', u'25545', u'26546', u'25674',\n u'255701081', u'25547', u'257021403', u'25555', u'25113',\n u'255609730', u'255089543', u'25909', u'250489721', u'25958',\n u'25831', u'25825', u'25701', u'258479621', u'267630283', u'26588',\n u'24945', u'254280359', 
u'257029632', u'254253549', u'24869',\n u'25203', u'24847', u'248440000', u'25425', u'24614', u'26807',\n u'253069761', u'28104', u'26525', u'24910', u'25361', u'259813804',\n u'24808', u'253027228', u'26601', u'25801', u'25702', u'26208',\n u'255249621', u'25652', u'25033', u'26416', u'24712', u'25444',\n u'32707', u'259621513', u'25644', u'26034', u'262419617', u'25917',\n u'26062', u'25169', u'24731', u'254434652', u'25314', u'24620',\n u'75092', u'25306', u'26385'], dtype=object)" </code></pre>
-1
2016-09-09T01:16:56Z
39,402,873
<p>Depending on what your goal is in "applying <code>zipcode.isequal</code> for each <code>zips</code>"...</p> <p>To return a list where each element is the return value of <code>zipcode.isequal()</code> of the elements in zips:</p> <pre><code>cities = [zipcode.isequal(str(zip)) for zip in zips] </code></pre> <p>or return a list containing the elements in zips for which <code>zipcode.isequal()</code> returns true:</p> <pre><code>cities = [zip for zip in zips if zipcode.isequal(str(zip))] </code></pre> <p>Edit: Given that <code>zips</code> does not consist entirely of numeric strings, you probably need to do an additional filter on either one:</p> <pre><code>cities = [zipcode.isequal(str(zip)) for zip in zips if zip.isdigit()] cities = [zip for zip in zips if zip.isdigit() and zipcode.isequal(str(zip))] </code></pre>
1
2016-09-09T02:22:04Z
[ "python", "dataframe", "zipcode" ]
Django - Click on link won't redirect me to the page unless i right click and open in new tab
39,402,445
<p>I'm fairly new to django and I'm working on a project for some reason clicking on my links won't redirect me to the desired page, nothing happens, but i cant open it bu right click > open in new tab here is my template</p> <p><strong>index.html:</strong></p> <pre><code>&lt;ul class="list"&gt; {% for movie in all_movies %} &lt;li&gt; &lt;img src="{{ movie.poster }}" alt="" class="cover" /&gt; &lt;a href="{% url 'detail' movie.id %}"&gt;&lt;p class="title"&gt;{{ movie.title }}&lt;/p&gt;&lt;/a&gt; {% for genre in movie.genre.all %} &lt;p class="genre"&gt;{{ genre.genre }} | &lt;/p&gt; {% endfor %} &lt;/li&gt; {% endfor %} &lt;/ul&gt; </code></pre> <p><strong>views.py :</strong></p> <pre><code>def detail(request, movie_id): movie = get_object_or_404(Movie, pk=movie_id) return render(request, 'movies/detail.html', {'movie': movie}) </code></pre> <p><strong>urls.py :</strong></p> <pre><code>urlpatterns = [ # /movies/ url(r'^$', views.index, name='index'), # /movies/id/ url(r'^(?P&lt;movie_id&gt;[0-9]+)/$', views.detail, name='detail'), ] </code></pre> <p>i can't find what's wrong with my code, any help would be appreciated!</p>
2
2016-09-09T01:20:41Z
39,406,898
<p>This is most likely a JavaScript issue; it has nothing to do with Django. Your Django setup is working fine, I tested it as well.</p> <p>There is an identical issue faced by someone else <a href="http://stackoverflow.com/questions/20724590/a-href-not-working">here</a>, and it was revealed that JavaScript was the actual issue.</p> <p>I can't think of any other issues that may cause this.</p>
1
2016-09-09T08:14:41Z
[ "python", "django" ]
Is there a batch version of Tweepy's 'destroy_friendship()' function?
39,402,457
<p>It appears that Tweepy's destroy_friendship() function, which unfollows a Twitter user, can only unfollow one user at a time - this presumably means that to unfollow many users, you have to make many API calls, and this counts against your rate limit.</p> <p>Is there a batch version of the destroy_friendship() call, which would let you unfollow a list of Twitter users in one go, or some way to implement a functionality that could produce a comparable effect, without making numerous API call calls which count against your rate limit?</p>
2
2016-09-09T01:23:56Z
39,403,285
<p>Twitter's API does not have a call to destroy more than one friendship at a time. See the doc <a href="https://dev.twitter.com/rest/reference/post/friendships/destroy" rel="nofollow">here</a>. However, you don't have to wait for <code>destroy_friendship()</code> to finish before calling it again. It will be faster if you spawn a thread each time you call <code>destroy_friendship()</code>.</p>
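<p>A sketch of what that could look like with a thread pool, assuming an already authenticated <code>tweepy.API</code> object called <code>api</code> and a list of screen names. Note that this does not get around the rate limit; it only overlaps the network round-trips:</p> <pre><code>import tweepy
from concurrent.futures import ThreadPoolExecutor

def unfollow(screen_name):
    try:
        api.destroy_friendship(screen_name=screen_name)  # still one API call per user
    except tweepy.TweepError as e:
        print("could not unfollow %s: %s" % (screen_name, e))

with ThreadPoolExecutor(max_workers=8) as pool:
    pool.map(unfollow, users_to_unfollow)  # users_to_unfollow: your list of screen names
</code></pre>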
3
2016-09-09T03:16:58Z
[ "python", "twitter", "tweepy" ]
What does the regex [^\s]*? mean?
39,402,495
<p>I am starting to learn python spider to download some pictures on the web and I found the code as follows. I know some basic regex. I knew <code>\.jpg</code> means <code>.jpg</code> and <code>|</code> means <code>or</code>. what's the meaning of <code>[^\s]*?</code> of the first line? I am wondering why using <code>\s</code>? And what's the difference between the two regexes?</p> <pre><code>http:[^\s]*?(\.jpg|\.png|\.gif) http://.*?(\.jpg|\.png|\.gif) </code></pre>
-5
2016-09-09T01:28:44Z
39,402,538
<p>Alright, so to answer your first question, I'll break down <code>[^\s]*?</code>.</p> <ul> <li><p>The square brackets (<code>[]</code>) indicate a <a href="http://www.regular-expressions.info/charclass.html" rel="nofollow"><em>character class</em></a>. A character class basically means that you want to match anything in the class, at that position, one time. <code>[abc]</code> will match the strings <code>a</code>, <code>b</code>, and <code>c</code>. In this case, your character class is <em>negated</em> using the caret (<code>^</code>) at the beginning - this inverts its meaning, making it match anything <em>but</em> the characters in it.</p></li> <li><p><code>\s</code> is fairly simple - it's a common shorthand in many regex flavours for "any whitespace character". This includes spaces, tabs, and newlines.</p></li> <li><p><code>*?</code> is a little harder to explain. The <code>*</code> <a href="http://www.regular-expressions.info/repeat.html" rel="nofollow"><em>quantifier</em></a> is fairly simple - it means "match this token (the character class in this case) zero or more times". The <code>?</code>, when applied to a quantifier, makes it <em>lazy</em> - it will match as little as it can, going from left to right one character at a time.</p></li> </ul> <p>In this case, what the whole pattern snippet <code>[^\s]*?</code> means is "match any sequence of non-whitespace characters, including the empty string". As mentioned in the comments, this can more succinctly be written as <code>\S*?</code>.</p> <p>To answer the second part of your question, I'll compare the two regexes you give:</p> <pre><code>http:[^\s]*?(\.jpg|\.png|\.gif) http://.*?(\.jpg|\.png|\.gif) </code></pre> <p>They both start the same way: attempting to match the protocol at the beginning of a URL and the subsequent colon (<code>:</code>) character. The first then matches any string that does not contain any whitespace and ends with the specified file extensions. The second, meanwhile, will match two literal slash characters (<code>/</code>) before matching any sequence of characters followed by a valid extension.</p> <p>Now, it's obvious that both patterns are meant to match a URL, but both are incorrect. The first pattern, for instance, will match strings like</p> <pre><code>http:foo.bar.png http:.png </code></pre> <p>Both of which are invalid. Likewise, the second pattern will permit spaces, allowing stuff like this:</p> <pre><code>http:// .jpg http://foo bar.png </code></pre> <p>Which is equally illegal in valid URLs. A better regex for this (though I caution strongly against trying to match URLs with regexes) might look like:</p> <pre><code>https?://\S+\.(jpe?g|png|gif) </code></pre> <p>In this case, it'll match URLs starting with both <code>http</code> and <code>https</code>, as well as files that end in both variations of <code>jpg</code>.</p>
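<p>For a quick sanity check of that last pattern, here is a small self-contained example (the group is made non-capturing so that <code>findall()</code> returns the whole URL rather than just the extension):</p> <pre><code>import re

pattern = re.compile(r'https?://\S+\.(?:jpe?g|png|gif)')

samples = [
    'see http://example.com/cat.jpg for the picture',
    'broken: http:// .jpg and http:foo.bar.png are not matched',
    'https://img.example.org/a/b/c.jpeg works too',
]

for s in samples:
    print(pattern.findall(s))
# ['http://example.com/cat.jpg']
# []
# ['https://img.example.org/a/b/c.jpeg']
</code></pre>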
5
2016-09-09T01:36:04Z
[ "python", "regex" ]
Difference between add and iadd?
39,402,501
<p>I'm not understanding the purpose of the in-place operators like iadd, imul, etc.</p> <blockquote> <p>Many operations have an “in-place” version. The following functions provide a more primitive access to in-place operators than the usual syntax does; for example, the statement x += y is equivalent to x = operator.iadd(x, y). Another way to put it is to say that z = operator.iadd(x, y) is equivalent to the compound statement z = x; z += y.</p> </blockquote> <p>It seems like I can always use either the in-place or regular operator interchangeably. Is one better than the other?</p>
0
2016-09-09T01:29:44Z
39,402,740
<p>The "in-place" functions for immutable objects cannot be implemented using an <a href="https://en.wikipedia.org/wiki/In-place_algorithm" rel="nofollow">in-place algorithm</a>, while for mutable objects they could be. The simple truth is that immutable objects don't change. </p> <p>Otherwise, usage of "in-place" versus not "in-place" functions has deep ramifications when considering mutable objects. Consider the following:</p> <pre><code>&gt;&gt;&gt; A = [1,2,3] &gt;&gt;&gt; B = A &gt;&gt;&gt; id(A) 4383125944 &gt;&gt;&gt; id(B) 4383125944 &gt;&gt;&gt; A = A + [1] &gt;&gt;&gt; id(A) 4383126376 &gt;&gt;&gt; A += [1] &gt;&gt;&gt; id(A) 4383126376 </code></pre> <p>Suppose you are writing some code where it is assumed that B is a soft copy of A (a mutable object). By not using the "in-place" function when modifying A, desired modifications to B can be quietly missed. What makes matters worse, is that quick visual inspection of the code makes it seem that the code (e.g., A = A + [2]) is implemented correctly (maybe it makes sense mathematically). If one really wants to just modify an object and not receive a new object, then the "in-place" function is the right way to go. </p> <p>Neither is better than the other. Rather there are specific circumstances under which one might be preferred over the other.</p>
0
2016-09-09T02:04:40Z
[ "python", "python-2.7", "operators" ]
How can I redirect only part of a stream?
39,402,535
<p>I'm using Python on bash on Linux. I would like to be able to suppress error messages from a particular module, but keep other error messages. An example would probably be the most efficient way to convey what I want:</p> <hr> <h3>File: <code>module.py</code></h3> <pre><code>import sys sys.stderr.write('Undesired output.\n') </code></pre> <h3>File: <code>script.py</code></h3> <pre><code>import module import sys sys.stderr.write('Desired output.\n') sys.stdout.write('Desired output.\n') x = int('a') </code></pre> <hr> <h3>Output of running <code>script.py</code>:</h3> <pre><code>$ python script.py Undesired output. Desired output. Desired output. Traceback (most recent call last): File "script.py", line 6, in &lt;module&gt; x = int('a') ValueError: invalid literal for int() with base 10: 'a' </code></pre> <h3>Desired output of running <code>script.py</code>:</h3> <pre><code>$ python script.py Desired output. Desired output. Traceback (most recent call last): File "script.py", line 6, in &lt;module&gt; x = int('a') ValueError: invalid literal for int() with base 10: 'a' </code></pre> <hr> <p>I cannot modify <code>module.py</code>, but I must use it. I've tried all kinds of redirects, and opening new file descriptors, but I can't seem to change the file descriptors of the calling shell from within python, so</p> <pre><code>$ python script.py 2&gt;/dev/null Desired output. </code></pre> <p>kills ALL <code>stderr</code> output. Since I know that the module is causing the undesired message, I know exactly which point I want to stop redirecting <code>stderr</code> to <code>/dev/null</code> and start redirecting it to <code>&amp;1</code>, but I can't seem to be able to do it from within Python.</p> <p>Any advice is much appreciated!</p>
0
2016-09-09T01:35:40Z
39,403,058
<p>There is an example here in the accepted answer that might help you</p> <p><a href="http://stackoverflow.com/questions/6796492/temporarily-redirect-stdout-stderr">Temporarily Redirect stdout/stderr</a></p> <p>but in your case, it assumes that you could do </p> <pre><code>import module </code></pre> <p><em>after</em></p> <pre><code>import sys </code></pre> <p>and <em>after</em> you've redirected stderr in your Python code. Like</p> <pre><code>import sys ... redirect code here import module ... code to remove redirect </code></pre> <p>Don't know if that will break your desired functionality, beyond the redirects.</p> <p>Also you're not supposed to stick import statements in the middle of your code, as it violates the PEP8 style guide.</p>
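<p>Concretely, that could look something like the sketch below: point <code>sys.stderr</code> at <code>os.devnull</code> just for the import, then restore the real stream afterwards.</p> <pre><code>import os
import sys

_real_stderr = sys.stderr
sys.stderr = open(os.devnull, 'w')   # silence anything written while importing
try:
    import module                    # the noisy module from the question
finally:
    sys.stderr.close()
    sys.stderr = _real_stderr        # everything after this goes to the real stderr

sys.stderr.write('Desired output.\n')
</code></pre>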
1
2016-09-09T02:45:18Z
[ "python", "linux", "bash", "file-descriptor", "io-redirection" ]
How can I redirect only part of a stream?
39,402,535
<p>I'm using Python on bash on Linux. I would like to be able to suppress error messages from a particular module, but keep other error messages. An example would probably be the most efficient way to convey what I want:</p> <hr> <h3>File: <code>module.py</code></h3> <pre><code>import sys sys.stderr.write('Undesired output.\n') </code></pre> <h3>File: <code>script.py</code></h3> <pre><code>import module import sys sys.stderr.write('Desired output.\n') sys.stdout.write('Desired output.\n') x = int('a') </code></pre> <hr> <h3>Output of running <code>script.py</code>:</h3> <pre><code>$ python script.py Undesired output. Desired output. Desired output. Traceback (most recent call last): File "script.py", line 6, in &lt;module&gt; x = int('a') ValueError: invalid literal for int() with base 10: 'a' </code></pre> <h3>Desired output of running <code>script.py</code>:</h3> <pre><code>$ python script.py Desired output. Desired output. Traceback (most recent call last): File "script.py", line 6, in &lt;module&gt; x = int('a') ValueError: invalid literal for int() with base 10: 'a' </code></pre> <hr> <p>I cannot modify <code>module.py</code>, but I must use it. I've tried all kinds of redirects, and opening new file descriptors, but I can't seem to change the file descriptors of the calling shell from within python, so</p> <pre><code>$ python script.py 2&gt;/dev/null Desired output. </code></pre> <p>kills ALL <code>stderr</code> output. Since I know that the module is causing the undesired message, I know exactly which point I want to stop redirecting <code>stderr</code> to <code>/dev/null</code> and start redirecting it to <code>&amp;1</code>, but I can't seem to be able to do it from within Python.</p> <p>Any advice is much appreciated!</p>
0
2016-09-09T01:35:40Z
39,405,885
<p><a href="http://stackoverflow.com/a/39403058/5214181">akubot's answer</a> and some experimentation led me to the answer to this question, so I'm going to accept it.</p> <p>Do this:</p> <pre><code>$ exec 3&gt;&amp;1 </code></pre> <p>Then change <code>script.py</code> to be the following:</p> <pre><code>import sys, os sys.stderr = open(os.devnull, 'w') import module sys.stderr = os.fdopen(3, 'w') sys.stderr.write('Desired output.\n') sys.stdout.write('Desired output.\n') x = int('a') </code></pre> <p>Then run like this:</p> <pre><code>$ python script.py 2&gt;/dev/null </code></pre> <p>yielding the desired output.</p>
1
2016-09-09T07:16:06Z
[ "python", "linux", "bash", "file-descriptor", "io-redirection" ]
I use crispy-forms work in django,render failures after the first click the submit button,the second time click the submit button has no response
39,402,624
<p>I am ready to use crispy-froms to my django projection,but in the test,there happen a question,I do not know where happened mistakes.So I post the code here. If someone can point out the mistake,I will appreciate it. forms:</p> <pre><code>class TestForm(forms.ModelForm): name = forms.CharField(widget=forms.TextInput(),label='Name',max_length=100,) def __init__(self,*args,**kwargs): super(TestForm, self).__init__(*args, **kwargs) self.helper = FormHelper() self.helper.add_input(Button('save', 'save')) self.helper.form_class = 'form-horizontal' self.helper.label_class = 'col-lg-3' self.helper.field_class = 'col-lg-8' self.helper.form_id = 'pkg-form' class Meta: model = PkgList fields = ( 'id','name', ) </code></pre> <p>urls.py:</p> <pre><code>url(r'^testform/$',test_form,name='test_form') </code></pre> <p>the views in my code:</p> <pre><code>@json_view def test_form(request): if request.method == 'POST'and request.is_ajax(): form = TestForm(request.POST) if form.is_valid(): return {'success': True} else: form_html = render_crispy_form(form) return {'success': False, 'form_html': form_html} else: if request.method == 'GET': form = TestForm() return render(request,'man/pkg_form.html',{'form':form}) </code></pre> <p>templetes:</p> <pre><code>&lt;div class="modal-dialog" xmlns="http://www.w3.org/1999/html"&gt; &lt;div class="modal-content message_align"&gt; &lt;div class="modal-body" align="center"&gt; &lt;span class="section"&gt;Pkg Info&lt;/span&gt; {% crispy form form.helper%} &lt;/div&gt; &lt;div&gt; &lt;/div&gt; </code></pre> <p></p> <pre><code>$('#button-id-save').on('click', function(event){ event.preventDefault(); var form = '#pkg-form'; $.ajax({ url: "{% url 'pkg_view' %}", type: "POST", data: $(form).serialize(), success: function(data) { if (!(data['success'])) { $(form).replaceWith(data['form_html']); }else { window.location.href = "{%url 'man/pkglist' %}"; } }, error: function () { $(form).find('.error-message').show() } }); return false; }); </code></pre> <p>after the first ajax POST and return {'success': False, 'form_html': form_html} from the backend,click the button can not send ajax POST no more.....</p>
0
2016-09-09T01:47:59Z
39,403,528
<p>Most probably it's because your ajax handler is bound to an element which is being replaced by the returned data. Try binding it to the document instead (event delegation).</p> <p>Instead of </p> <pre><code>$('#button-id-save').on('click', function(event){ ... } </code></pre> <p>Try this</p> <pre><code>$(document).on("click", "#button-id-save", function(event){ ... }); </code></pre>
0
2016-09-09T03:45:11Z
[ "python", "ajax", "django" ]
`For x in locals():` -- RuntimeError: Why does locals() change size?
39,402,652
<p>I have a function and want to get the arguments passed to it. I tried achieving this via <code>locals()</code>, but I end up getting the keys of <code>locals()</code>, not the value-pairs:</p> <pre><code>&gt;&gt;&gt;def print_params(i,j,k): for x in locals(): print(x) &gt;&gt;&gt;print_params('a','b','c') j i k #&lt;--I want to get back 'a','b','c'. </code></pre> <p>I'd like to get the values instead of the keys, from <code>locals()</code>.</p> <p>With a normal dict, I know I can do this:</p> <pre><code>&gt;&gt;&gt; m = { ... 'a' : 'b', ... 'y' : 'z'} &gt;&gt;&gt; m {'a': 'b', 'y': 'z'} &gt;&gt;&gt; for x in m: ... print(x) ... a # &lt;--Keys y &gt;&gt;&gt; for x in m: ... print(m[x]) ... b # &lt;--Values z </code></pre> <p>But, if I try this with <code>locals()</code>, it creates a RuntimeError. </p> <pre><code>&gt;&gt;&gt; def print_args(i,j,k): ... for x in locals(): ... print(locals()[x]) ... &gt;&gt;&gt; print_args('a','b','c') b # &lt;--Got one! Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "&lt;stdin&gt;", line 2, in y RuntimeError: dictionary changed size during iteration </code></pre> <p>Can someone explain what's happening? From the single value I got to print (before it errors out), I can see that the error arises when the loop attempts to iterate to the next element of <code>locals()</code>; then, as the dict has been modified, it loses its place. I can't make sense of why the dictionary size should change, though. I'm inferring that, on each pass through the <code>for</code> loop, the variable <code>x</code> is destroyed, then re-created for the next pass through the loop. However, this seems odd, since I would think it'd be more efficient to keep <code>x</code> "alive" and just re-assign it to the new value on each iteration. </p> <p>Ultimately, I guess I could just do <code>local_args = locals()</code>, but I'm still curious as to what exactly is happening that prevents me from simply looping through <code>locals()</code>. I'm not sure if I'm right about <code>x</code> being destroyed/recreated; if I am, I have to wonder why that is, and if I'm not, then I'm even more curious as to what's happening.</p>
0
2016-09-09T01:53:55Z
39,402,732
<p>When you execute <code>for x in locals()</code>, <code>locals()</code> is evaluated and then its first item is stored in <code>x</code>. <code>x</code> wasn't defined before, so it gets added to the <code>locals</code> dictionary during the first iteration, and the second iteration notices the change.</p> <p>One way of working around this is to copy the <code>locals</code> dictionary beforehand, as you've mentioned:</p> <pre><code>def print_args(i,j,k): local_args = dict(locals()) for x in local_args: print(local_args[x]) print_args('a','b','c') </code></pre>
0
2016-09-09T02:03:59Z
[ "python", "iteration" ]
`For x in locals():` -- RuntimeError: Why does locals() change size?
39,402,652
<p>I have a function and want to get the arguments passed to it. I tried achieving this via <code>locals()</code>, but I end up getting the keys of <code>locals()</code>, not the value-pairs:</p> <pre><code>&gt;&gt;&gt;def print_params(i,j,k): for x in locals(): print(x) &gt;&gt;&gt;print_params('a','b','c') j i k #&lt;--I want to get back 'a','b','c'. </code></pre> <p>I'd like to get the values instead of the keys, from <code>locals()</code>.</p> <p>With a normal dict, I know I can do this:</p> <pre><code>&gt;&gt;&gt; m = { ... 'a' : 'b', ... 'y' : 'z'} &gt;&gt;&gt; m {'a': 'b', 'y': 'z'} &gt;&gt;&gt; for x in m: ... print(x) ... a # &lt;--Keys y &gt;&gt;&gt; for x in m: ... print(m[x]) ... b # &lt;--Values z </code></pre> <p>But, if I try this with <code>locals()</code>, it creates a RuntimeError. </p> <pre><code>&gt;&gt;&gt; def print_args(i,j,k): ... for x in locals(): ... print(locals()[x]) ... &gt;&gt;&gt; print_args('a','b','c') b # &lt;--Got one! Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "&lt;stdin&gt;", line 2, in y RuntimeError: dictionary changed size during iteration </code></pre> <p>Can someone explain what's happening? From the single value I got to print (before it errors out), I can see that the error arises when the loop attempts to iterate to the next element of <code>locals()</code>; then, as the dict has been modified, it loses its place. I can't make sense of why the dictionary size should change, though. I'm inferring that, on each pass through the <code>for</code> loop, the variable <code>x</code> is destroyed, then re-created for the next pass through the loop. However, this seems odd, since I would think it'd be more efficient to keep <code>x</code> "alive" and just re-assign it to the new value on each iteration. </p> <p>Ultimately, I guess I could just do <code>local_args = locals()</code>, but I'm still curious as to what exactly is happening that prevents me from simply looping through <code>locals()</code>. I'm not sure if I'm right about <code>x</code> being destroyed/recreated; if I am, I have to wonder why that is, and if I'm not, then I'm even more curious as to what's happening.</p>
0
2016-09-09T01:53:55Z
39,402,759
<blockquote> <p>Can someone explain what's happening? From the single value I got to print (before it errors out), I can see that the error arises when the loop attempts to iterate to the next element of locals(); then, as the dict has been modified, it loses its place. I can't make sense of why the dictionary size should change, though. I'm inferring that, on each pass through the for loop, the variable x is destroyed, then re-created for the next pass through the loop. However, this seems odd, since I would think it'd be more efficient to keep x "alive" and just re-assign it to the new value on each iteration. </p> </blockquote> <p>The variable <code>x</code> is created only once, but it is not created until after you begin to iterate over <code>locals()</code>, because the source iterable of the <code>for</code> loop is evaluated before anything is assigned to the loop variable. The error message, however, is raised by the dict itself (or rather, its iterator): it "notices" when something is trying to iterate it and it has changed since its last iteration. So what happens is:</p> <ol> <li><code>locals()</code> is evaluated and gives you a dict</li> <li>the <code>for</code> loop creates an iterator over the dict and gets the first element</li> <li>a new local <code>x</code> is created and set to that first element</li> <li>on the next loop iteration, the <code>for</code> loop tries to get another element from the dict iterator</li> <li>the dict iterator notices that the underlying dict has a new element (<code>x</code>) since the last iteration, so it raises the error.</li> </ol> <p>A simple workaround is to create the variable <code>x</code> with some dummy value before the loop. Then it will already exist before you ever access <code>locals()</code>, and there will be no creation of new variables during the loop:</p> <pre><code>def print_args(i,j,k): x = None for x in locals(): print(locals()[x]) &gt;&gt;&gt; print_args('a', 'b', 'c') a x c b </code></pre> <p>Notice, of course, that you will also get <code>x</code> itself during this iteration.</p> <p>As others have mentioned in the comments, it's not really clear why you're doing this at all. <code>locals()</code> is a rather slippery beast and usually, given some practical task you're trying to accomplish, there's a better way to do it without using <code>locals()</code>.</p>
1
2016-09-09T02:06:59Z
[ "python", "iteration" ]
Python: how to access already instantiated class object in PyQt?
39,402,655
<p>I have a simple subclass of a base class, QLineEdit, which is just an editable text box from PyQt5: </p> <pre><code>class pQLineEdit(QtWidgets.QLineEdit): def __init__(self, parent = None): QtWidgets.QWidget.__init__(self,parent) def mouseDoubleClickEvent(self,e): self.setText("Test") </code></pre> <p>This class works fine. The object's text is indeed updated when I double click it. </p> <p>However, at this event, I am in need of accessing an object, foo.bar, from another, already-instantiated class in another file. How can I do this? I have tried </p> <pre><code>import other_file ... def mouseDoubleClickEvent(self,e): self.setText("Test") foo.bar </code></pre> <p>I get:</p> <pre><code> NameError: global name 'foo' is not defined </code></pre> <p>I've tried hacking my way just using</p> <pre><code>eval("foo.bar") </code></pre> <p>But it complains foo.bar is not defined. I feel this is very simple but the answer eludes me. </p> <p>I should clarify: foo() is a class instantiated in a FUNCTION in the other file, which is main(). I must access the instantiated class because it contains an SSH tunnel. I don't think my question is a dupe as listed. </p>
-1
2016-09-09T01:54:14Z
39,417,046
<p>At the risk of stating the obvious: if you want to access a variable, you need to put it somewhere that is accessible. The local scope of a function is not accessible to any code outside of that function.</p> <p>To make a variable accessible to another module, it must be assigned in the global scope, or be assigned as an attribute of another object that is in the global scope.</p> <p>If the variable cannot be assigned when the module is loaded, define it using the <code>global</code> statement:</p> <pre><code>foo = None def main(): global foo foo = MyClass() ... </code></pre> <p>However, a more appropriate approach for a gui application would be for the top-level window to provide access to the variable:</p> <pre><code>class MainWindow(QtWidgets.QMainWindow): def __init__(self): super(MainWindow, self).__init__() self.foo = MyClass() </code></pre> <p>Any child widget that has the main window as its parent can then access the variable via its <code>parent()</code> method:</p> <pre><code>class LineEdit(QtWidgets.QLineEdit): def __init__(self, parent=None): super(LineEdit, self).__init__(parent) def mouseDoubleClickEvent(self, event): bar = self.parent().foo.bar </code></pre> <p><strong>PS</strong>:</p> <p>Note that if you put the line-edit in a layout, it will be automatically re-parented to the layout's parent. In a <code>QMainWindow</code>, this will likely be the central-widget. In that case, you should either set the attribute on the central-widget, or retrieve the main-window with <code>self.parent().parent()</code>.</p>
1
2016-09-09T17:42:13Z
[ "python", "oop", "pyqt" ]
Can't make a date query with PyMongo
39,402,679
<p>the issue is the following. I have a mongo collection with the variable jobdate, which is as of now a string I would suppose.However, I need to run a route to query said variable in the following way:</p> <pre><code>@app.route('/active_jobs/&lt;jobdate&gt;', methods = ['GET']) def get_a_date(jobdate): ajobs = mongo.db.ajobs output = [] for q in ajobs.find({'jobdate':jobdate}): output.append ({ 'jobdate' : q['jobdate'], 'jobtime' : q['jobtime'],'plant': q['plant'], 'po': q['po'], 'company': q['company'], 'client': q['client'], 'jobaddress': q['jobaddress'], 'm3': q['m3'], 'use': q['use'], 'formula': q['formula'], 'placement': q['placement'], 'badmix1': q['badmix1'], 'badmix2': q['badmix2'], 'badmix3': q['badmix3'], 'confirmation': q['confirmation'],'status': q['status'] }) return jsonify({'result' : output}) </code></pre> <p>the issue here is the fact that when I try a GET request on Postman, I simply get an empty {'result' : } json object. I suspect the query structure itself may not be the problem but the date formatting.</p> <p>my POST request is as follows, how could I format the date variable to make it queryble so-to-speak.</p> <pre><code>@app.route('/active_jobs/new', methods=['POST']) def add_job(): ajobs = mongo.db.ajobs jobdate = request.json['jobdate']# date of job jobtime = request.json['jobtime']# time of job plant = request.json['plant']# plant for job po = request.json['po']# production order company = request.json['company']# client company name client = request.json['client']# person in charge jobaddress = request.json['jobaddress']#job address use = request.json['use']# concrete use in site m3 = request.json['m3']#job volume formula = request.json['formula']#job formula placement = request.json['placement']#type of placement badmix1 = request.json['badmix1']#batch admixture add-on badmix2 = request.json['badmix2']#batch admixture add-on badmix3 = request.json['badmix3']#batch admixture add-on confirmation = request.json['confirmation']#level of confirmation for job status = request.json['status']#job status ajob_id = ajobs.insert({ 'jobdate' : jobdate, 'jobtime' : jobtime, 'plant': plant, 'po' : po, 'company' : company, 'client' : client, 'jobaddress' : jobaddress, 'use' : use, 'm3' : m3, 'formula' : formula, 'placement' : placement, 'badmix1' : badmix1, 'badmix2' : badmix2, 'badmix3' : badmix3, 'confirmation' : confirmation, 'status' : status }) new_job = ajobs.find_one({'_id' : ajob_id}) output = ({ 'jobdate' : new_job['jobdate'], 'jobtime' : new_job['jobtime'],'plant': new_job['plant'], 'po': new_job['po'], 'company': new_job['company'], 'client': new_job['client'], 'jobaddress': new_job['jobaddress'], 'm3': new_job['m3'], 'use': new_job['use'], 'formula': new_job['formula'], 'placement': new_job['placement'], 'badmix1': new_job['badmix1'], 'badmix2': new_job['badmix2'], 'badmix3': new_job['badmix3'], 'confirmation': new_job['confirmation'],'status': new_job['status'] }) return jsonify({'verify new job': output}) </code></pre> <p>NOTE: for purposes of the app, the date structure must be the following YYYY-MM-DD </p>
2
2016-09-09T01:56:48Z
39,804,383
<p>Although string matching should also work, it would be best to store the date in mongo as a real date value. You can do so with <code>datetime.strptime</code> (note that BSON can only encode <code>datetime.datetime</code> objects, not bare <code>datetime.date</code> objects). For the format YYYY-MM-DD it should work out to something like this in the POST route:</p> <pre><code>from datetime import datetime jobdate = datetime.strptime(request.json['jobdate'], "%Y-%m-%d") </code></pre>
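<p>If the dates are stored that way, the GET route then needs to parse the URL parameter the same way before querying. A sketch based on the route in the question (only a couple of the output fields are shown to keep it short):</p> <pre><code>from datetime import datetime, timedelta

@app.route('/active_jobs/&lt;jobdate&gt;', methods=['GET'])
def get_a_date(jobdate):
    day_start = datetime.strptime(jobdate, "%Y-%m-%d")
    day_end = day_start + timedelta(days=1)
    # match every job whose stored datetime falls inside that calendar day
    query = {'jobdate': {'$gte': day_start, '$lt': day_end}}
    output = [{'po': q['po'], 'company': q['company']} for q in mongo.db.ajobs.find(query)]
    return jsonify({'result': output})
</code></pre>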
0
2016-10-01T08:09:24Z
[ "python", "mongodb", "date", "pymongo" ]
Python: Creating map with 2 lists
39,402,690
<p>I want to create map data from 2 list data. I have a simple example like below. What I want to do is create 'new_map' data like below. If it contains specific data, the value should be True.</p> <pre><code>all_s = ['s1', 's2', 's3', 's4'] data = ['s2', 's4'] new_map = {'s1': False, 's2': True, 's3': False, 's4': True} </code></pre> <p>Are there any smart way (like lambda) to implement this? My python env is 3.X. Of course I can resolve this problem if I use for-iter simply. But I wonder there are better ways.</p>
0
2016-09-09T01:57:55Z
39,402,720
<p>This should do it quickly and efficiently in a pythonic manner:</p> <pre><code> data_set = set(data) new_map = {k: k in data_set for k in all_s} </code></pre>
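<p>With the example data from the question this gives exactly the expected result; the <code>set</code> is only there so that each <code>k in data_set</code> lookup is O(1) instead of scanning the list:</p> <pre><code>all_s = ['s1', 's2', 's3', 's4']
data = ['s2', 's4']

data_set = set(data)
new_map = {k: k in data_set for k in all_s}
print(new_map)  # {'s1': False, 's2': True, 's3': False, 's4': True}
</code></pre>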
4
2016-09-09T02:01:37Z
[ "python", "python-3.x" ]