parsing a dictionary from file in python using argparse?
39,151,817
<p>I have been using a file containing variables as input for a program, which is then imported into the main program at runtime. I am moving this system to using argparse, reading the input values from a file. One of the current input variables is a list of dictionaries, with each dictionary providing settings for a given calculation.</p> <pre><code>CalculationVals=[{'startx':0,'starty':0,'endx':10, 'endy':10},
                 {'startx':1,'starty':1,'endx':12, 'endy':12}]
</code></pre> <p>and then in the main part of the program, the individual CalculationVals are looped over. Is there a way to read this using argparse, or a better way to provide this input using an argparse method? Not relying on additional packages is advantageous here.</p>
0
2016-08-25T17:53:33Z
39,152,312
<p>As Dmitry suggested, passing JSON directly as a command-line argument is not recommended.</p> <p>So I would suggest you pass the path of a JSON file as the argument instead of the whole JSON content.</p> <pre><code># test_prog.py
import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument("-f", "--filepath", help="json file path")
args = parser.parse_args()

with open(args.filepath, 'r') as f:
    config = json.load(f)
print(config)
</code></pre> <p>Then read that file with the json library, as above.</p> <p>If the JSON file is remote, you can pass a URL as the parameter instead!</p> <p>Sample run:</p> <pre><code>$ python test_prog.py -f ./f1.json
[{u'endy': 10, u'endx': 10, u'startx': 0, u'starty': 0}, {u'endy': 12, u'endx': 12, u'startx': 1, u'starty': 1}]
</code></pre> <p>Content of the file, f1.json:</p> <pre><code>[{"startx":0,"starty":0,"endx":10, "endy":10},{"startx":1,"starty":1,"endx":12, "endy":12}]
</code></pre>
0
2016-08-25T18:23:53Z
[ "python", "argparse" ]
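A further stdlib-only variation, not covered in the answer: argparse can parse an inline JSON string itself if you give `json.loads` as the `type=` converter. This is a minimal sketch; the flag name `--calc` is made up for illustration.

```python
import argparse
import json

# Sketch: json.loads as the type= converter turns the argument string
# into a Python list of dicts during parsing (hypothetical flag --calc).
parser = argparse.ArgumentParser()
parser.add_argument("--calc", type=json.loads,
                    help="list of calculation dicts as a JSON string")

args = parser.parse_args(
    ['--calc', '[{"startx": 0, "starty": 0, "endx": 10, "endy": 10}]'])
for settings in args.calc:
    print(settings["startx"], settings["endx"])
```

Malformed JSON then fails at parse time with a normal argparse usage error instead of deep inside the program.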
39,152,572
<p>First option would be to pass in the arguments directly, and call the script once for each set of values. It's unclear if that meets your use case.</p> <pre><code>./your_script.py startx=0, starty=0, endx=10, endy=10
</code></pre> <p>Alternatively, you could use an easily parseable file format such as JSON or CFG, instead of directly importing a Python file.</p> <p>JSON file:</p> <pre><code>{
    "CalculationVals": [{
        "startx": 0,
        "starty": 0,
        "endx": 10,
        "endy": 10
    }, {
        "startx": 1,
        "starty": 1,
        "endx": 12,
        "endy": 12
    }]
}
</code></pre> <p>Python:</p> <pre><code>import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument('file')
args = parser.parse_args()

with open(args.file) as fi:
    configs = json.load(fi)

for values in configs['CalculationVals']:
    startx = values['startx']
    starty = values['starty']
</code></pre> <p>Call the script: <code>./your_script.py config.json</code></p>
0
2016-08-25T18:39:49Z
Python Print Variables with parameters
39,151,868
<p>I know you can do this for a string:</p> <pre><code>print("You have inputted {0} postive numbers and {1} negative numbers".format('3','2'))
</code></pre> <p>And this would produce:</p> <pre><code>You have inputted 3 postive numbers and 2 negative numbers
</code></pre> <p>But I want to insert a letter in a variable name. For example,</p> <pre><code>number = 4
print(nu{}ber.format("m"))
</code></pre> <p>And this should produce <code>4</code>. Any ideas?</p>
0
2016-08-25T17:56:31Z
39,151,914
<pre><code>print(eval('nu{}ber'.format("m")))
</code></pre> <p>Eval evaluates the expression from a string, and has access to program variables by default.</p> <p>Or:</p> <pre><code>print(globals()['nu{}ber'.format("m")])
</code></pre> <p>Globals is the dictionary {'variable_name': value}.</p> <hr> <p>Unless you trust the source of the string, though, you should be careful. For example, if you have</p> <pre><code>class Nuke:
    def destroy_everything():
        launch_nukes()

class Cucumber:
    ...

print(eval('nu{}ber'.format(string_received_from_user)))
</code></pre> <p>and</p> <pre><code>string_received_from_user = "ke.destroy_everything() or Cucum"
</code></pre> <p>then the command</p> <pre><code>Nuke.destroy_everything() or Cucumber
</code></pre> <p>will be executed.</p>
2
2016-08-25T17:59:02Z
[ "python", "output" ]
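Beyond `eval` and `globals()`, the conventional safe alternative to dynamically-named variables is to keep the values in a plain dict and build the key with the same string formatting. A minimal sketch:

```python
# Sketch: store would-be "dynamic variables" in a dict instead of
# using eval/globals; unknown keys raise KeyError rather than
# executing arbitrary code.
values = {"number": 4}
key = "nu{}ber".format("m")
print(values[key])
```

This avoids the code-injection risk described above entirely, because a dict lookup never executes its key.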
39,151,956
<p>Though it's not recommended, you can access your variable dynamically through <code>globals</code>:</p> <pre><code>&gt;&gt;&gt; number = 4
&gt;&gt;&gt; globals()['num' + 'ber']
4
</code></pre> <p><code>eval</code> works too:</p> <pre><code>&gt;&gt;&gt; eval('num' + 'ber')
4
</code></pre>
0
2016-08-25T18:00:57Z
Menu_Opciones() missing 2 required positional arguments: 'Pregunta' and 'Opciones'
39,151,875
<p>I have a problem which involves a script, "script.py", and a view in "views.py" in a Django project. The goal is to show on a page a question, "Pregunta", and a list of options, "Opciones". In one part of the script I need to call the view function "Menu_Opciones" in this way:</p> <pre><code>ESI_App.views.Menu_Opciones(request, Pregunta, Opciones)
</code></pre> <p>In views.py I have the function defined in this way:</p> <pre><code>def Menu_Opciones(request, Pregunta, Opciones):
    for i in range(len(Opciones)):
        ModelOpciones.objects.create(opciones=Opciones[i])
    form = OpcionesForm(request.POST or None,
                        field1_qs=ModelOpciones.objects.all())
    context = {
        'pregunta': Pregunta,
        'form': form,
    }
    if form.is_valid():
        opcion = form.cleaned_data['Campo_Opciones']
    return render(request, "Menu_op.html", context)
</code></pre> <p>Here's the traceback:</p> <pre><code>Environment:

Request Method: GET
Request URL: http://127.0.0.1:8000/Menu_Opciones/

Django Version: 1.9.7
Python Version: 3.4.3
Installed Applications:
['django.contrib.admin',
 'django.contrib.auth',
 'django.contrib.contenttypes',
 'django.contrib.sessions',
 'django.contrib.messages',
 'django.contrib.staticfiles',
 'ESI_App']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
 'django.contrib.sessions.middleware.SessionMiddleware',
 'django.middleware.common.CommonMiddleware',
 'django.middleware.csrf.CsrfViewMiddleware',
 'django.contrib.auth.middleware.AuthenticationMiddleware',
 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
 'django.contrib.messages.middleware.MessageMiddleware',
 'django.middleware.clickjacking.XFrameOptionsMiddleware']

Traceback:
File "C:\Users\Miguel\Desktop\VenvProyecto\lib\site-packages\django\core\handlers\base.py" in get_response
  149.                 response = self.process_exception_by_middleware(e, request)
File "C:\Users\Miguel\Desktop\VenvProyecto\lib\site-packages\django\core\handlers\base.py" in get_response
  147.                 response = wrapped_callback(request, *callback_args, **callback_kwargs)

Exception Type: TypeError at /Menu_Opciones/
Exception Value: Menu_Opciones() missing 2 required positional arguments: 'Pregunta' and 'Opciones'
</code></pre> <p><strong>More information:</strong> I have had this error since I changed computers. I created a new project and app, copied the code, and carefully modified all paths, project files and folder names, app name, settings...</p> <p>As you can see, I call the function with those 3 arguments and I can't imagine why I get that error. Please give me a hand. Thanks in advance.</p>
0
2016-08-25T17:56:58Z
39,190,432
<p>I fixed it! First I tried to pass the parameters as keywords but got the same error, so I simply defined "Pregunta" and "Opciones" in views as global variables:</p> <pre><code>Preg = ''
Opcs = []
</code></pre> <p>I imported them in "script.py" and assigned their proper values:</p> <pre><code>ESI_App.views.Preg = Pregunta
ESI_App.views.Opcs = Opciones
</code></pre> <p>Then I call the view and use those global variables :) I know abusing global variables is not elegant, but it avoided the error and solved the problem. If you find a more elegant solution or way to solve that error, I would appreciate it and would mark that solution as accepted.</p>
0
2016-08-28T10:41:05Z
[ "python", "django", "parameters", "arguments", "views" ]
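As the answer admits, the globals are a workaround. One more conventional variant, sketched here in plain Python (no Django; the `request` argument is just a placeholder object, and the names are taken from the question), is to give the extra view parameters default values, so the URL dispatcher can call the view with only `request` while script.py can still pass them explicitly:

```python
# Sketch: defaults let Django's dispatcher call the view with only
# `request`, while other callers pass Pregunta/Opciones explicitly.
def Menu_Opciones(request, Pregunta='', Opciones=None):
    Opciones = Opciones if Opciones is not None else []
    # stand-in for the real view body; returns what it received
    return (Pregunta, len(Opciones))

# Called the way the URL dispatcher would:
print(Menu_Opciones(object()))
# Called the way script.py would:
print(Menu_Opciones(object(), "Elige:", ["a", "b"]))
```

Note the `Opciones=None` default rather than `Opciones=[]`: a mutable default would be shared across calls.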
Pandas: Use index value in apply function over a column
39,151,978
<p>I was wondering if there's a way to use the index value while using apply over a column of the dataframe. Suppose I have a df like:</p> <pre><code>  col1     col2
0    a  [0,1,2]
1    b    [0,2]
2    c  [0,1,2]
</code></pre> <p>I want to write an apply function on df.col2 such that it removes the index value from the list in col2, leaving a df like:</p> <pre><code>  col1   col2
0    a  [1,2]
1    b  [0,2]
2    c  [0,1]
</code></pre> <p><strong>The index value may or may not be in the list. But if it does exist in the list it should be removed.</strong> Note that this isn't the actual use-case but similar to what I need. I have</p> <pre><code>df.col2.apply(lambda x: f(x))
</code></pre> <p>and in f(x) I want to be able to access the index value of x, if that's possible, or a workaround. <strong>I know that df.apply() can work on the column values and df.index.map() can work on the index. Is there a method in Pandas that combines the use-cases of the two in one single elegant solution?</strong> Thanks for the help.</p> <p>UPDATE: the index is an integer value and it is constrained to consecutive whole numbers. col2 will have a list for each index. I want to check if the index is in that list and remove it from the list if it exists. So, say, for row index 3 we have the list [27,36,3,9,7]; I want to drop 3 from the list. The list is unordered.</p>
2
2016-08-25T18:01:54Z
39,152,043
<pre><code>def name_drop(x):
    x_ = x.drop('col2')
    _x = pd.Series(x.col2)
    _x = _x[_x != x.name].tolist()
    x = x_.append(pd.Series({'col2': _x}))
    return x

df.apply(name_drop, axis=1)
</code></pre> <p><a href="http://i.stack.imgur.com/Xw04j.png" rel="nofollow"><img src="http://i.stack.imgur.com/Xw04j.png" alt="enter image description here"></a></p>
1
2016-08-25T18:07:15Z
[ "python", "pandas", "dataframe", "apply" ]
Pandas: Use index value in apply function over a column
39,151,978
<p>I was wondering if there's a way to use index value while using apply over a column of the dataframe. Suppose I have df like:</p> <pre><code> col1 col2 0 a [0,1,2] 1 b [0,2] 2 c [0,1,2] </code></pre> <p>I want to write an apply function on df.col2 such that it removes the index values from the list in col2 leaving a df like:</p> <pre><code> col1 col2 0 a [1,2] 1 b [0,2] 2 c [0,1] </code></pre> <p><strong>The index value may or may not be in the list. But if it does exist in the list it should be removed.</strong> Note that this isn't the actual use-case but similar to what I need. I have </p> <pre><code>df.col2.apply(lambda x: f(x)) </code></pre> <p>and in the f(x) I want to be able to access the index value of x if its possible or a workaround. <strong>I know that df.apply() can work on the column values and df.index.map() can work on the index. Is there a method in Pandas that combines the use-cases of the two in one single elegant solution</strong>. Thanks for the help.</p> <p>UPDATE: the index is an integer value and it will be constrained in such a way that its consecutive whole numbers. The col2 will have a list for each index. I want to check if the index is in that list and remove it from the list if it exists. So lets say for row index 3 we have list [27,36,3,9,7]. I want to drop 3 from the list. The list is unordered</p>
2
2016-08-25T18:01:54Z
39,153,953
<p>If I understand your question correctly, this should do the job:</p> <pre><code>df.apply(lambda x: x.name in x.col2 and x.col2.remove(x.name), axis=1)
</code></pre> <p>With the example from the original post:</p> <pre><code>In [226]: df
Out[226]:
  col1       col2
0    a  [0, 1, 2]
1    b     [0, 2]
2    c  [0, 1, 2]

In [227]: df.apply(lambda x: x.name in x.col2 and x.col2.remove(x.name), axis=1);

In [228]: df
Out[228]:
  col1    col2
0    a  [1, 2]
1    b  [0, 2]
2    c  [0, 1]
</code></pre>
2
2016-08-25T20:10:16Z
39,187,431
<p>Maybe you can try this. It will not delete the index values from the list, but will replace them with <code>nan</code>:</p> <pre><code>df = pd.DataFrame({'a': list('mno'), 'b': [[1,2,3],[1,3,4],[5,6,2]]})
df1 = df.b.apply(pd.Series)
df['b'] = np.array(df1[df1.apply(lambda x: x != df.index.values)]).tolist()
</code></pre> <blockquote> <p><code>Out[111]:
   a                b
0  m  [1.0, 2.0, 3.0]
1  n  [nan, 3.0, 4.0]
2  o  [5.0, 6.0, nan]</code></p> </blockquote>
0
2016-08-28T02:10:09Z
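The core operation in the answers above — drop each row's own index from its list — can be seen without pandas at all. A plain-Python sketch of the same logic, where `enumerate` plays the role of the row index (`x.name` in `df.apply(..., axis=1)`):

```python
# Sketch: remove each row's index from its own list; mirrors the
# pandas answers where x.name is the row index.
col2 = [[0, 1, 2], [0, 2], [0, 1, 2]]
cleaned = [[v for v in lst if v != i] for i, lst in enumerate(col2)]
print(cleaned)  # [[1, 2], [0, 2], [0, 1]]
```

Unlike `list.remove`, the comprehension also handles duplicates of the index value and rows where the index is absent, without mutating the original lists.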
Assign permissions to user in Django
39,152,165
<p>I'm using the Django User model in my Django project. My user view is:</p> <pre><code>from django.contrib.auth.models import User
from django.dispatch import receiver
from django.db.models.signals import post_save

from rest_framework import generics
from rest_framework import permissions
from rest_framework import status
from rest_framework.authtoken.models import Token
from rest_framework.response import Response

from myapp.serializers.user import UserSerializer, UserListSerializer


class UserList(generics.ListCreateAPIView):
    model = User
    permission_classes = (permissions.IsAuthenticated, )
    _ignore_model_permissions = True
    serializer_class = UserListSerializer
    queryset = User.objects.exclude(pk=-1)

    def post(self, request, *args, **kwargs):
        userName = request.DATA.get('username', None)
        userPass = request.DATA.get('password', None)
        user = User.objects.create_user(username=userName, password=userPass)
        if not user:
            return Response({'message': "error creating user"},
                            status=status.HTTP_200_OK)
        return Response({'username': user.username},
                        status=status.HTTP_201_CREATED)


class UserDetail(generics.RetrieveUpdateDestroyAPIView):
    model = User
    permission_classes = (permissions.IsAuthenticated, )
    _ignore_model_permissions = True
    serializer_class = UserSerializer
    queryset = User.objects.exclude(pk=-1)
</code></pre> <p>When I try to view the users page logged in as a superuser, I can see the list of all the users. But when I try to access it as a non-superuser, I get an empty list. I'd like every user to be able to view the user list, but a non-superuser should only be able to view its own user detail. I tried using signals (such as <code>post_migrate</code>), but the problem is that for each user I need to give view permission to every other user every time I migrate.</p> <p>Is there an easier way to do this?</p>
0
2016-08-25T18:14:27Z
39,157,722
<blockquote> <p>I can see the list of all the users. But when I try to access it with a non-superuser, I get an empty list.</p> </blockquote> <p>From your code you should be able to access UserList even if you are not a superuser.</p> <blockquote> <p>I like every user to be able to view the user list but only its own user detail if it is non superuser.</p> </blockquote> <p>Try a custom permission:</p> <pre><code>class IsOwner(permissions.BasePermission):
    """
    Custom permission to only allow owners of an object to edit it.
    """

    def has_object_permission(self, request, view, obj):
        return obj == request.user


class UserDetail(generics.RetrieveUpdateDestroyAPIView):
    model = User
    permission_classes = (IsOwner, )
    _ignore_model_permissions = True
    serializer_class = UserSerializer
    queryset = User.objects.exclude(pk=-1)
</code></pre> <p>Note that <code>IsOwner</code> is referenced directly, not as <code>permissions.IsOwner</code>, since it is your own class and not part of the <code>permissions</code> module. Now only the owner can see their details.</p>
0
2016-08-26T03:06:47Z
[ "python", "django", "permissions", "user", "signals" ]
Behaviour Driven Development - Undefined steps in Behave using Python with Flask
39,152,217
<p>I'm following a Flask tutorial and currently looking at Behaviour Driven Development using Behave.</p> <p>My task is to build a very basic blog application that allows a single user to log in, log out and create blog posts, using BDD.</p> <p>I've written the feature file, the steps file and the environment file. I've then written code to enable a user to log in and out.</p> <p>When I run the application locally and manually test, it works as expected, allowing the user to log in and out and displaying the required text ("You were logged in" or "You were logged out"), so I'm assuming the problem is with the feature file or steps file rather than the application code.</p> <p>When I run Behave, the final step appears to be "undefined".</p> <p>The relevant part of the feature file is:</p> <pre><code>Feature: flaskr is secure in that users must log in and log out to access certain features

  Scenario: successful login
    Given flaskr is setup
    When we log in with "admin" and "admin"
    Then we should see the alert "You were logged in"

  Scenario: successful logout
    Given flaskr is setup
    And we log in with "admin" and "admin"
    When we log out
    Then we should see the alert "You were logged out"
</code></pre> <p>And my steps file is:</p> <pre><code>from behave import *


@given(u'flaskr is setup')
def flask_is_setup(context):
    assert context.client


@given(u'we log in with "{username}" and "{password}"')
@when(u'we log in with "{username}" and "{password}"')
def login(context, username, password):
    context.page = context.client.post('/login',
                                       data=dict(username=username, password=password),
                                       follow_redirects=True)
    assert context.page


@when(u'we log out')
def logout(context):
    context.page = context.client.get('/logout', follow_redirects=True)
    assert context.page


@then(u'we should see the alert "{message.encode("utf-8")}"')
def message(context, message):
    assert message in context.page.data
</code></pre> <p>From the environment file:</p> <pre><code>def before_feature(context, feature):
    context.client = app.test_client()
</code></pre> <p>It is the final "then" step that seems to be the problem. I've tried checking the tutorial solutions and searching elsewhere but I can't seem to solve this part of the code. I've had to encode the message as I am using Python version 3.5 (the tutorial was using version 2.7, if this is relevant).</p> <p>Any pointers would be very much appreciated.</p>
0
2016-08-25T18:17:38Z
39,171,893
<p>Moving the encode solved the problem. I now have:</p> <pre><code>@then(u'we should see the alert "{message}"')
def message(context, message):
    assert message.encode("utf-8") in context.page.data
</code></pre>
0
2016-08-26T17:38:48Z
[ "python", "flask", "bdd", "python-behave" ]
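The underlying issue is Python 3's bytes/str split: the Flask test client's `page.data` is bytes, so the str step parameter must be encoded (or the page decoded) before the `in` check. A minimal standalone sketch, with a bytes literal standing in for the test client response:

```python
# Sketch: `in` between str and bytes fails in Python 3, so one side
# must be converted first. `page_data` stands in for response.data.
page_data = b"<p>You were logged in</p>"
message = "You were logged in"

print(message.encode("utf-8") in page_data)   # encode the str side
print(message in page_data.decode("utf-8"))   # or decode the bytes side
```

The original step file failed because the `.encode(...)` call was inside the step-pattern string, where Behave treats it as part of the parameter name rather than as Python code.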
Django Form validation errors handling using AJAX
39,152,255
<p>Having the following form being generated in a modal:</p> <p><strong>Template:</strong></p> <p>...</p> <pre><code>&lt;div id="form-modal-body" class="modal-body"&gt;
    &lt;form id="register_new_customer" method="post" action="{% url "customer:new_customer" %}"&gt;
        {% csrf_token %}
        &lt;div id="form-errors" class='form-errors' class="text-danger"&gt;&lt;/div&gt;
        {% for field in customer_form %}
        &lt;div class="form-group {% if field.errors %} has-error{% endif %}"&gt;
            &lt;span class="help-block"&gt;{{ field.errors }}&lt;/span&gt;
            &lt;label for="{{ field.id_for_label }}" class="control-label"&gt;{{ field.label }}&lt;/label&gt;
            &lt;div&gt;{{ field }}&lt;/div&gt;
        &lt;/div&gt;
        {% endfor %}
        &lt;div class="modal-footer"&gt;
            &lt;div class="col-lg-10 col-lg-offset-2"&gt;
                &lt;button type="reset" class="btn btn-default" data-dismiss="modal"&gt;Cancel&lt;/button&gt;
                &lt;button type="submit" class="btn btn-primary"&gt;Submit&lt;/button&gt;
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/form&gt;
&lt;/div&gt;
</code></pre> <p>...</p> <p>I'm submitting the form using AJAX as follows:</p> <pre><code>$(document).on('submit', '#register_new_customer', function(e) {
    e.preventDefault();
    var frm = $('#register_new_customer')
    $.ajax({
        type: frm.attr('method'),
        url: frm.attr('action'),
        data: frm.serialize(),
        success: function (response) {
            // TODO: How to handle Form Validation error messages?!
        }
    })
    return false;
</code></pre> <p>In my view I'm returning an HttpResponse in case the form is not valid:</p> <pre><code>return HttpResponse(customer_form.errors.as_json())
</code></pre> <pre><code>{"phone_number": [{"message": "Please enter a valid Phone Number!", "code": "invalid"}],
 "box_enabled": [{"message": "This field is required.", "code": "required"}]}
</code></pre> <p>Here I have the error messages and the corresponding form fields:</p> <ul> <li>phone_number</li> <li>box_enabled</li> </ul> <p>How can I pass these error messages correctly to the form HTML?</p> <p>Thanks</p>
1
2016-08-25T18:19:56Z
39,152,735
<p>I think you want something along these lines in your view:</p> <pre><code>from django.http import JsonResponse

def phone_number_eval(request):
    if phone_number_is_valid:  # pseudocode: substitute your actual validation check
        response = {'status': 1, 'message': "Ok"}
    else:
        response = {'status': 2, 'message': "Please enter a valid phone number."}
    return JsonResponse(response)
</code></pre> <p>My Javascript isn't so good, but you can do something to the effect of: if <code>response.status</code> is <code>2</code>, display <code>message</code>. Maybe someone else can give the code for it.</p>
1
2016-08-25T18:50:59Z
[ "python", "ajax", "django" ]
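On the Python side, the payload produced by `form.errors.as_json()` is ordinary JSON whose top-level keys are the field names, so the AJAX success handler can map each key to the matching input. A stdlib-only sketch of parsing and flattening that payload; the literal here just reproduces the error structure shown in the question:

```python
import json

# Sketch: flatten the as_json() payload into {field_name: [messages]}
# so the client can attach each message to its form field.
payload = json.dumps({
    "phone_number": [{"message": "Please enter a valid Phone Number!", "code": "invalid"}],
    "box_enabled": [{"message": "This field is required.", "code": "required"}],
})

errors = {field: [e["message"] for e in errs]
          for field, errs in json.loads(payload).items()}
print(errors["phone_number"])
```

Returning this via `JsonResponse` (rather than a bare `HttpResponse` string) also sets the JSON content type, so jQuery parses `response` into an object automatically.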
Computing the average of non-negative numbers from a text file
39,152,256
<p>I am trying to read in a text file of integers, make it a list, compute the average of all integers, compute the average of all non-negative integers, and print the max and min. I was able to compute the average of all integers but am having difficulty getting the average of all non-negative integers and the max and min.</p> <p>Here is what I have so far:</p> <pre><code>file = open("numberlist.txt", "r")

sum = 0
list = []
for num in file:
    list.append(num)

poslist = []
for number in file:
    x = int(number)
    if x &gt; 0:
        poslist.append(x)
        sum += number

posavg = sum / len(poslist)

print("The number of non-negative integers is ", len(poslist))
print("The average of the non-negtive integers is ", posavg)
</code></pre>
1
2016-08-25T18:19:58Z
39,152,374
<p>If the numbers are separated by spaces<br> (or, as I understand from your code, by new lines)<br> this is a very short and <em>"Pythonic"</em> task!</p> <p>First, let's read the entire file into numbers<br> and also have the file close automatically:</p> <pre><code>with open('numberlist.txt') as f:
    nums = [int(x) for x in f.read().split() if int(x) &gt;= 0]
</code></pre> <p>After the previous 2 lines you have all the non-negative<br> numbers in a list called <code>nums</code>!</p> <p>Now, the average would be:</p> <pre><code>avg = sum(nums) / len(nums)
</code></pre> <p>And the min/max would be:</p> <pre><code>minNum, maxNum = min(nums), max(nums)
</code></pre> <p>And that's all!</p> <p>Now, I pushed as much Python as I think is possible<br> into this task, so by understanding this code you make<br> a leap in Python!</p>
0
2016-08-25T18:27:12Z
[ "python", "python-3.x", "if-statement", "for-loop", "accumulator" ]
39,152,407
<p>This keeps most of your code and adds the non-negative part to it (perhaps it should be called positive instead? :). Note that the numbers are converted with <code>int()</code> as they are read, so that the sort at the end compares them numerically rather than as strings:</p> <pre><code>file = open("numberlist.txt", "r")

sum = 0
nonNegativeTotal = 0
nonNegativeCount = 0
list = []
for num in file:
    list.append(int(num))

for number in list:
    x = int(number)
    if x &gt;= 0:
        nonNegativeCount += 1
        nonNegativeTotal += x
    sum += x

avg = sum / len(list)
avgNonNegative = nonNegativeTotal / nonNegativeCount

print("The number of integers is ", len(list))
print("The overall average is ", avg)
print("The number of non-negative numbers is ", nonNegativeCount)
print("The non-negative average is ", avgNonNegative)

list.sort()
print("The minimum number is ", list[0])
print("The maximum number is ", list[-1])
</code></pre> <p>For the min and max you could also do:</p> <pre><code>minNum, maxNum = min(list), max(list)
</code></pre>
0
2016-08-25T18:29:33Z
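Both averages can also be computed with the stdlib `statistics` module once the integers are in a list. A minimal sketch, with an in-memory list standing in for the contents of numberlist.txt:

```python
from statistics import mean

# Sketch: `nums` stands in for the integers read from numberlist.txt.
nums = [4, -2, 7, 0, -5, 3]
non_negative = [n for n in nums if n >= 0]

print(mean(nums))           # average of all integers
print(mean(non_negative))   # average of the non-negative ones
print(min(nums), max(nums))
```

`mean` raises `StatisticsError` on an empty list, which surfaces the "no non-negative numbers" edge case explicitly instead of a ZeroDivisionError.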
undefined variable while playing with dates and functions
39,152,276
<p>You could call me an extreme newbie; I have a course at school for which I need to be able, sort of, to solve problems using Python. All have worked so far, but the last one (just a small part really) won't give in.</p> <p>The first 4 functions work fine (2 arguments each, a birthday (bday) and a random date (today)). They determine whether or not the random date is a birthday/unbirthday/hundredday/sameweekday, returning True or False respectively. The first line of every function is the following; the rest of the script doesn't matter much.</p> <pre><code>def birthday(bday, today):
def unbirthday(bday, today):
def hundredday(bday, today):
def sameweekday(bday, today):
</code></pre> <p>Again, these work fine.</p> <p>The last function has to return all dates, between a certain start and end date, on which one of the above variations of birthdays match. The first argument is again bday, the next is start (by default bday, this is the troublemaker), the third is end (by default today) and the fourth is birthday (by default the actual birthday function).</p> <pre><code>def birthdays(bday, start=bday, end=date.today(), birthday=birthday):
</code></pre> <p>It's the start=bday that won't work, stating that this bday is undefined. The rest of the script doesn't matter as I don't get that far.</p> <p>(I import datetime at the beginning of the script and all the first functions work fine using its tools.)</p>
0
2016-08-25T18:21:11Z
39,152,322
<p>You cannot read from a variable before it's been created:</p> <pre><code>def birthdays(bday, start=bday, end=date.today(), birthday=birthday):
              ^---1       ^---2
</code></pre> <p>Function arguments as above are just variable name definitions. "The chunk of data inserted into the function call at this point will be named <code>bday</code>". They don't exist as readable variables WITHIN the function signature, only within the function's body itself. So your #2 above is trying to read from a variable which doesn't exist (yet).</p>
2
2016-08-25T18:24:34Z
[ "python", "function", "date" ]
39,152,347
<p>One solution would be to default <code>start=None</code>, and then in the function body have:</p> <pre><code>if start is None:
    start = bday
</code></pre> <p>which should give you the behaviour you want.</p>
2
2016-08-25T18:25:49Z
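Putting that suggestion together, the standard pattern looks like the sketch below. `end=date.today()` is moved inside the body for a related reason worth knowing: default expressions are evaluated once, when `def` runs, so a `date.today()` default would be frozen at import time rather than reflecting the day of the call.

```python
from datetime import date

# Sketch of the None-default pattern: defaults that depend on another
# argument (start=bday) or on "now" (end=today) are resolved in the body.
def birthdays(bday, start=None, end=None):
    if start is None:
        start = bday
    if end is None:
        end = date.today()
    return start, end

s, e = birthdays(date(1990, 5, 17))
print(s)  # 1990-05-17
```

The fourth parameter from the question, `birthday=birthday`, is fine as written, because the `birthday` function already exists when `def birthdays(...)` is executed.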
Multilayer feedforward net fails to train in TensorFlow
39,152,282
<p>I started with the TensorFlow tutorial to classify the images in the MNIST data set using a single-layer feedforward neural net. That works OK; I get 80+ percent on the test set. Then I tried to modify it into a multilayer network by adding a new layer in between. After this modification all my attempts to train the network fail. For the first couple of iterations the network becomes a bit better, but then it stagnates at 11.35% accuracy.</p> <p>First twenty iterations using 1 hidden layer:</p> <pre><code>Train set: 0.124, test set: 0.098
Train set: 0.102, test set: 0.098
Train set: 0.112, test set: 0.101
Train set: 0.104, test set: 0.101
Train set: 0.092, test set: 0.101
Train set: 0.128, test set: 0.1135
Train set: 0.12, test set: 0.1135
Train set: 0.114, test set: 0.1135
Train set: 0.108, test set: 0.1135
Train set: 0.1, test set: 0.1135
Train set: 0.114, test set: 0.1135
Train set: 0.11, test set: 0.1135
Train set: 0.122, test set: 0.1135
Train set: 0.102, test set: 0.1135
Train set: 0.12, test set: 0.1135
Train set: 0.106, test set: 0.1135
Train set: 0.102, test set: 0.1135
Train set: 0.116, test set: 0.1135
Train set: 0.11, test set: 0.1135
Train set: 0.124, test set: 0.1135
</code></pre> <p>It does not matter how long I train it; it is stuck here. I have tried to change from rectified linear units to softmax; both yield the same result. I have tried to change the fitness function to e=(y_true-y)^2. Same result.</p> <p>First twenty iterations using no hidden layers:</p> <pre><code>Train set: 0.124, test set: 0.098
Train set: 0.374, test set: 0.3841
Train set: 0.532, test set: 0.5148
Train set: 0.7, test set: 0.6469
Train set: 0.746, test set: 0.7732
Train set: 0.786, test set: 0.8
Train set: 0.788, test set: 0.7887
Train set: 0.752, test set: 0.7882
Train set: 0.84, test set: 0.8138
Train set: 0.85, test set: 0.8347
Train set: 0.806, test set: 0.8084
Train set: 0.818, test set: 0.7917
Train set: 0.85, test set: 0.8063
Train set: 0.792, test set: 0.8268
Train set: 0.812, test set: 0.8259
Train set: 0.774, test set: 0.8053
Train set: 0.788, test set: 0.8522
Train set: 0.812, test set: 0.8131
Train set: 0.814, test set: 0.8638
Train set: 0.778, test set: 0.8604
</code></pre> <p>Here is my code:</p> <pre><code>import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Parameters
batch_size = 500

# Create the network structure
# ----------------------------

# First layer
x = tf.placeholder(tf.float32, [None, 784])
W_1 = tf.Variable(tf.zeros([784, 10]))
b_1 = tf.Variable(tf.zeros([10]))
y_1 = tf.nn.relu(tf.matmul(x, W_1) + b_1)

# Second layer
W_2 = tf.Variable(tf.zeros([10, 10]))
b_2 = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(y_1, W_2) + b_2)

# Loss function
y_true = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_true * tf.log(y), reduction_indices=[1]))

# Training method
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_true, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Train network
# -------------
sess = tf.Session()
sess.run(tf.initialize_all_variables())

batch, batch_labels = mnist.train.next_batch(batch_size)
for i in range(20):
    print("Train set: " +
          str(sess.run(accuracy, feed_dict={x: batch, y_true: batch_labels})) +
          ", test set: " +
          str(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                            y_true: mnist.test.labels})))
    sess.run(train_step, feed_dict={x: batch, y_true: batch_labels})
    batch, batch_labels = mnist.train.next_batch(batch_size)
</code></pre> <p>So with this code it does not work, but if I change from</p> <pre><code>y = tf.nn.softmax(tf.matmul(y_1, W_2) + b_2)
</code></pre> <p>to</p> <pre><code>y = tf.nn.softmax(tf.matmul(x, W_1) + b_1)
</code></pre> <p>then it works. What have I missed?</p> <p>Edit: Now I have it working. Two changes were needed. First, initializing the weights to random values instead of zero (yes, it was actually the weights that needed to be non-zero; leaving the biases at zero was OK despite the relu function). The second thing is strange to me: if I remove the softmax function from the output layer and, instead of manually applying the formula for cross-entropy, use the softmax_cross_entropy_with_logits(y, y_true) function, then it works. As I understand it, that should be the same... And previously I also tried the sum of squared errors, which didn't work either. Anyway, the following code is working. (Quite ugly, but working.) With 10k iterations it gets me 93.59% accuracy on the test set, so not optimal in any way, but better than the one with no hidden layer. After only 20 iterations it's already up to 65%.
</p> <pre><code>import numpy as np import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) # Parameters batch_size = 500 # Create the network structure # ---------------------------- # First layer x = tf.placeholder(tf.float32, [None, 784]) W_1 = tf.Variable(tf.truncated_normal([784,10], stddev=0.1)) b_1 = tf.Variable(tf.truncated_normal([10], stddev=0.1)) y_1 = tf.nn.relu(tf.matmul(x,W_1) + b_1) # Second layer W_2 = tf.Variable(tf.truncated_normal([10,10], stddev=0.1)) b_2 = tf.Variable(tf.truncated_normal([10], stddev=0.1)) y = tf.matmul(y_1,W_2) + b_2 # Loss function y_true = tf.placeholder(tf.float32, [None, 10]) cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y,y_true)) # Training method train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_true,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Train network # ------------- sess = tf.Session() sess.run(tf.initialize_all_variables()) batch, batch_labels = mnist.train.next_batch(batch_size) for i in range(10000): if i % 100 == 0: print("Train set: " + str(sess.run(accuracy, feed_dict={x: batch, y_true: batch_labels})) + ", test set: " + str(sess.run(accuracy, feed_dict={x: mnist.test.images, y_true: mnist.test.labels}))) sess.run(train_step, feed_dict={x: batch, y_true: batch_labels}) batch, batch_labels = mnist.train.next_batch(batch_size) </code></pre>
1
2016-08-25T18:21:40Z
39,155,484
<p>A few suggestions:</p> <p>1- Initialize both weight variables from a truncated normal distribution with a small standard deviation, instead of initializing with <code>zeros</code>:</p> <pre><code>weight_1 = tf.Variable(tf.truncated_normal([784,10], stddev=0.1)) </code></pre> <p>2- Lower the learning rate until the accuracy shows some variation between iterations.</p> <p>3- When using ReLU, initialize the bias with a slightly positive value. This suggestion probably has less to do with the issue you are seeing:</p> <pre><code>bias_1 = tf.Variable(tf.constant(.05, shape=[10])) </code></pre>
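The effect of all-zero initialization can be seen without TensorFlow at all. A minimal NumPy sketch (shapes chosen to match the question) shows that a zero-weight ReLU layer outputs zero for every input, so the ReLU "gate" is closed everywhere and no gradient can flow back into the first layer's weights:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 784))   # a batch of 5 fake inputs

W1 = np.zeros((784, 10))        # all-zero init, as in the question's code
b1 = np.zeros(10)
h = relu(x @ W1 + b1)

# Every hidden activation is exactly 0 and no pre-activation is positive,
# so the hidden layer produces no learning signal at all.
print(h.sum(), (x @ W1 + b1 > 0).any())  # 0.0 False
```

With random initialization, some pre-activations are positive, the gate opens, and training can proceed.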
1
2016-08-25T22:08:00Z
[ "python", "numpy", "neural-network", "tensorflow", "feed-forward" ]
Why does python3-config list -ldl before the last lined library?
39,152,358
<p>Running <code>python2.7-config --libs</code>, <code>python3.5-config --libs</code>, etc. results in the following output:</p> <pre><code>-lpython&lt;version&gt; -lpthread -ldl -lutil -lm </code></pre> <p>Why isn't libdl the last linked library? I believe the order only matters when the preceding libraries reference the latter, but do neither libutil nor libm reference libdl, but libpthread does? I am unable to examine libpthread the way I normally might (readelf and/or nm) as it is apparently not in the right format.</p>
0
2016-08-25T18:26:08Z
39,228,654
<p>Neither of the later libraries appears to reference libdl, so linking to libdl first is unnecessary. In fact, linking to libdl <em>first</em> appears to have no effect, nor does skipping it entirely.</p> <pre><code>#include "Python.h" int main(int argc, const char **argv) {} </code></pre> <p>...</p> <pre><code>gcc -o main main.c `python3.5m-config --cflags` -lpython3.5m -lpthread -lutil -lm </code></pre>
0
2016-08-30T13:19:10Z
[ "python" ]
Python 2.7 Improve algorithm that checks if a dictionary key is not in a list and if so deletes that key
39,152,395
<p>I was playing around with a few solutions but am looking for an optimal and clean way to do this in Python. Basically I have a dictionary called 'd', and a list 'l'. If the dictionary contains a key that is not in the list, delete that key from the dictionary. So given: </p> <pre><code>d = {1 : "bob", 2 : "mary", 3 : "joe"} l = [2,3] </code></pre> <p>The first key of d gets deleted.</p> <pre><code>d = {2 : "mary", 3 : "joe"} l = [2,3] </code></pre> <p>I have a working solution but feel the use of flags is unnecessary. </p> <pre><code>flag = False del_vals = list() for key in d.iterkeys(): for i in l: if key == i: flag = True if flag == False: del_vals.append(key) flag = False for k in del_vals: d.pop(k, None) </code></pre> <p>Is there a way I can write it to improve performance and make it cleaner, whether using generators or anything else? </p>
0
2016-08-25T18:28:48Z
39,152,561
<p>A dict comprehension will work starting from Python 2.7:</p> <pre><code>d = {k: v for k, v in d.iteritems() if k in l} </code></pre> <p>Use <code>items()</code> instead of <code>iteritems()</code> in Python 3:</p> <pre><code>d = {k: v for k, v in d.items() if k in l} </code></pre>
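A quick check with the question's data (Python 3 syntax):

```python
d = {1: "bob", 2: "mary", 3: "joe"}
l = [2, 3]

# Keep only the entries whose key appears in l
d = {k: v for k, v in d.items() if k in l}
print(d)  # {2: 'mary', 3: 'joe'}
```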
4
2016-08-25T18:39:05Z
[ "python", "python-2.7", "dictionary" ]
Python 2.7 Improve algorithm that checks if a dictionary key is not in a list and if so deletes that key
39,152,395
<p>I was playing around with a few solutions but am looking for an optimal and clean way to do this in Python. Basically I have a dictionary called 'd', and a list 'l'. If the dictionary contains a key that is not in the list, delete that key from the dictionary. So given: </p> <pre><code>d = {1 : "bob", 2 : "mary", 3 : "joe"} l = [2,3] </code></pre> <p>The first key of d gets deleted.</p> <pre><code>d = {2 : "mary", 3 : "joe"} l = [2,3] </code></pre> <p>I have a working solution but feel the use of flags is unnecessary. </p> <pre><code>flag = False del_vals = list() for key in d.iterkeys(): for i in l: if key == i: flag = True if flag == False: del_vals.append(key) flag = False for k in del_vals: d.pop(k, None) </code></pre> <p>Is there a way I can write it to improve performance and make it cleaner, whether using generators or anything else? </p>
0
2016-08-25T18:28:48Z
39,152,779
<p>You can also use sets for achieving the same:</p> <pre><code>z={i:d[i] for i in set(d.keys()).intersection(l)} </code></pre> <p>or use the below</p> <pre><code>p={i:d[i] for i in d.keys() if i in l} </code></pre> <p>or do this</p> <pre><code>q={i:d[i] for i in set(d.keys()) &amp; set(l)} </code></pre> <p>Alternatively, you could do this as well:</p> <pre><code>for i in set(d.keys()) - set(l): del(d[i]) </code></pre>
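The in-place set-difference variant can be checked the same way (Python 3 syntax, where a dict can be passed to `set()` directly):

```python
d = {1: "bob", 2: "mary", 3: "joe"}
l = [2, 3]

# Keys present in d but not in l, found via set difference, then deleted
for k in set(d) - set(l):
    del d[k]

print(d)  # {2: 'mary', 3: 'joe'}
```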
0
2016-08-25T18:53:40Z
[ "python", "python-2.7", "dictionary" ]
ValueError: Invalid chart type given scatter
39,152,398
<p>I am new to Python and I am trying to plot a simple scatter plot for 2 stocks (Adj Close). For some reason I am unable to generate a scatter plot (when I remove the <code>kind='scatter'</code> argument, the chart runs as expected, but is a line chart). Here is my code:</p> <pre><code>from pandas.io.data import DataReader from datetime import datetime import matplotlib.pyplot as plt #inputs symbols = ['SPY', 'QQQ'] startDate = datetime(2013,1,1) endDate = datetime(2016,12,31) #get data from yahoo instrument = DataReader(symbols, 'yahoo', startDate, endDate) #isolate column close = instrument['Adj Close'] def compute_daily_returns(df): daily_returns = (df / df.shift(1)) - 1 return daily_returns dlyRtns = compute_daily_returns(close) xPlt = dlyRtns['SPY'] yPlt = dlyRtns['QQQ'] dlyRtns.plot(kind='scatter', x=xPlt, y=yPlt) plt.show() </code></pre> <p>and here is the resulting error message (any ideas on what I am missing?):</p> <blockquote> <p>Traceback (most recent call last): File "C:/Users/sferrom/PycharmProjects/untitled2/scatterPlot.py", line 27, in dlyRtns.plot(kind='scatter', x=xPlt, y=yPlt) File "C:\Python27\lib\site-packages\pandas\tools\plotting.py", line 1537, in plot_frame raise ValueError('Invalid chart type given %s' % kind) ValueError: Invalid chart type given scatter</p> </blockquote> <p>Process finished with exit code 1</p>
0
2016-08-25T18:28:56Z
39,154,004
<p>"Create" <code>scatter</code> plot from <code>line</code> chart if <code>line</code> works:</p> <pre><code>dlyRtns.plot(x=xPlt, y=yPlt, marker='o', linewidth=0) </code></pre>
1
2016-08-25T20:13:42Z
[ "python", "python-2.7", "pandas", "matplotlib", "plot" ]
Python - BaseHTTPServer - Overriding "handle_timeout" method
39,152,411
<p>I'm currently working with Python's <code>BaseHTTPServer</code> and I need to override the <code>handle_timeout()</code> method of the server.</p> <p>According to Python's documentation (<a href="https://docs.python.org/2/library/basehttpserver.html" rel="nofollow">https://docs.python.org/2/library/basehttpserver.html</a>), BaseHTTPServer is a subclass of TCPServer. TCPServer implements the method <code>handle_timeout</code> (<a href="https://docs.python.org/2/library/socketserver.html#SocketServer.BaseServer.handle_timeout" rel="nofollow">https://docs.python.org/2/library/socketserver.html#SocketServer.BaseServer.handle_timeout</a>) which I want to override.</p> <p>My code looks currently like this:</p> <pre><code>class newServer(BaseHTTPServer.HTTPServer): def handle_timeout(self): print "Timeout!" class RequestHandler(BaseHTTPServer.BaseHTTPRequestHandler): def setup(self): self.timeout=2 BaseHTTPServer.BaseHTTPRequestHandler.setup(self) if __name__ == '__main__': server=newServer httpd = server((HOST_NAME, PORT_NUMBER), RequestHandler) print time.asctime(), "Server Starts - %s:%s" % (HOST_NAME, PORT_NUMBER) try: httpd.serve_forever() except KeyboardInterrupt: pass httpd.server_close() print time.asctime(), "Server Stops - %s:%s" % (HOST_NAME, PORT_NUMBER) </code></pre> <p>The timeout itself works, I receive the following message from the server:</p> <pre><code>Request timed out: timeout('timed out',) </code></pre> <p>The problem is, that this is not the message I want to print. Any help with my problem?</p>
0
2016-08-25T18:29:41Z
39,228,920
<p>For anyone interested, after trying a few things out I finally came to a conclusion:</p> <p>Starting from this file (<a href="https://svn.python.org/projects/python/trunk/Lib/BaseHTTPServer.py" rel="nofollow">https://svn.python.org/projects/python/trunk/Lib/BaseHTTPServer.py</a>), I copied the code of <code>BaseHTTPRequestHandler</code>'s <code>handle_one_request</code> method and overrode its last few lines to finally make my code work.</p> <p>All my other attempts (overriding it in the server class, catching exceptions in the request handler or in the <code>main</code> method) did not seem to work.</p> <p>Hope this helps somebody out someday.</p>
0
2016-08-30T13:31:01Z
[ "python", "timeout", "basehttpserver" ]
Function that returns an equation
39,152,420
<p>I am writing a script to calculate the definite integral of an equation. I'm writing a helper function that would take in coefficients as parameters and return a function of x.</p> <pre><code>def eqn(x, k, c, a): return ((k*x + c**(1-a)) </code></pre> <p>Next, I define a function that calculates the definite integral, using quad imported from scipy:</p> <pre><code>from scipy.integrate import quad def integral(eqn, c_i, y_i): integral_i, integral_err = quad(eqn, c_i, y_i) print integral_i </code></pre> <p>Then I call the function by passing in parameters</p> <pre><code>k = calc_k(7511675,1282474,0,38,2) eqn = carbon_path_eqn(x, k, 7511675, 2) carbon_path_def_int(eqn,0,38) </code></pre> <p>However, I get an error saying that 'name x is not defined'. I understand that <code>x</code> isn't defined globally, but I'm wondering how I can write a helper function, that takes in parameters, and still returns a function of x that can be used in <code>quad</code>?</p> <p>Thank you!</p> <p>PS - <a href="http://stackoverflow.com/users/4317531/bpachev">@bpachev</a>, this is a follow up from the other post</p>
2
2016-08-25T18:30:12Z
39,184,239
<p>The mistake here is that the function 'eqn' does not return a function, it returns the <em>value</em> of a function at some point x, given parameters k,c,a.</p> <p>quad should be passed a function (in your case, eqn) where the first argument (in your case, x) is assumed to be the variable over which the function is integrated. You also need to pass quad a tuple of the remaining parameters (in your case (k,c,a)) and two limits (in your case, c_i,y_i). In other words, call quad like this:</p> <pre><code>quad(eqn,c_i,y_i,args=(k,c,a)) </code></pre> <p>This is all explained in the scipy documentation <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html</a>.</p>
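Putting it together, a minimal runnable sketch (parameter values are made up, and the unbalanced parenthesis in the question's `eqn` is read as `k*x + c**(1-a)`):

```python
from scipy.integrate import quad

def eqn(x, k, c, a):
    return k * x + c ** (1 - a)

# Integrate eqn over x from 0 to 1; k, c, a are forwarded via args.
result, err = quad(eqn, 0, 1, args=(2.0, 4.0, 2.0))

# Closed form: k*x**2/2 + x/c evaluated from 0 to 1 = 1 + 0.25
print(result)  # 1.25
```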
0
2016-08-27T18:07:51Z
[ "python", "python-2.7", "scipy" ]
Function that returns an equation
39,152,420
<p>I am writing a script to calculate the definite integral of an equation. I'm writing a helper function that would take in coefficients as parameters and return a function of x.</p> <pre><code>def eqn(x, k, c, a): return ((k*x + c**(1-a)) </code></pre> <p>Next, I define a function that calculates the definite integral, using quad imported from scipy:</p> <pre><code>from scipy.integrate import quad def integral(eqn, c_i, y_i): integral_i, integral_err = quad(eqn, c_i, y_i) print integral_i </code></pre> <p>Then I call the function by passing in parameters</p> <pre><code>k = calc_k(7511675,1282474,0,38,2) eqn = carbon_path_eqn(x, k, 7511675, 2) carbon_path_def_int(eqn,0,38) </code></pre> <p>However, I get an error saying that 'name x is not defined'. I understand that <code>x</code> isn't defined globally, but I'm wondering how I can write a helper function, that takes in parameters, and still returns a function of x that can be used in <code>quad</code>?</p> <p>Thank you!</p> <p>PS - <a href="http://stackoverflow.com/users/4317531/bpachev">@bpachev</a>, this is a follow up from the other post</p>
2
2016-08-25T18:30:12Z
39,195,020
<p>This is not what you asked. However, as someone else has mentioned, sympy would probably make your life much easier. For instance, suppose you need to be able to evaluate integrals of functions of <strong>x</strong> such as f in the image below, where <strong>a</strong> and <strong>b</strong> are arbitrary constants. Here's how you can use sympy to do that.</p> <p>Define the function, integrate it with respect to <strong>x</strong> and save the result, then evaluate the result for values of <strong>a</strong>, <strong>b</strong> and <strong>x</strong>.</p> <p><a href="http://i.stack.imgur.com/Ul4K5.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Ul4K5.jpg" alt="sample use of sympy"></a></p>
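For readers who cannot see the image, the approach it illustrates looks roughly like this (a sketch; the function and the numeric values are illustrative, not taken from the image):

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
f = a * x + b                 # a function of x with symbolic constants a, b

F = sp.integrate(f, x)        # indefinite integral: a*x**2/2 + b*x

# Definite integral over [0, 2], then substitute numbers for a and b
value = sp.integrate(f, (x, 0, 2)).subs({a: 3, b: 1})
print(value)  # 8
```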
0
2016-08-28T19:31:12Z
[ "python", "python-2.7", "scipy" ]
python: how to explain the following codes
39,152,577
<p>Tested on my local machine:</p> <pre><code>Python 2.7.3 (default, Jun 22 2015, 19:33:41) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; q=[2,3] &gt;&gt;&gt; p=[1,q,4] &gt;&gt;&gt; p[1].append('test') &gt;&gt;&gt; q [2, 3, 'test'] &gt;&gt;&gt; hex(id(q)) '0x7fabfa5c2b90' &gt;&gt;&gt; &gt;&gt;&gt; &gt;&gt;&gt; hex(id(p)) '0x7fabfa5c2b48' &gt;&gt;&gt; hex(id(p[1])) '0x7fabfa5c2b90' &gt;&gt;&gt; &gt;&gt;&gt; &gt;&gt;&gt; p.append(q) &gt;&gt;&gt; p [1, [2, 3, 'test'], 4, [2, 3, 'test']] &gt;&gt;&gt; p[1].append('test2') &gt;&gt;&gt; p [1, [2, 3, 'test', 'test2'], 4, [2, 3, 'test', 'test2']] &gt;&gt;&gt; </code></pre> <p>At beginning, I thought when generating p, a copy of q is copied into p. </p> <p><strong>Any document can help understand the above behaviour? I have no idea why python did this? and In which cases, this behaviour will happen?</strong> </p> <p>Thanks</p>
0
2016-08-25T18:40:22Z
39,152,683
<p>Well, I can explain. Your <code>p</code> holds a reference to <code>q</code>, not a copy. So if you edit <code>p[1]</code>, you are really modifying <code>q</code>, because both names lead to the same list object.</p> <p><strong>Any document can help understand the above behaviour?</strong> <a href="https://docs.python.org/2/reference/datamodel.html" rel="nofollow">https://docs.python.org/2/reference/datamodel.html</a></p> <p><strong>I have no idea why python did this?</strong> </p> <p>So that you have both ways of doing things. By reference to the object (you don't pass a copy of <code>q</code> into <code>p</code>, only a reference):</p> <pre><code>&gt;&gt;&gt; q=[2,3] &gt;&gt;&gt; p=[1,q,4] &gt;&gt;&gt; p[1].append('test') &gt;&gt;&gt; q [2, 3, 'test'] </code></pre> <p>and by copy (you pass a copy of <code>q</code> into <code>p</code>, not a reference):</p> <pre><code>&gt;&gt;&gt; q=[2,3] &gt;&gt;&gt; p=[1,q[:],4] &gt;&gt;&gt; p[1].append('test') &gt;&gt;&gt; q [2, 3] </code></pre> <p><strong>In which cases, this behaviour will happen?</strong></p> <p>I think the examples above show it.</p>
1
2016-08-25T18:47:55Z
[ "python" ]
python: how to explain the following codes
39,152,577
<p>Tested on my local machine:</p> <pre><code>Python 2.7.3 (default, Jun 22 2015, 19:33:41) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; q=[2,3] &gt;&gt;&gt; p=[1,q,4] &gt;&gt;&gt; p[1].append('test') &gt;&gt;&gt; q [2, 3, 'test'] &gt;&gt;&gt; hex(id(q)) '0x7fabfa5c2b90' &gt;&gt;&gt; &gt;&gt;&gt; &gt;&gt;&gt; hex(id(p)) '0x7fabfa5c2b48' &gt;&gt;&gt; hex(id(p[1])) '0x7fabfa5c2b90' &gt;&gt;&gt; &gt;&gt;&gt; &gt;&gt;&gt; p.append(q) &gt;&gt;&gt; p [1, [2, 3, 'test'], 4, [2, 3, 'test']] &gt;&gt;&gt; p[1].append('test2') &gt;&gt;&gt; p [1, [2, 3, 'test', 'test2'], 4, [2, 3, 'test', 'test2']] &gt;&gt;&gt; </code></pre> <p>At beginning, I thought when generating p, a copy of q is copied into p. </p> <p><strong>Any document can help understand the above behaviour? I have no idea why python did this? and In which cases, this behaviour will happen?</strong> </p> <p>Thanks</p>
0
2016-08-25T18:40:22Z
39,152,694
<p>When you append <code>q</code> to <code>p</code>, you are not creating a copy of <code>q</code>, you are actually appending a reference to the object that the name <code>q</code> currently points to. Therefore, when you append to <code>q</code> (or in this case <code>p[1]</code>, which points to the same object), it will append to the single object that those two references point to. If you want to insert a <em>copy</em> of <code>q</code>, you can use slicing like so:</p> <pre><code>p=[1,q[:],4] </code></pre> <p>or</p> <pre><code>p.append(q[:]) </code></pre> <p>This will create a new anonymous list that you can append to without affecting the original or any other reference to that same object that <code>q</code> points to.</p> <p>Here's an example:</p> <pre><code>&gt;&gt;&gt; q = [2, 3] &gt;&gt;&gt; p = [1, q[:], 4] # include a copy of q using [:] &gt;&gt;&gt; p[1].append('test') # append to copy &gt;&gt;&gt; p [1, [2, 3, 'test'], 4] # 'test' is in p[1] &gt;&gt;&gt; q [2, 3] # but not in q &gt;&gt;&gt; p = [1, q, 4] # include q itself in p (no [:]) &gt;&gt;&gt; p[1].append('test') &gt;&gt;&gt; p [1, [2, 3, 'test'], 4] # test appears in p[1] &gt;&gt;&gt; q [2, 3, 'test'] # and also in q </code></pre> <p>See the docs for lists <a class='doc-link' href="http://stackoverflow.com/documentation/python/209/list#t=20160825190558726918">here</a></p>
3
2016-08-25T18:48:45Z
[ "python" ]
python: how to explain the following codes
39,152,577
<p>Tested on my local machine:</p> <pre><code>Python 2.7.3 (default, Jun 22 2015, 19:33:41) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; q=[2,3] &gt;&gt;&gt; p=[1,q,4] &gt;&gt;&gt; p[1].append('test') &gt;&gt;&gt; q [2, 3, 'test'] &gt;&gt;&gt; hex(id(q)) '0x7fabfa5c2b90' &gt;&gt;&gt; &gt;&gt;&gt; &gt;&gt;&gt; hex(id(p)) '0x7fabfa5c2b48' &gt;&gt;&gt; hex(id(p[1])) '0x7fabfa5c2b90' &gt;&gt;&gt; &gt;&gt;&gt; &gt;&gt;&gt; p.append(q) &gt;&gt;&gt; p [1, [2, 3, 'test'], 4, [2, 3, 'test']] &gt;&gt;&gt; p[1].append('test2') &gt;&gt;&gt; p [1, [2, 3, 'test', 'test2'], 4, [2, 3, 'test', 'test2']] &gt;&gt;&gt; </code></pre> <p>At beginning, I thought when generating p, a copy of q is copied into p. </p> <p><strong>Any document can help understand the above behaviour? I have no idea why python did this? and In which cases, this behaviour will happen?</strong> </p> <p>Thanks</p>
0
2016-08-25T18:40:22Z
39,153,121
<p>If you are new to Python you may run into issues like this.</p> <p>In Python, if you say </p> <pre><code> a = [10] </code></pre> <p>and then say </p> <pre><code> b = a </code></pre> <p>what happens? <code>b</code> does not get a copy of the list; both names now point to the same list object, so any change made through <code>a</code> is visible through <code>b</code> as well. That is why you are facing this issue.</p> <p>If you don't want this, you can use the copy module: </p> <pre><code> from copy import copy b = copy(a) </code></pre> <p>Now changing the value of <code>a</code> no longer affects <code>b</code>.</p> <p><a href="http://www.python-course.eu/deep_copy.php" rel="nofollow">For any reference regarding the same topic</a> </p>
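A runnable illustration of the difference between aliasing and copying:

```python
from copy import copy

a = [10]
b = a            # b is another name for the same list object
a.append(20)
print(b)         # [10, 20] -- the change shows through b

c = copy(a)      # c is a (shallow) copy: a new list object
a.append(30)
print(c)         # [10, 20] -- unaffected by further changes to a
print(a is b, a is c)  # True False
```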
1
2016-08-25T19:15:51Z
[ "python" ]
Pandas Merge - put all join-column data under one output column instead of two?
39,152,595
<p>I have two CSV files, with the following schemas:</p> <p>CSV1 columns:</p> <pre><code>"Id","First","Last","Email","Company" </code></pre> <p>CSV2 columns:</p> <pre><code>"PersonId","FirstName","LastName","Em","FavoriteFood" </code></pre> <p>If I load them each into a Pandas DataFrame and do <code>newdf = df1.merge(df2, how='outer', left_on=['Last', 'First'], right_on=['LastName','FirstName'])</code></p> <p>Then a CSV export of the joined DataFrame has a schema of:</p> <pre><code>"Id","First","Last","Email","Company","PersonId","FirstName","LastName","Em","FavoriteFood" </code></pre> <ul> <li>All the rows that were only in CSV1 have a first name printed under "First."</li> <li>All the rows that were only in CSV2 have a first name printed under "FirstName."</li> <li>All the rows that were in both CSV file have a first name <em>(the exact same value - which is to be expected, since it was a "join on" value)</em> printed under both columns.</li> <li>Same problem for "Last" &amp; "LastName."</li> </ul> <p>What I'd like is an output schema more like this:</p> <pre><code>"Id","First","Last","Email","Company","PersonId","Em","FavoriteFood" </code></pre> <ul> <li>It should have all of the "first names" under the column "First" (and equivalent for "Last").</li> </ul> <p>Most relational database software I'm familiar with does this <em>(the left-side join-column names win the naming war)</em>. Does Pandas have a syntax for instructing it to do so?</p> <p>I can do <code>df1.merge(df2.rename(columns = {'LastName':'Last', 'FirstName':'First'}), how='outer', on=['Last', 'First'])</code>, but stylistically, it drives me crazy to hard-code the same column-names twice in my source code. It's more to fix if I change the column names in the CSV files.</p>
0
2016-08-25T18:41:42Z
39,152,701
<p>One way to do it would be to merge the same way and then drop the columns you'd like to remove:</p> <pre><code>newdf.drop(['LastName','FirstName'], axis=1, inplace=True) </code></pre>
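A small sketch with toy frames (column names from the question; the row data is invented):

```python
import pandas as pd

df1 = pd.DataFrame({'Id': [1], 'First': ['Ann'], 'Last': ['Lee'],
                    'Email': ['a@x.com'], 'Company': ['Acme']})
df2 = pd.DataFrame({'PersonId': [9], 'FirstName': ['Ann'], 'LastName': ['Lee'],
                    'Em': ['a@y.com'], 'FavoriteFood': ['pie']})

newdf = df1.merge(df2, how='outer',
                  left_on=['Last', 'First'], right_on=['LastName', 'FirstName'])
# Drop the duplicated join columns from the right-hand frame
newdf = newdf.drop(['LastName', 'FirstName'], axis=1)

print(list(newdf.columns))
# ['Id', 'First', 'Last', 'Email', 'Company', 'PersonId', 'Em', 'FavoriteFood']
```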
0
2016-08-25T18:49:14Z
[ "python", "join", "merge", "rename" ]
Selenium with Python: First instance of the element is identified but the next occurance is ElementNotVisibleException
39,152,629
<p>I have the following Selenium Test for Python/Django application:</p> <pre><code>class EmailRecordsTest(StaticLiveServerTestCase): def test_can_store_email_and_retrieve_it_later(self): self.browser.get(self.live_server_url) emailbox = self.browser.find_element_by_xpath("//form[@class='pma-subscribe-form']/input[1]") self.assertEqual(emailbox.get_attribute("placeholder"), 'Enter your Email') print("tested until here") print("The placeholder: ", emailbox.get_attribute("placeholder")) print(emailbox) emailbox.send_keys('vio@mesmerizing.com') </code></pre> <p>First occurance of emailbox is clearly identified as seen from the print runs and assert Equal for placeholder. The last instance of emailbox.send_keys throws following error:</p> <pre><code> selenium.common.exceptions.ElementNotVisibleException: Message: Element is not currently visible and so may not be interacted with </code></pre> <p>Cannot find why the same element become Not Visible when using with send_keys.</p> <p>The Html code being tested is as below:</p> <pre><code>&lt;!-- Start footer --&gt; &lt;footer id="pma-footer"&gt; &lt;!-- start footer top --&gt; &lt;div class="pma-footer-top"&gt; &lt;div class="container"&gt; &lt;div class="pma-footer-top-area"&gt; &lt;div class="row"&gt; &lt;div class="col-lg-3 col-md-3 col-sm-3"&gt; &lt;div class="pma-footer-widget"&gt; &lt;h4&gt;News letter&lt;/h4&gt; &lt;p&gt;Get latest update, news &amp; offers&lt;/p&gt; &lt;form class="pma-subscribe-form"&gt; &lt;input id="subscribe-email" type="email" placeholder="Enter your Email"&gt; &lt;button class="btn btn-danger btn-md" type="submit"&gt;Subscribe!&lt;/button&gt; &lt;/form&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;!-- end footer top --&gt; </code></pre> <p>Kindly help.</p>
1
2016-08-25T18:44:32Z
39,152,842
<p>Actually, <code>find_element</code> returns any element that is present in the <code>DOM</code>, whether it is visible or not, and you can read attributes from such an element. <code>send_keys</code>, however, performs an action on the element, and Selenium only performs actions on visible elements. So before acting on the element, make sure it is visible by using <code>WebDriverWait</code>, as below:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC wait = WebDriverWait(driver, 10) emailbox = wait.until(EC.visibility_of_element_located((By.ID, "subscribe-email"))) # do all your other work before send_keys # now use send_keys emailbox.send_keys('vio@mesmerizing.com') </code></pre> <p><strong>Edit:</strong> if you are still unable to interact with the element, try using <code>execute_script()</code> to set the value directly:</p> <pre><code>emailbox = wait.until(EC.presence_of_element_located((By.ID, "subscribe-email"))) # do all your other work before setting the value # now use execute_script driver.execute_script("arguments[0].value = 'vio@mesmerizing.com'", emailbox) </code></pre>
1
2016-08-25T18:57:41Z
[ "python", "django", "selenium" ]
Selenium with Python: First instance of the element is identified but the next occurance is ElementNotVisibleException
39,152,629
<p>I have the following Selenium Test for Python/Django application:</p> <pre><code>class EmailRecordsTest(StaticLiveServerTestCase): def test_can_store_email_and_retrieve_it_later(self): self.browser.get(self.live_server_url) emailbox = self.browser.find_element_by_xpath("//form[@class='pma-subscribe-form']/input[1]") self.assertEqual(emailbox.get_attribute("placeholder"), 'Enter your Email') print("tested until here") print("The placeholder: ", emailbox.get_attribute("placeholder")) print(emailbox) emailbox.send_keys('vio@mesmerizing.com') </code></pre> <p>First occurance of emailbox is clearly identified as seen from the print runs and assert Equal for placeholder. The last instance of emailbox.send_keys throws following error:</p> <pre><code> selenium.common.exceptions.ElementNotVisibleException: Message: Element is not currently visible and so may not be interacted with </code></pre> <p>Cannot find why the same element become Not Visible when using with send_keys.</p> <p>The Html code being tested is as below:</p> <pre><code>&lt;!-- Start footer --&gt; &lt;footer id="pma-footer"&gt; &lt;!-- start footer top --&gt; &lt;div class="pma-footer-top"&gt; &lt;div class="container"&gt; &lt;div class="pma-footer-top-area"&gt; &lt;div class="row"&gt; &lt;div class="col-lg-3 col-md-3 col-sm-3"&gt; &lt;div class="pma-footer-widget"&gt; &lt;h4&gt;News letter&lt;/h4&gt; &lt;p&gt;Get latest update, news &amp; offers&lt;/p&gt; &lt;form class="pma-subscribe-form"&gt; &lt;input id="subscribe-email" type="email" placeholder="Enter your Email"&gt; &lt;button class="btn btn-danger btn-md" type="submit"&gt;Subscribe!&lt;/button&gt; &lt;/form&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;!-- end footer top --&gt; </code></pre> <p>Kindly help.</p>
1
2016-08-25T18:44:32Z
39,252,938
<p>Another option that worked in this case is to scroll to the specific element (which was at the bottom of the page) and then use <code>send_keys</code>:</p> <pre><code>emailbox = self.browser.find_element_by_xpath("//form[@class='mu-subscribe-form']/input[1]") self.browser.execute_script("window.scrollTo(0, document.body.scrollHeight);") emailbox.send_keys('vio@mesmerizing.com') </code></pre>
0
2016-08-31T14:54:47Z
[ "python", "django", "selenium" ]
Building a django queryset over foreign key and onetoone
39,152,669
<p>I have some two models (in different apps/models.py files) related with User:</p> <pre><code>class Profile(models.Model): user = models.OneToOneField(settings.AUTH_USER_MODEL, null=False) ... class CourseStudent(models.Model): user = models.ForeignKey(settings.AUTH_USER_MODEL) semester = models.ForeignKey(Semester) ... </code></pre> <p>I am trying to get a queryset of all profiles that have at least one course in the current semester.</p> <p>How can I generate a queryset of profiles, where <code>profile.user</code> has at least one <code>CourseStudent</code> instance, and filtered so that <code>coursestudent.semester=current_semester</code>? </p> <p>Since a student may have multiple courses in the semester, the duplicates also need to be removed (unique profiles only in the queryset)</p> <p>EDIT: I am using postgresql and trying to figure out if I need to use <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#distinct" rel="nofollow">distinct with an argument</a>.</p>
0
2016-08-25T18:47:09Z
39,157,339
<p>Not tested. Maybe you should try</p> <pre><code>class Profile(models.Model): user = models.OneToOneField(settings.AUTH_USER_MODEL, null=False) ... class CourseStudent(models.Model): user = models.ForeignKey(settings.AUTH_USER_MODEL, related_name="course_student") semester = models.ForeignKey(Semester) Profile.objects.filter(user__course_student__semester=current_semester).distinct() </code></pre> <p>Filtering on the reverse relation already requires at least one matching <code>CourseStudent</code>, and <code>distinct()</code> removes the duplicates produced by students with several courses.</p>
0
2016-08-26T02:07:47Z
[ "python", "django", "postgresql", "foreign-keys", "django-queryset" ]
Python XOR Encryption program sometimes doesn't work
39,152,671
<p>I am trying to make a simple xor encryption program in python and what I have now is working almost fine, only sometimes it doesn't and I just can't figure out why. For example, if I input 'hello' and the key '1234' it will encrypt it to YW_X^ and if I then decrypt this with the same key it will print 'hello'. But if I change the key to 'qwer' the encrypted message is something like '^Y^R ^^^^' and if I try to decrypt it, 'heERQWERoi' comes out. This is the code:</p> <pre><code>from itertools import cycle, izip choice = int(raw_input('Press 1 to encrypt, 2 to decrypt. ')) if choice == 1: message = raw_input('Enter message to be encrypted: ') privatekey = raw_input('Enter a private key: ') encrypted_message = ''.join(chr(ord(c)^ord(k)) for c,k in izip(message, cycle(privatekey))) print 'Encrypted message:' print encrypted_message elif choice == 2: todecrypt = raw_input('Enter a message to be decrypted: ') otherprivatekey = raw_input('Enter the private key: ') decrypted_message = ''.join(chr(ord(c)^ord(k)) for c,k in izip(todecrypt, cycle(otherprivatekey))) print 'Decrypted message:' print decrypted_message </code></pre> <p>I have no idea what is wrong with it so I would really appreciate some help, thank you!</p>
2
2016-08-25T18:47:18Z
39,152,838
<p>It's probably working fine, but you are getting characters which you may not be able to re-input into your terminal directly as they don't correspond to the ordinarily inputtable ASCII characters. In particular, with the key <code>qwer</code>, the values of <code>ord</code> become <code>[25, 18, 9, 30, 30]</code>, which you may have a hard time inputting (cf. <a href="http://www.asciitable.com/" rel="nofollow">this table</a>).</p> <p>The similar problem will not occur if you use <code>1234</code> as a key as in that case the values are <code>[89, 87, 95, 88, 94]</code> which correspond to "normal" characters.</p>
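For what it's worth, the byte values this answer mentions are easy to inspect directly. A quick sketch (Python 3 syntax, though the question uses Python 2; `ord` behaves the same for these inputs) showing why `qwer` produces unprintable output while `1234` does not:

```python
from itertools import cycle

def xor_ordinals(message, key):
    """Return the ordinal values produced by XOR-ing message with a repeating key."""
    return [ord(c) ^ ord(k) for c, k in zip(message, cycle(key))]

# 'qwer' yields control characters (all below 32, i.e. unprintable)
print(xor_ordinals("hello", "qwer"))   # [25, 18, 9, 30, 30]

# '1234' happens to land in the printable ASCII range
print(xor_ordinals("hello", "1234"))   # [89, 87, 95, 88, 94] -> "YW_X^"
```

Any key whose XOR with the plaintext falls below 32 will give characters you cannot reliably copy back into a terminal.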
0
2016-08-25T18:57:25Z
[ "python", "python-2.7", "encryption", "cryptography", "xor" ]
Python XOR Encryption program sometimes doesn't work
39,152,671
<p>I am trying to make a simple xor encryption program in python and what I have now is working almost fine, only sometimes it doesn't and I just can't figure out why. For example, if I input 'hello' and the key '1234' it will encrypt it to YW_X^ and if I then decrypt this with the same key it will print 'hello'. But if I change the key to 'qwer' the encrypted message is something like '^Y^R ^^^^' and if I try to decrypt it, 'heERQWERoi' comes out. This is the code:</p> <pre><code>from itertools import cycle, izip choice = int(raw_input('Press 1 to encrypt, 2 to decrypt. ')) if choice == 1: message = raw_input('Enter message to be encrypted: ') privatekey = raw_input('Enter a private key: ') encrypted_message = ''.join(chr(ord(c)^ord(k)) for c,k in izip(message, cycle(privatekey))) print 'Encrypted message:' print encrypted_message elif choice == 2: todecrypt = raw_input('Enter a message to be decrypted: ') otherprivatekey = raw_input('Enter the private key: ') decrypted_message = ''.join(chr(ord(c)^ord(k)) for c,k in izip(todecrypt, cycle(otherprivatekey))) print 'Decrypted message:' print decrypted_message </code></pre> <p>I have no idea what is wrong with it so I would really appreciate some help, thank you!</p>
2
2016-08-25T18:47:18Z
39,152,847
<p>Your script is printing non-printing characters, which sometimes can't be copy/pasted. You could encode the ciphertext into a format that uses only the characters <code>abcdef0123456789</code>, which lets you display it without issue:</p> <pre><code>print encrypted_message.encode('hex') </code></pre> <p>You can then decode it when the user types it in once more:</p> <pre><code>todecrypt = raw_input('Enter a message to be decrypted: ').decode('hex') </code></pre>
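Note that `str.encode('hex')` / `str.decode('hex')` are Python 2 only. A rough Python 3 equivalent of the same round trip, using `bytes.hex()` and `bytes.fromhex()` (the `latin-1` codec is used here so every 0-255 byte value survives the str/bytes conversion):

```python
from itertools import cycle

def xor_cipher(data, key):
    """XOR data with a repeating key; the same call encrypts and decrypts."""
    return "".join(chr(ord(c) ^ ord(k)) for c, k in zip(data, cycle(key)))

ciphertext = xor_cipher("hello", "qwer")
encoded = ciphertext.encode("latin-1").hex()   # safe to display: only 0-9a-f
print(encoded)

decoded = bytes.fromhex(encoded).decode("latin-1")
print(xor_cipher(decoded, "qwer"))             # round-trips back to "hello"
```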
0
2016-08-25T18:57:54Z
[ "python", "python-2.7", "encryption", "cryptography", "xor" ]
Queue doesn't process all elements when there are many threads
39,152,680
<p>I have noticed that when I have many threads pulling elements from a queue, there are less elements processed than the number that I put into the queue. This is sporadic but seems to happen somewhere around half the time when I run the following code.</p> <pre><code>#!/bin/env python from threading import Thread import httplib, sys from Queue import Queue import time import random concurrent = 500 num_jobs = 500 results = {} def doWork(): while True: result = None try: result = curl(q.get()) except Exception as e: print "Error when trying to get from queue: {0}".format(str(e)) if results.has_key(result): results[result] += 1 else: results[result] = 1 try: q.task_done() except: print "Called task_done when all tasks were done" def curl(ourl): result = 'all good' try: time.sleep(random.random() * 2) except Exception as e: result = "error: %s" % str(e) except: result = str(sys.exc_info()[0]) finally: return result or "None" print "\nRunning {0} jobs on {1} threads...".format(num_jobs, concurrent) q = Queue() for i in range(concurrent): t = Thread(target=doWork) t.daemon = True t.start() for x in range(num_jobs): q.put("something") try: q.join() except KeyboardInterrupt: sys.exit(1) total_responses = 0 for result in results: num_responses = results[result] print "{0}: {1} time(s)".format(result, num_responses) total_responses += num_responses print "Number of elements processed: {0}".format(total_responses) </code></pre>
3
2016-08-25T18:47:44Z
39,152,814
<p>Tim Peters hit the nail on the head in the comments. The issue is that the tracking of results is threaded and isn't protected by any sort of mutex. That allows something like this to happen:</p> <pre><code>thread A gets result: "all good" thread A checks results[result] thread A sees no such key thread A suspends # &lt;-- before counting its result thread B gets result: "all good" thread B checks results[result] thread B sees no such key thread B sets results['all good'] = 1 thread C ... thread C sets results['all good'] = 2 thread D ... thread A resumes # &lt;-- and remembers it needs to count its result still thread A sets results['all good'] = 1 # resetting previous work! </code></pre> <p>A more typical workflow might have a results queue that the main thread is listening on.</p> <pre><code>workq = queue.Queue() resultsq = queue.Queue() make_work(into=workq) do_work(source=workq, respond_on=resultsq) # do_work would do respond_on.put_nowait(result) instead of # return result results = {} while True: try: result = resultsq.get(timeout=1) except queue.Empty: break # maybe? You'd probably want to retry a few times results[result] = results.get(result, 0) + 1 </code></pre>
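If a separate results queue feels heavyweight, the minimal fix is to guard the read-modify-write with a lock. A self-contained sketch (plain `threading`, Python 3; names are mine) showing that the count then comes out exact:

```python
import threading

results = {}
lock = threading.Lock()

def count(result, n):
    for _ in range(n):
        with lock:  # serialize the read-modify-write on the shared dict
            results[result] = results.get(result, 0) + 1

threads = [threading.Thread(target=count, args=("all good", 10000)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results["all good"])  # always 80000 with the lock; can be lower without it
```

Removing the `with lock:` line reproduces the lost-update interleaving described above.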
1
2016-08-25T18:55:34Z
[ "python", "multithreading", "python-2.7" ]
How to have multiple groups in Python statsmodels linear mixed effects model?
39,152,729
<p>I am trying to use the Python statsmodels linear mixed effects model to fit a model that has two random intercepts, e.g. two groups. I cannot figure out how to initialize the model so that I can do this. </p> <p>Here's the example. I have data that looks like the following (taken from <a href="http://www.bodowinter.com/tutorial/politeness_data.csv" rel="nofollow">here</a>): </p> <pre><code>subject gender scenario attitude frequency F1 F 1 pol 213.3 F1 F 1 inf 204.5 F1 F 2 pol 285.1 F1 F 2 inf 259.7 F1 F 3 pol 203.9 F1 F 3 inf 286.9 F1 F 4 pol 250.8 F1 F 4 inf 276.8 </code></pre> <p>I want to make a linear mixed effects model with two random effects -- one for the subject group and one for the scenario group. I am trying to do this: </p> <pre><code>import statsmodels.api as sm model = sm.MixedLM.from_formula("frequency ~ attitude + gender", data, groups=data[['subject', 'scenario']]) result = model.fit() print result.summary() </code></pre> <p>I keep getting this error: </p> <pre><code>LinAlgError: Singular matrix </code></pre> <p>It works fine in R. When I use <code>lme4</code> in R with the formula-based rendering it fits just fine: </p> <pre><code>politeness.model = lmer(frequency ~ attitude + gender + (1|subject) + (1|scenario), data=politeness) </code></pre> <p>I don't understand why this is happening. It works when I use any one random effect/group, e.g.</p> <pre><code>model = sm.MixedLM.from_formula("frequency ~ attitude + gender", data, groups=data['subject']) </code></pre> <p>Then I get: </p> <pre><code> Mixed Linear Model Regression Results =============================================================== Model: MixedLM Dependent Variable: frequency No. Observations: 83 Method: REML No. Groups: 6 Scale: 850.9456 Min. group size: 13 Likelihood: -393.3720 Max. group size: 14 Converged: Yes Mean group size: 13.8 --------------------------------------------------------------- Coef. Std.Err. z P&gt;|z| [0.025 0.975] --------------------------------------------------------------- Intercept 256.785 15.226 16.864 0.000 226.942 286.629 attitude[T.pol] -19.415 6.407 -3.030 0.002 -31.972 -6.858 gender[T.M] -108.325 21.064 -5.143 0.000 -149.610 -67.041 Intercept RE 603.948 23.995 =============================================================== </code></pre> <p>Alternatively, if I do: </p> <pre><code>model = sm.MixedLM.from_formula("frequency ~ attitude + gender", data, groups=data['scenario']) </code></pre> <p>This is the result I get: </p> <pre><code> Mixed Linear Model Regression Results ================================================================ Model: MixedLM Dependent Variable: frequency No. Observations: 83 Method: REML No. Groups: 7 Scale: 1110.3788 Min. group size: 11 Likelihood: -402.5003 Max. group size: 12 Converged: Yes Mean group size: 11.9 ---------------------------------------------------------------- Coef. Std.Err. z P&gt;|z| [0.025 0.975] ---------------------------------------------------------------- Intercept 256.892 8.120 31.637 0.000 240.977 272.807 attitude[T.pol] -19.807 7.319 -2.706 0.007 -34.153 -5.462 gender[T.M] -108.603 7.319 -14.838 0.000 -122.948 -94.257 Intercept RE 182.718 5.502 ================================================================ </code></pre> <p>I have no idea what's going on. I feel like I am missing something foundational in the statistics of the problem. </p>
0
2016-08-25T18:50:35Z
39,156,640
<p>You are trying to fit a model with <em>crossed random effects</em>, i.e., you want to allow for consistent variation among subjects across scenarios as well as consistent variation among scenarios across subjects. You <em>can</em> use multiple random-effects terms in statsmodels, but they must be nested. Fitting crossed (as opposed to nested) random effects requires more sophisticated algorithms, and indeed the <a href="http://statsmodels.sourceforge.net/devel/mixed_linear.html" rel="nofollow">statsmodels documentation</a> says (as of 25 Aug 2016, emphasis added):</p> <blockquote> <p>Some limitations of the current implementation are that it does not support structure more complex on the residual errors (they are always homoscedastic), and <strong>it does not support crossed random effects</strong>. We hope to implement these features for the next release.</p> </blockquote> <p>As far as I can see, your choices are (1) fall back to a nested model (i.e. fit the model as though either scenario is nested within subject or <em>vice versa</em> - or try both and see if the difference matters); (2) fall back to <code>lme4</code>, either within R or via <a href="http://rpy2.bitbucket.org/" rel="nofollow">rpy2</a>.</p> <p>As always, you're entitled to a full refund of the money you paid to use statsmodels ...</p>
1
2016-08-26T00:23:01Z
[ "python", "statsmodels", "lme4", "mixed-models" ]
How to have multiple groups in Python statsmodels linear mixed effects model?
39,152,729
<p>I am trying to use the Python statsmodels linear mixed effects model to fit a model that has two random intercepts, e.g. two groups. I cannot figure out how to initialize the model so that I can do this. </p> <p>Here's the example. I have data that looks like the following (taken from <a href="http://www.bodowinter.com/tutorial/politeness_data.csv" rel="nofollow">here</a>): </p> <pre><code>subject gender scenario attitude frequency F1 F 1 pol 213.3 F1 F 1 inf 204.5 F1 F 2 pol 285.1 F1 F 2 inf 259.7 F1 F 3 pol 203.9 F1 F 3 inf 286.9 F1 F 4 pol 250.8 F1 F 4 inf 276.8 </code></pre> <p>I want to make a linear mixed effects model with two random effects -- one for the subject group and one for the scenario group. I am trying to do this: </p> <pre><code>import statsmodels.api as sm model = sm.MixedLM.from_formula("frequency ~ attitude + gender", data, groups=data[['subject', 'scenario']]) result = model.fit() print result.summary() </code></pre> <p>I keep getting this error: </p> <pre><code>LinAlgError: Singular matrix </code></pre> <p>It works fine in R. When I use <code>lme4</code> in R with the formula-based rendering it fits just fine: </p> <pre><code>politeness.model = lmer(frequency ~ attitude + gender + (1|subject) + (1|scenario), data=politeness) </code></pre> <p>I don't understand why this is happening. It works when I use any one random effect/group, e.g.</p> <pre><code>model = sm.MixedLM.from_formula("frequency ~ attitude + gender", data, groups=data['subject']) </code></pre> <p>Then I get: </p> <pre><code> Mixed Linear Model Regression Results =============================================================== Model: MixedLM Dependent Variable: frequency No. Observations: 83 Method: REML No. Groups: 6 Scale: 850.9456 Min. group size: 13 Likelihood: -393.3720 Max. group size: 14 Converged: Yes Mean group size: 13.8 --------------------------------------------------------------- Coef. Std.Err. z P&gt;|z| [0.025 0.975] --------------------------------------------------------------- Intercept 256.785 15.226 16.864 0.000 226.942 286.629 attitude[T.pol] -19.415 6.407 -3.030 0.002 -31.972 -6.858 gender[T.M] -108.325 21.064 -5.143 0.000 -149.610 -67.041 Intercept RE 603.948 23.995 =============================================================== </code></pre> <p>Alternatively, if I do: </p> <pre><code>model = sm.MixedLM.from_formula("frequency ~ attitude + gender", data, groups=data['scenario']) </code></pre> <p>This is the result I get: </p> <pre><code> Mixed Linear Model Regression Results ================================================================ Model: MixedLM Dependent Variable: frequency No. Observations: 83 Method: REML No. Groups: 7 Scale: 1110.3788 Min. group size: 11 Likelihood: -402.5003 Max. group size: 12 Converged: Yes Mean group size: 11.9 ---------------------------------------------------------------- Coef. Std.Err. z P&gt;|z| [0.025 0.975] ---------------------------------------------------------------- Intercept 256.892 8.120 31.637 0.000 240.977 272.807 attitude[T.pol] -19.807 7.319 -2.706 0.007 -34.153 -5.462 gender[T.M] -108.603 7.319 -14.838 0.000 -122.948 -94.257 Intercept RE 182.718 5.502 ================================================================ </code></pre> <p>I have no idea what's going on. I feel like I am missing something foundational in the statistics of the problem. </p>
0
2016-08-25T18:50:35Z
39,170,955
<p>Multiple or crossed random intercepts can be fit using variance components, which are implemented in a different way from the one-group random effects.</p> <p>I couldn't find an example, and the documentation seems to be only partially updated.</p> <p>The unit tests contain an example using the MixedLM formula interface:</p> <p><a href="https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/tests/test_lme.py#L284" rel="nofollow">https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/tests/test_lme.py#L284</a></p>
1
2016-08-26T16:37:00Z
[ "python", "statsmodels", "lme4", "mixed-models" ]
How can I organize case-insensitive text and the material following it?
39,152,790
<p>I'm very new to Python so it'd be very appreciated if this could be explained as in-depth as possible.</p> <p>If I have some text like this on a text file:</p> <pre><code>matthew : 60 kg MaTtHew : 5 feet mAttheW : 20 years old maTThEw : student MaTTHEW : dog owner </code></pre> <p>How can I make a piece of code that can write something like...</p> <pre><code>Matthew : 60 kg , 5 feet , 20 years old , student , dog owner </code></pre> <p>...by only gathering information from the text file?</p>
0
2016-08-25T18:54:10Z
39,153,529
<pre><code>def test_data(): # This is obviously the source data as a multi-line string constant. source = \ """ matthew : 60 kg MaTtHew : 5 feet mAttheW : 20 years old maTThEw : student MaTTHEW : dog owner bob : 70 kg BoB : 6 ft """ # Split on newline. This will return a list of lines like ["matthew : 60 kg", "MaTtHew : 5 feet", etc] return source.split("\n") def append_pair(d, p): k, v = p if k in d: d[k] = d[k] + [v] else: d[k] = [v] return d if __name__ == "__main__": # Do a list comprehension. For every line in the test data, split by ":", strip off leading/trailing whitespace, # and convert to lowercase. This will yield lists of lists. # This is mostly a list of key/value size-2-lists pairs = [[x.strip().lower() for x in line.split(":", 2)] for line in test_data()] # Filter the lists in the main list that do not have a size of 2. This will yield a list of key/value pairs like: # [["matthew", "60 kg"], ["matthew", "5 feet"], etc] cleaned_pairs = [p for p in pairs if len(p) == 2] # This will iterate the list of key/value pairs and send each to append_pair, which will either append to # an existing key, or create a new key. d = reduce(append_pair, cleaned_pairs, {}) # Now, just print out the resulting dictionary. for k, v in d.items(): print("{}: {}".format(k, ", ".join(v))) </code></pre>
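The same grouping can be expressed more directly with `collections.defaultdict`, which avoids the explicit key-exists check. A sketch using the sample lines from the question:

```python
from collections import defaultdict

lines = [
    "matthew : 60 kg",
    "MaTtHew : 5 feet",
    "mAttheW : 20 years old",
    "maTThEw : student",
    "MaTTHEW : dog owner",
]

grouped = defaultdict(list)
for line in lines:
    name, _, value = line.partition(":")      # split on the first ':' only
    grouped[name.strip().lower()].append(value.strip())

for name, values in grouped.items():
    print("{} : {}".format(name.capitalize(), " , ".join(values)))
# Matthew : 60 kg , 5 feet , 20 years old , student , dog owner
```

To read from a file instead of the hard-coded list, replace `lines` with the stripped lines of the open file.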
0
2016-08-25T19:42:36Z
[ "python" ]
How can I organize case-insensitive text and the material following it?
39,152,790
<p>I'm very new to Python so it'd be very appreciated if this could be explained as in-depth as possible.</p> <p>If I have some text like this on a text file:</p> <pre><code>matthew : 60 kg MaTtHew : 5 feet mAttheW : 20 years old maTThEw : student MaTTHEW : dog owner </code></pre> <p>How can I make a piece of code that can write something like...</p> <pre><code>Matthew : 60 kg , 5 feet , 20 years old , student , dog owner </code></pre> <p>...by only gathering information from the text file?</p>
0
2016-08-25T18:54:10Z
39,153,988
<pre><code>import sys # There's a number of assumptions I have to make based on your description. # I'll try to point those out. # Should be self-explanatory. something like: "C:\Users\yourname\yourfile" path_to_file = "put_your_path_here" # open a file for reading. The 'r' indicates read-only infile = open(path_to_file, 'r') # reads in the file line by line and strips the "invisible" endline character readLines = [line.strip() for line in infile] # make sure we close the file infile.close() # An Associative array. Does not use normal numerical indexing. # instead, in our case, we'll use a string(the name) to index into. # At a given name index(AKA key) we'll save the attributes about that person. names = dict() # iterate through each line we read in from the file # each line in this loop will be stored in the variable # item for that iteration. for item in readLines: #assuming that your file has a strict format: # name : attribute index = item.find(':') # if there was a ':' found then continue if index != -1: # grab only the name of the person and convert the string to all lowercase name = item[0:index].lower() # see if our associative array already has that person if names.has_key(name): # if that person has already been indexed add the new attribute # this assumes there are no duplicates so I don't check for them. names[name].append(item[index+1:len(item)]) else: # if that person was not in the array then add them. # we're adding a list at that index to store their attributes. names[name] = list() # append the attribute to the list. # the len() function tells us how long the string 'item' is # offsetting the index by 1 so we don't capture the ':' names[name].append(item[index+1:len(item)]) else: # there was no ':' found in the line so skip it pass # iterate through keys (names) we found. for name in names: # write it to stdout. I am using this because the "print" built-in to python # always ends with a new line. This way I can print the name and then # iterate through the attributes associated with them sys.stdout.write(name + " : ") # iterate through attributes for attribute in names[name]: sys.stdout.write(attribute + ", ") # end each person with a new line. sys.stdout.write('\r\n') </code></pre>
0
2016-08-25T20:12:49Z
[ "python" ]
Python - Problems creating a list of URLs using BeautifulSoup
39,152,827
<p>I am trying to make a Python Crawler using BeautifulSoup, but I receive an error that I am trying to write a non-String or other character buffer type to a file. From examining the program output, I found that my list contains many items that are None. In addition to having None, I also have a lot of images and things that are not links but are image links inside of my list. How can I only add the URLs to my list?</p> <pre><code> import urllib from BeautifulSoup import * try: with open('url_file', 'r') as f: url_list = [line.rstrip('\n') for line in f] f.close() with open('old_file', 'r') as x: old_list = [line.rstrip('\n') for line in f] f.close() except: url_list = list() old_list = list() #for Testing url_list.append("http://www.dinamalar.com/") count = 0 for item in url_list: try: count = count + 1 if count &gt; 5: break html = urllib.urlopen(item).read() soup = BeautifulSoup(html) tags = soup('a') for tag in tags: if tag in old_list: continue else: url_list.append(tag.get('href', None)) old_list.append(item) #for testing print url_list except: continue with open('url_file', 'w') as f: for s in url_list: f.write(s) f.write('\n') with open('old_file', 'w') as f: for s in old_list: f.write(s) </code></pre>
1
2016-08-25T18:56:41Z
39,154,825
<p>First off, use <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow">bs4</a> not the no longer maintained <em>BeautifulSoup3</em>, your error is because not all anchors have hrefs so you are trying to write <em>None</em> which causes your error, use <em>find_all</em> and set <em>href=True</em> so you only find anchor tags that have a href attribute:</p> <pre><code>soup = BeautifulSoup(html) tags = soup.find_all("a", href=True) </code></pre> <p>Also never use blanket except statements, always catch the errors you expect and at least print them when they do occur. As far as <em>I also have a lot of images and things that are not links</em> goes, if you want to filter for certain links then you have to be more specific, either look for the tags that contain what you are interested in if possible, use a regex <code>href=re.compile("some_pattern")</code> or use a css selectors:</p> <pre><code># hrefs starting with something "a[href^=something]" # hrefs that contain something "a[href*=something]" # hrefs ending with something "a[href$=something]" </code></pre> <p>Only you know the structure of the html and what you want so what you use is completely up to you to decide.</p>
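A small self-contained illustration of the `href=True` filter recommended above (assumes the `bs4` package is installed; the sample HTML is mine):

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = """
<a href="http://example.com/page">link</a>
<a name="anchor-without-href">not a link</a>
<img src="pic.png">
"""

soup = BeautifulSoup(html, "html.parser")
hrefs = [tag["href"] for tag in soup.find_all("a", href=True)]
print(hrefs)  # only anchors that actually carry a href attribute
```

The anchor without a `href` and the `img` tag are silently skipped, so nothing `None` ever reaches the output list.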
1
2016-08-25T21:13:38Z
[ "python", "list", "beautifulsoup" ]
Print all characters in a string
39,152,848
<p>Is there a way to print all characters in python, even ones which usually aren't printed?</p> <p>For example</p> <pre><code>&gt;&gt;&gt;print_all("skip line") skip\nline </code></pre>
3
2016-08-25T18:57:57Z
39,153,030
<p>Looks like you want <a href="https://docs.python.org/3/library/functions.html#repr" rel="nofollow"><code>repr()</code></a></p> <pre><code>&gt;&gt;&gt; """skip ... line""" 'skip\nline' &gt;&gt;&gt; &gt;&gt;&gt; print(repr("""skip ... line""")) 'skip\nline' &gt;&gt;&gt; print(repr("skip line")) 'skip\tline' </code></pre> <p>So, your function could be </p> <pre><code>print_all = lambda s: print(repr(s)) </code></pre> <p>And for Python 2, you need <code>from __future__ import print_function</code></p>
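A quick check of what `repr()` does with an embedded newline (Python 3): the escape sequence is shown literally and the result carries surrounding quotes.

```python
s = "skip\nline"   # contains a real newline
print(s)           # prints on two lines
print(repr(s))     # prints one line: 'skip\nline' (with quotes)
```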
5
2016-08-25T19:09:55Z
[ "python", "string" ]
Print all characters in a string
39,152,848
<p>Is there a way to print all characters in python, even ones which usually aren't printed?</p> <p>For example</p> <pre><code>&gt;&gt;&gt;print_all("skip line") skip\nline </code></pre>
3
2016-08-25T18:57:57Z
39,153,380
<p>Even easier, format it with <code>"%r"</code>, which inserts the <code>repr()</code> of the string, so escape sequences are shown literally (note the surrounding quotes in the output):</p> <pre><code>print("%r" % """skip line""") 'skip\nline' </code></pre> <p>Additionally, use <code>!r</code> in a <code>format</code> call:</p> <pre><code>print("{0!r}".format("""skip line""")) </code></pre> <p>for similar results.</p>
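Both spellings produce the same `repr()`-based output; a quick check (Python 3):

```python
s = "skip\nline"
print("%r" % s)            # 'skip\nline'
print("{0!r}".format(s))   # same output via str.format
```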
0
2016-08-25T19:32:25Z
[ "python", "string" ]
Merge events temporally close to one another in numpy (pandas)
39,152,864
<p>I would like to simplify a list of start and stop times. When the time between the stop of one event and the start of the next is small (within an allowable gap), I'd like to combine the rows. The following is a simplification of my data and what I would like as output:</p> <pre><code>import numpy as np import pandas as pd start_time = [ 1, 7, 20, 22, 27, 35] stop_time = [ 5, 9, 22, 26, 30, 40] events = pd.DataFrame({'start_time': start_time, 'stop_time': stop_time}) allowable_gap = 2.0 desired_start_time = [ 1, 20, 35] desired_stop_time = [ 9, 30, 40] desired_events = pd.DataFrame({'start_time':desired_start_time, 'stop_time':desired_stop_time}) </code></pre> <p>I have no requirement that I must use Pandas. However, I need to at least use numpy. The number of events is on the order of 1e6.</p> <p>Thanks for any implementations or guidance. I know that part of my problem is that I don't "get" Pandas.</p> <p>My usage likely isn't relevant to the solution. As background, I'm collecting a large number of events and then plotting them using matplotlib.pyplot. As the output is complex, the best format I have found is .svg. IE usually renders fine but takes an exceptionally long time to do so, and I hope to reduce the number of lines that it has to draw. I would love to view time series in a better way but that is outside the scope of this question.</p>
2
2016-08-25T18:58:40Z
39,153,539
<p>This solution uses <a href="http://pandas.pydata.org/pandas-docs/version/0.18.0/generated/pandas.DataFrame.iterrows.html" rel="nofollow">DataFrame.iterrows()</a> function.<br> I made this assumption:</p> <ul> <li>start_time &lt;= stop_time for all events</li> </ul> <pre class="lang-python prettyprint-override"><code>import numpy as np import pandas as pd start_time = [ 1, 7, 20, 22, 27, 35] stop_time = [ 5, 9, 22, 26, 30, 40] events = pd.DataFrame({'start_time': start_time, 'stop_time': stop_time}) allowable_gap = 2.0 desired_start_time = [] desired_stop_time = [] start = None end = None for index, row in events.iterrows(): if start == None and end == None: start = row['start_time'] end = row['stop_time'] else: if end + allowable_gap &gt;= row['start_time']: end = row['stop_time'] else: desired_start_time.append(start) desired_stop_time.append(end) start = row['start_time'] end = row['stop_time'] desired_start_time.append(start) desired_stop_time.append(end) print(desired_start_time) print(desired_stop_time) </code></pre> <p></p> <h3>Output:</h3> <blockquote> <p>[1, 20, 35]<br> [9, 30, 40]</p> </blockquote>
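The merging rule itself can be unit-tested without pandas; a plain-Python sketch of the same gap logic (the function name is mine, and it assumes events arrive sorted by start time, as in the question's data):

```python
def merge_intervals(starts, stops, gap=2.0):
    """Merge consecutive [start, stop] events separated by at most `gap`."""
    merged_starts, merged_stops = [], []
    for start, stop in zip(starts, stops):
        if merged_stops and start - merged_stops[-1] <= gap:
            merged_stops[-1] = stop          # extend the previous merged event
        else:
            merged_starts.append(start)      # begin a new merged event
            merged_stops.append(stop)
    return merged_starts, merged_stops

starts, stops = merge_intervals([1, 7, 20, 22, 27, 35], [5, 9, 22, 26, 30, 40])
print(starts, stops)  # [1, 20, 35] [9, 30, 40]
```

The condition `start - merged_stops[-1] <= gap` is the same test as the answer's `end + allowable_gap >= row['start_time']`, just rearranged.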
0
2016-08-25T19:42:54Z
[ "python", "pandas", "numpy", "svg" ]
Merge events temporally close to one another in numpy (pandas)
39,152,864
<p>I would like to simplify a list of start and stop times. When the time between the stop of one event and the start of the next is small (within an allowable gap), I'd like to combine the rows. The following is a simplification of my data and what I would like as output:</p> <pre><code>import numpy as np import pandas as pd start_time = [ 1, 7, 20, 22, 27, 35] stop_time = [ 5, 9, 22, 26, 30, 40] events = pd.DataFrame({'start_time': start_time, 'stop_time': stop_time}) allowable_gap = 2.0 desired_start_time = [ 1, 20, 35] desired_stop_time = [ 9, 30, 40] desired_events = pd.DataFrame({'start_time':desired_start_time, 'stop_time':desired_stop_time}) </code></pre> <p>I have no requirement that I must use Pandas. However, I need to at least use numpy. The number of events is on the order of 1e6.</p> <p>Thanks for any implementations or guidance. I know that part of my problem is that I don't "get" Pandas.</p> <p>My usage likely isn't relevant to the solution. As background, I'm collecting a large number of events and then plotting them using matplotlib.pyplot. As the output is complex, the best format I have found is .svg. IE usually renders fine but takes an exceptionally long time to do so, and I hope to reduce the number of lines that it has to draw. I would love to view time series in a better way but that is outside the scope of this question.</p>
2
2016-08-25T18:58:40Z
39,155,712
<p>A bit more efficient way to do that:</p> <pre><code>In [106]: (events.groupby((events.start_time - events.stop_time.shift() &gt; allowable_gap).cumsum()) .....: .agg({'start_time':'min', 'stop_time':'max'})[['start_time','stop_time']]) Out[106]: start_time stop_time 0 1 9 1 20 30 2 35 40 </code></pre> <p>Timing against 60K rows DF:</p> <pre><code>In [129]: events = pd.concat([events] * 10**4, ignore_index=True) In [130]: events.shape Out[130]: (60000, 2) In [131]: %paste def f(): desired_start_time = [] desired_stop_time = [] start = None end = None for index, row in events.iterrows(): if start == None and end == None: start = row['start_time'] end = row['stop_time'] else: if end + allowable_gap &gt;= row['start_time']: end = row['stop_time'] else: desired_start_time.append(start) desired_stop_time.append(end) start = row['start_time'] end = row['stop_time'] desired_start_time.append(start) desired_stop_time.append(end) ## -- End pasted text -- In [132]: %timeit f() 1 loop, best of 3: 16.1 s per loop In [133]: %%timeit .....: (events.groupby((events.start_time - events.stop_time.shift() &gt; allowable_gap).cumsum()) .....: .agg({'start_time':'min', 'stop_time':'max'})[['start_time','stop_time']]) .....: 100 loops, best of 3: 16.9 ms per loop </code></pre> <p><strong>Conclusion:</strong> "looping" solution is approx. 1000 times slower</p> <p>Another timing for 6M rows DF:</p> <pre><code>In [153]: events = pd.concat([events] * 10**6, ignore_index=True) In [154]: events.shape Out[154]: (6000000, 2) In [155]: %%timeit .....: (events.groupby((events.start_time - events.stop_time.shift() &gt; allowable_gap).cumsum()) .....: .agg({'start_time':'min', 'stop_time':'max'})[['start_time','stop_time']]) .....: 1 loop, best of 3: 1.49 s per loop </code></pre> <p>given and desired DFs:</p> <pre><code>In [98]: events Out[98]: start_time stop_time 0 1 5 1 7 9 2 20 22 3 22 26 4 27 30 5 35 40 In [99]: desired_events Out[99]: start_time stop_time 0 1 9 1 20 30 2 35 40 </code></pre> <p>Explanation:</p> <pre><code>In [107]: events.start_time - events.stop_time.shift() Out[107]: 0 NaN 1 2.0 2 11.0 3 0.0 4 1.0 5 5.0 dtype: float64 In [108]: (events.start_time - events.stop_time.shift() &gt; allowable_gap) Out[108]: 0 False 1 False 2 True 3 False 4 False 5 True dtype: bool In [109]: (events.start_time - events.stop_time.shift() &gt; allowable_gap).cumsum() Out[109]: 0 0 1 0 2 1 3 1 4 1 5 2 dtype: int32 </code></pre>
1
2016-08-25T22:28:52Z
[ "python", "pandas", "numpy", "svg" ]
What's the best way to serialize more than one model to json in Django 1.6?
39,152,899
<p>I have 3 models show as below.</p> <pre><code> class DocumentClass(models.Model): text = models.CharField(max_length=100) class DocumentGroup(models.Model): text = models.CharField(max_length=100) documentclass = models.ForeignKey(DocumentClass) class DocumentType(models.Model): text = models.CharField(max_length=100) documentgroup = models.ForeignKey(DocumentGroup) </code></pre> <p>And my goal is something like this:</p> <pre><code>[ { 'pk': 1, 'model': 'DocumentClass', 'fields':{ 'text':'DocumentClass1', 'documentgroup': [ { 'pk': 1, 'model': 'DocumentGroup' 'field': { 'text':'DocumentGroup1' } } ] } }, { 'pk': 2, 'model': 'DocumentClass', 'fields':{ 'text':'DocumentClass2' } } ] </code></pre> <p>I usually serialize a model by </p> <pre><code>jsonstr = serializers.serialize("json", DocumentType.objects.all()) </code></pre> <p>But for my goal. I have no idea. As the title. What is the best way to do that?</p> <p>Edit: The relation of above models look like:</p> <pre><code>DocumentClass1 |-DocumentGroup1 | |-DocumentType1 | |-DocumentType2 | |-... |-DocumentGroup2 | |-... DocumentClass2 |-DocumentGroup... | |-DocumentType... </code></pre>
1
2016-08-25T19:01:20Z
39,153,307
<h2>If The Models Are Related</h2> <p>This will require some more customized serialization, for which you will have to use <a href="http://www.django-rest-framework.org/api-guide/relations/#nested-relationships" rel="nofollow">Django Rest Framework's serializers</a>.</p> <p>Try subclassing <code>ModelSerializer</code>:</p> <pre><code>class DocumentTypeSerializer(serializers.ModelSerializer): class Meta: model = DocumentType class DocumentGroupSerializer(serializers.ModelSerializer): types = DocumentTypeSerializer(many=True) class Meta: model = DocumentGroup class DocumentClassSerializer(serializers.ModelSerializer): groups = DocumentGroupSerializer(many=True) class Meta: model = DocumentClass queryset = DocumentClass.objects.all() serializer = DocumentClassSerializer(queryset, many=True) json = JSONRenderer().render(serializer.data) </code></pre> <h2>If The Models Are Not Related</h2> <p>According to the <a href="https://docs.djangoproject.com/en/1.10/topics/serialization/#serializing-data" rel="nofollow" title="Django docs">Django docs</a>, the 2nd argument to <code>serialize()</code> can be "any iterator that yields Django model instances".</p> <p>Now you'll need to pass your 3 kinds of instances as an iterator. It seems like Python's <code>itertools.chain</code> is the preferred method, as voted <a href="http://stackoverflow.com/a/434755/245915">here</a>.</p> <p>So your call would look something like:</p> <pre><code>instances = list(chain(DocumentClass.objects.all(), DocumentGroup.objects.all(), DocumentType.objects.all())) jsonstr = serializers.serialize("json", instances) </code></pre>
1
2016-08-25T19:28:18Z
[ "python", "json", "serialization", "django-views" ]
What's the best way to serialize more than one model to json in Django 1.6?
39,152,899
<p>I have 3 models show as below.</p> <pre><code> class DocumentClass(models.Model): text = models.CharField(max_length=100) class DocumentGroup(models.Model): text = models.CharField(max_length=100) documentclass = models.ForeignKey(DocumentClass) class DocumentType(models.Model): text = models.CharField(max_length=100) documentgroup = models.ForeignKey(DocumentGroup) </code></pre> <p>And my goal is something like this:</p> <pre><code>[ { 'pk': 1, 'model': 'DocumentClass', 'fields':{ 'text':'DocumentClass1', 'documentgroup': [ { 'pk': 1, 'model': 'DocumentGroup' 'field': { 'text':'DocumentGroup1' } } ] } }, { 'pk': 2, 'model': 'DocumentClass', 'fields':{ 'text':'DocumentClass2' } } ] </code></pre> <p>I usually serialize a model by </p> <pre><code>jsonstr = serializers.serialize("json", DocumentType.objects.all()) </code></pre> <p>But for my goal. I have no idea. As the title. What is the best way to do that?</p> <p>Edit: The relation of above models look like:</p> <pre><code>DocumentClass1 |-DocumentGroup1 | |-DocumentType1 | |-DocumentType2 | |-... |-DocumentGroup2 | |-... DocumentClass2 |-DocumentGroup... | |-DocumentType... </code></pre>
1
2016-08-25T19:01:20Z
39,164,684
<p>As I follow by @nofinator's answer in "If The Models Are Related" section everything looks fine until <code>json = JSONRenderer().render(serializer.data)</code>I got above errors. How can I eliminate them?</p> <pre><code>AttributeError: Got AttributeError when attempting to get a value for field `documenttypes` on serializer `DocumentGroupSerializer`. The serializer field might be named incorrectly and not match any attribute or key on the `DocumentGroups` instance. Original exception text was: 'DocumentGroups' object has no attribute 'documenttypes'. </code></pre> <p>Here is my serialize.py</p> <pre><code>from rest_framework import serializers from .models import * class DocumentTypeSerializer(serializers.ModelSerializer): class Meta: model = DocumentTypes fields = ('id', 'text') class DocumentGroupSerializer(serializers.ModelSerializer): documenttypes = DocumentTypeSerializer(many=True) class Meta: model = DocumentGroups fields = ('id', 'text', 'documenttypes') class DocumentClassSerializer(serializers.ModelSerializer): documentgroups = DocumentGroupSerializer(many=True) class Meta: model = DocumentClasses fields = ('id', 'text', 'documentgroups') </code></pre> <p>And this is my models.py</p> <pre><code>from django.db import models class DocumentClasses(models.Model): text = models.CharField(max_length=100) class DocumentGroups(models.Model): text = models.CharField(max_length=100) documentclass = models.ForeignKey(DocumentClasses) class DocumentTypes(models.Model): text = models.CharField(max_length=100) documentgroup = models.ForeignKey(DocumentGroups) </code></pre> <p>Update. When I try to add <code>read_only=True</code> my serializer.py look like</p> <pre><code>from rest_framework import serializers from .models import * class DocumentTypeSerializer(serializers.ModelSerializer): class Meta: model = DocumentTypes fields = ('id', 'text', 'documentgroup') class DocumentGroupSerializer(serializers.ModelSerializer): documenttypes = DocumentTypeSerializer(many=True, read_only=True) class Meta: model = DocumentGroups fields = ('id', 'text', 'documenttypes') class DocumentClassSerializer(serializers.ModelSerializer): documentgroups = DocumentGroupSerializer(many=True, read_only=True) class Meta: model = DocumentClasses fields = ('id', 'text', 'documentgroups') </code></pre> <p>Then try:</p> <pre><code>queryset = DocumentClasses.objects.all() serializer = DocumentClassSerializer(queryset, many=True) json = JSONRenderer().render(serializer.data) </code></pre> <p>I got only <code>'[{"id":1,"text":"class1"}]'</code> But this is my expected:</p> <pre><code>[ { "id":1, "text":"class1", "documentgroups":[ { "id":1, "text":"group1", "documenttypes":[ { "id":1, "text":"type1" } ] } ] } ] </code></pre>
0
2016-08-26T10:57:35Z
[ "python", "json", "serialization", "django-views" ]
What's the best way to serialize more than one model to json in Django 1.6?
39,152,899
<p>I have 3 models show as below.</p> <pre><code> class DocumentClass(models.Model): text = models.CharField(max_length=100) class DocumentGroup(models.Model): text = models.CharField(max_length=100) documentclass = models.ForeignKey(DocumentClass) class DocumentType(models.Model): text = models.CharField(max_length=100) documentgroup = models.ForeignKey(DocumentGroup) </code></pre> <p>And my goal is something like this:</p> <pre><code>[ { 'pk': 1, 'model': 'DocumentClass', 'fields':{ 'text':'DocumentClass1', 'documentgroup': [ { 'pk': 1, 'model': 'DocumentGroup' 'field': { 'text':'DocumentGroup1' } } ] } }, { 'pk': 2, 'model': 'DocumentClass', 'fields':{ 'text':'DocumentClass2' } } ] </code></pre> <p>I usually serialize a model by </p> <pre><code>jsonstr = serializers.serialize("json", DocumentType.objects.all()) </code></pre> <p>But for my goal. I have no idea. As the title. What is the best way to do that?</p> <p>Edit: The relation of above models look like:</p> <pre><code>DocumentClass1 |-DocumentGroup1 | |-DocumentType1 | |-DocumentType2 | |-... |-DocumentGroup2 | |-... DocumentClass2 |-DocumentGroup... | |-DocumentType... </code></pre>
1
2016-08-25T19:01:20Z
39,169,543
<p>Hi @nofinator and other django rookie like me! Thanks @nofinator for the great answer! And for django rookies like me. If you want to use django-rest-framework to serialize a queryset <strong>DO NOT FORGET</strong> to use the <strong>related_name</strong> attribute in the model like this:</p> <pre><code>class DocumentClasses(models.Model): text = models.CharField(max_length=100) class DocumentGroups(models.Model): text = models.CharField(max_length=100) documentclass = models.ForeignKey(DocumentClasses, related_name='documentgroups') class DocumentTypes(models.Model): text = models.CharField(max_length=100) documentgroup = models.ForeignKey(DocumentGroups, related_name='documenttypes') </code></pre> <p>@nofinator and related_name made my day! Million thanks!</p>
0
2016-08-26T15:14:36Z
[ "python", "json", "serialization", "django-views" ]
django: instantiating AdminSite changes not reflected
39,152,958
<p>I am trying to change the templates of admin site, and tried overriding by creating a local template for admin, but failed without knowing why: <a href="http://stackoverflow.com/questions/39132187/django-how-do-i-actually-override-admin-site-template?noredirect=1#comment65613565_39132187">template way</a>. So now I tried to create a instance of AdminSite and make changes from there, but still failed. I have:</p> <pre><code>urls.py urlpatterns = [ url(r'^myadmin/', admin.site.urls), admin.py from django.contrib import admin from .models import Equipment, Calibration, Flag, Tests from django.contrib.admin import AdminSite class MyAdminSite(AdminSite): site_header = "Equipment Calibration Database" site_title = "Equipment Calibration Database" index_title = "Equipment Calibration Database" # Register your models here. admin_site = MyAdminSite(name = 'myadmin') admin.site.register(Equipment) admin.site.register(Calibration) admin.site.register(Flag) admin.site.register(Tests) </code></pre> <p>Now when I go to <a href="http://127.0.0.1:8000/myadmin/" rel="nofollow">http://127.0.0.1:8000/myadmin/</a>, I still find that none of the text I specified in MyAdminSite is in effect, I still see "Django administration" and "Site administration". </p> <p>This is all acting really weird and helps are greatly welcome</p>
1
2016-08-25T19:05:54Z
39,153,065
<p>To change the text in the admin, all you need to add to the <code>admin.py</code> is as follows:</p> <pre><code>admin.site.site_header = "Equipment Calibration Database" admin.site.site_title = "Equipment Calibration Database" admin.site.index_title = "Equipment Calibration Database" </code></pre>
1
2016-08-25T19:12:01Z
[ "python", "django", "django-admin" ]
1+ GET request returning error 403 to heroku python
39,152,976
<p>I'm using an api in my script. When I run the script via my own terminal, I'm successfully able to make 3 calls to the endpoint. But when I run the same script on heroku bash, only the first call is success, other two return Error 403. Here's my code</p> <pre><code>results = [] for level in levels: headers={'User-Agent': 'Mozilla/5.0'} res = requests.get(url+level,headers=headers) if res.status_code==200: res = json.loads(str(res.content)) print "success" #do something else: print "Error",str(res.status_code) return results </code></pre> <p>In my terminal the output is</p> <pre><code> success success success </code></pre> <p>In heroku bash the output is</p> <pre><code> success Error 403 Error 403 </code></pre> <p>I've also tried it without the User-Agent header but same issue persists.</p>
1
2016-08-25T19:06:59Z
39,153,243
<p>It's a permissions error. My guess is that when you run this from your browser, the first page sets various cookies, which the second and third requests require.</p> <p>A quick possible fix, <em>IF</em> that's the issue, is to use requests' Session() objects. This stores cookies and sends them back on subsequent requests, kinda like a normal browser would.</p> <pre><code>results = [] mySession = requests.Session() for level in levels: headers={'User-Agent': 'Mozilla/5.0'} res = mySession.get(url+level,headers=headers) if res.status_code==200: res = json.loads(str(res.content)) print "success" #do something else: print "Error",str(res.status_code) return results </code></pre>
0
2016-08-25T19:23:28Z
[ "python", "heroku" ]
How to sort numpy array by absolute value of a column?
39,153,007
<p>What I have now:</p> <pre><code>import numpy as np # 1) Read CSV with headers data = np.genfromtxt("big.csv", delimiter=',', names=True) # 2) Get absolute values for column in a new ndarray new_ndarray = np.absolute(data["target_column_name"]) # 3) Append column in new_ndarray to data # I'm having trouble here. Can't get hstack, concatenate, append, etc; to work # 4) Sort by new column and obtain a new ndarray data.sort(order="target_column_name_abs") </code></pre> <p>I would like:</p> <ul> <li>A solution for 3): To be able to add this new "abs" column to the original ndarray or</li> <li>Another approach to be able to sort a csv file by the absolute values of a column.</li> </ul>
1
2016-08-25T19:08:56Z
39,153,235
<p>Here is a way to do it.<br> First, let's create a sample array:</p> <pre><code>In [39]: a = (np.arange(12).reshape(4, 3) - 6) In [40]: a Out[40]: array([[-6, -5, -4], [-3, -2, -1], [ 0, 1, 2], [ 3, 4, 5]]) </code></pre> <p>Ok, lets say </p> <pre><code>In [41]: col = 1 </code></pre> <p>which is the column we want to sort by,<br> and here is the sorting code - using Python's <code>sorted</code>:</p> <pre><code>In [42]: b = sorted(a, key=lambda row: np.abs(row[col])) </code></pre> <p>Let's convert b from list to array, and we have:</p> <pre><code>In [43]: np.array(b) Out[43]: array([[ 0, 1, 2], [-3, -2, -1], [ 3, 4, 5], [-6, -5, -4]]) </code></pre> <p>Which is the array with the rows sorted according to<br> the absolute value of column 1.</p>
1
2016-08-25T19:22:47Z
[ "python", "python-3.x", "csv", "numpy" ]
How to sort numpy array by absolute value of a column?
39,153,007
<p>What I have now:</p> <pre><code>import numpy as np # 1) Read CSV with headers data = np.genfromtxt("big.csv", delimiter=',', names=True) # 2) Get absolute values for column in a new ndarray new_ndarray = np.absolute(data["target_column_name"]) # 3) Append column in new_ndarray to data # I'm having trouble here. Can't get hstack, concatenate, append, etc; to work # 4) Sort by new column and obtain a new ndarray data.sort(order="target_column_name_abs") </code></pre> <p>I would like:</p> <ul> <li>A solution for 3): To be able to add this new "abs" column to the original ndarray or</li> <li>Another approach to be able to sort a csv file by the absolute values of a column.</li> </ul>
1
2016-08-25T19:08:56Z
39,153,314
<p>Here's a solution using <a href="http://pandas.pydata.org/" rel="nofollow">pandas</a>:</p> <pre><code>In [117]: import pandas as pd In [118]: df = pd.read_csv('test.csv') In [119]: df Out[119]: a b 0 1 -3 1 2 2 2 3 -1 3 4 4 In [120]: df['c'] = abs(df['b']) In [121]: df Out[121]: a b c 0 1 -3 3 1 2 2 2 2 3 -1 1 3 4 4 4 In [122]: df.sort_values(by='c') Out[122]: a b c 2 3 -1 1 1 2 2 2 0 1 -3 3 3 4 4 4 </code></pre>
1
2016-08-25T19:28:32Z
[ "python", "python-3.x", "csv", "numpy" ]
Python recursive function or cycle to convert string to json logical object
39,153,037
<p>I have this function:</p> <pre><code>def req_splitter(req_string): req = {} if " AND " in req_string: cond = "AND" req_splitted = req_string.split(" AND ") elif " OR " in req_string: cond = "OR" req_splitted = req_string.split(" OR ") else: cond = "AND" req_splitted = [req_string] if len(req_splitted) &gt; 1: for sub_req in req_splitted: sub_req_splitted = req_splitter(sub_req) req[cond] = list()#new_req req[cond].append(sub_req_splitted) else: req[cond] = req_splitted return req </code></pre> <p>It is intended to convert to json-logic conditions a strings like this one:</p> <pre><code>Barracks AND Tech Lab Lair OR Hive Hatchery OR Lair OR Hive Cybernetics Core AND Gateway OR Warpgate Forge AND Twilight Council AND Ground Armor 1 Spire OR Greater Spire AND Hive AND Flyer Flyer Carapace 2 Spire OR Greater Spire AND Lair OR Hive AND Flyer Attacks 1 </code></pre> <p>json_logic condition is looks like this:</p> <pre><code>{ "and": [ { "or": [ "Gateway", "Warpgate" ] }, "Cybernetics Core" ] } </code></pre> <p>How my recursive function should work to help me to split the string to the condition object like the example above?</p> <hr> <p>To help you understand the problem: </p> <p><a href="http://jsonlogic.com/" rel="nofollow">json_logic</a> is a module that checks the condition, like the dictionary you see above and returns some result, depending with what you compare it.</p> <p>And how the condition works: key-value par is one single logical statement. The key stands for logical condition. And the values in list is stands for operands. if a value itself not a list but a dictionary, it recurses.</p> <p>You can compare it to "<a href="https://en.wikipedia.org/wiki/Polish_notation" rel="nofollow">polish notation</a>"</p> <p>And last thing - AND statements has more priority than OR statements, and OR statements are always together.</p>
1
2016-08-25T19:10:12Z
39,153,634
<p>You'll need to write a simple top-down parser. The inimitable effbot wrote a <a href="http://effbot.org/zone/simple-top-down-parsing.htm" rel="nofollow">great tutorial</a> about just that kind of thing.</p> <p>Tokenizing is just splitting on the <code>r'\s+(OR|AND)\s+'</code> regex, then recognising <code>OR</code> and <code>AND</code> as operators, the rest as literals. The <code>AND</code> and <code>OR</code> token <code>.led()</code> methods could flatten out directly nested operators of the same type.</p> <p>I've implemented what is described there using a little more OOP (and not globals), and made it Python 2 and 3 compatible:</p> <pre><code>import re from functools import partial class OpAndToken(object): lbp = 10 op = 'and' def led(self, parser, left): right = parser.expression(self.lbp) operands = [] for operand in left, right: # flatten out nested operands of the same type if isinstance(operand, dict) and self.op in operand: operands.extend(operand[self.op]) else: operands.append(operand) return {self.op: operands} class OpOrToken(OpAndToken): lbp = 20 op = 'or' class LiteralToken(object): def __init__(self, value): self.value = value def nud(self): return self.value class EndToken(object): lbp = 0 class Parser(object): operators = {'AND': OpAndToken, 'OR': OpOrToken} token_pat = re.compile("\s+(AND|OR)\s+") def __init__(self, program): self.program = program self.tokens = self.tokenizer() self.token = next(self.tokens) def expression(self, rbp=0): t = self.token self.token = next(self.tokens) left = t.nud() while rbp &lt; self.token.lbp: t = self.token self.token = next(self.tokens) left = t.led(self, left) return left def tokenizer(self): for tok in self.token_pat.split(self.program): if tok in self.operators: yield self.operators[tok]() else: yield LiteralToken(tok) yield EndToken() def parse(self): return self.expression() </code></pre> <p>This parses your format into the expected output:</p> <pre><code>&gt;&gt;&gt; Parser('foo AND bar OR spam AND eggs').parse() {'and': ['foo', {'or': ['bar', 'spam']}, 'eggs']} </code></pre> <p>Demo on your input lines:</p> <pre><code>&gt;&gt;&gt; from pprint import pprint &gt;&gt;&gt; tests = '''\ ... Barracks AND Tech Lab ... Lair OR Hive ... Hatchery OR Lair OR Hive ... Cybernetics Core AND Gateway OR Warpgate ... Forge AND Twilight Council AND Ground Armor 1 ... Spire OR Greater Spire AND Hive AND Flyer Flyer Carapace 2 ... Spire OR Greater Spire AND Lair OR Hive AND Flyer Attacks 1 ... '''.splitlines() &gt;&gt;&gt; for test in tests: ... pprint(Parser(test).parse()) ... {'and': ['Barracks', 'Tech Lab']} {'or': ['Lair', 'Hive']} {'or': ['Hatchery', 'Lair', 'Hive']} {'and': ['Cybernetics Core', {'or': ['Gateway', 'Warpgate']}]} {'and': ['Forge', 'Twilight Council', 'Ground Armor 1']} {'and': [{'or': ['Spire', 'Greater Spire']}, 'Hive', 'Flyer Flyer Carapace 2']} {'and': [{'or': ['Spire', 'Greater Spire']}, {'or': ['Lair', 'Hive']}, 'Flyer Attacks 1']} </code></pre> <p>Note that for multiple <code>OR</code> or <code>AND</code> operators in a row the operands are combined.</p> <p>I'll leave adding support for <code>(...)</code> parentheses to the reader; the tutorial shows you how to do this (just make the <code>advance()</code> function a method on the <code>Parser</code> class and pass the parser in to <code>.nud()</code> calls too, or pass in the parser when creating token class instances).</p>
5
2016-08-25T19:49:53Z
[ "python", "json", "recursion", "logic", "polish-notation" ]
Is there a way to print in Python that will not output the current contents of stdin?
39,153,058
<p>I'm using select to wait for stdin or data from the server/client, but if I receive a message while typing, my current text is printed as well the received message. I may be using select incorrectly, but I need to find a way to maintain what's currently being typed without it being output with the message.</p>
-3
2016-08-25T19:11:23Z
39,153,259
<p>I haven't tried it (so I might be a bit out of my expertise) but <a href="http://rpyc.readthedocs.io/en/latest/index.html" rel="nofollow">RPyC</a> might be able to do what you are asking. In its <a href="http://rpyc.readthedocs.io/en/latest/docs/howto.html" rel="nofollow">How To</a> page they show a couple of snippets that direction the print for both host and local:</p> <pre><code>&gt;&gt;&gt; import rpyc &gt;&gt;&gt; c = rpyc.classic.connect("localhost") &gt;&gt;&gt; c.execute("print 'hi there'") # this will print on the host &gt;&gt;&gt; import sys &gt;&gt;&gt; c.modules.sys.stdout = sys.stdout &gt;&gt;&gt; c.execute("print 'hi here'") # now this will be redirected here hi here </code></pre> <p>, or:</p> <pre><code>&gt;&gt;&gt; c.execute("print 'hi there'") # printed on the server &gt;&gt;&gt; &gt;&gt;&gt; with rpyc.classic.redirected_stdio(c): ... c.execute("print 'hi here'") # printed on the client ... hi here &gt;&gt;&gt; c.execute("print 'hi there again'") # printed on the server &gt;&gt;&gt; </code></pre>
1
2016-08-25T19:24:13Z
[ "python" ]
Python Eve patch_internal set etag
39,153,106
<p>I am trying to make an <code>on_insert</code> event hook for a document from schema A, call eve's <code>patch_internal</code> to a document from schema B. </p> <p>Since the request triggering the event hook is a <code>POST</code> for A, there's no <code>If-Match</code> header with the necessary B <code>etag</code> in flask request.</p> <p>I've tried to set the value on the flask request before the <code>patch_internal</code> with <code>flask.request.if_match</code>, but seems it is not possible since it is a frozenset.</p> <p>How can I set the <code>etag</code> for the <code>patch_internal</code> call?</p> <p>In summary, the creation of a document updates other document with different schema. Can I do this?</p> <p>Thanks.</p>
1
2016-08-25T19:14:54Z
39,153,577
<p>Can't say this is the best way, but it works.</p> <p>Creating a new request context to call <code>patch_internal</code>, and setting the <code>etag</code> on the <code>environ</code> used to create it, works. Below an example:</p> <pre><code>from flask import request, current_app from eve.methods.patch import patch_internal ... new_env = request.environ new_env['HTTP_IF_MATCH'] = patched_document['_etag'] with current_app.request_context(new_env): result = patch_internal('my_document_b', {'updated_key': value}, {'_id': my_document_b['_id']}) ... </code></pre>
0
2016-08-25T19:45:28Z
[ "python", "flask", "eve" ]
Python Eve patch_internal set etag
39,153,106
<p>I am trying to make an <code>on_insert</code> event hook for a document from schema A, call eve's <code>patch_internal</code> to a document from schema B. </p> <p>Since the request triggering the event hook is a <code>POST</code> for A, there's no <code>If-Match</code> header with the necessary B <code>etag</code> in flask request.</p> <p>I've tried to set the value on the flask request before the <code>patch_internal</code> with <code>flask.request.if_match</code>, but seems it is not possible since it is a frozenset.</p> <p>How can I set the <code>etag</code> for the <code>patch_internal</code> call?</p> <p>In summary, the creation of a document updates other document with different schema. Can I do this?</p> <p>Thanks.</p>
1
2016-08-25T19:14:54Z
39,943,898
<p>You can set <code>concurrency_check=False</code> in your call to <code>patch_internal</code></p> <pre><code>from eve.methods.patch import patch_internal with app.request_context(): payload = {"value_key": "new value"} lookup = {"_id": "12ababa12..."} patch_internal("res_name", payload, **lookup, concurrency_check=False) </code></pre>
1
2016-10-09T12:58:59Z
[ "python", "flask", "eve" ]
How to apply bitwise operator to compare a list of objects
39,153,200
<p>Suppose I have a long list of objects (say, a list of numpy matrices of bool elements) <code>foo = [a, b, c]</code>, that I want to compare with some bitwise operator, to get something like <code>a | b | c</code>.</p> <p>If I could use this bitwise operation as a function, say a <code>bitwiseor</code> function, I could simply do this with <code>bitwiseor(*foo)</code>. However, I was not able to find whether the bitwise or can be written in such functional form.</p> <p>Is there some handy way to handle this kind of problem? Or should I just use a loop to compare all the elements cumulatively?</p>
2
2016-08-25T19:21:03Z
39,153,234
<p>Using the functional method in <a href="https://docs.python.org/2/library/operator.html#operator.or_" rel="nofollow"><code>operator</code></a> in combination with <a href="https://docs.python.org/2/library/functions.html#reduce" rel="nofollow"><code>reduce</code></a>:</p> <pre><code>&gt;&gt;&gt; import operator, functools &gt;&gt;&gt; functools.reduce(operator.or_, [1,2,3]) 3 </code></pre>
8
2016-08-25T19:22:45Z
[ "python", "bitwise-operators" ]
How to apply bitwise operator to compare a list of objects
39,153,200
<p>Suppose I have a long list of objects (say, a list of numpy matrices of bool elements) <code>foo = [a, b, c]</code>, that I want to compare with some bitwise operator, to get something like <code>a | b | c</code>.</p> <p>If I could use this bitwise operation as a function, say a <code>bitwiseor</code> function, I could simply do this with <code>bitwiseor(*foo)</code>. However, I was not able to find whether the bitwise or can be written in such functional form.</p> <p>Is there some handy way to handle this kind of problem? Or should I just use a loop to compare all the elements cumulatively?</p>
2
2016-08-25T19:21:03Z
39,154,848
<p><code>numpy</code> has a number of functions that support reducing along an axis. See <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.bitwise_or.html#numpy.bitwise_or" rel="nofollow"><code>numpy.bitwise_or</code></a> and <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.reduce.html#numpy.ufunc.reduce" rel="nofollow"><code>numpy.bitwise_or.reduce</code></a>.</p> <pre><code>np.bitwise_or.reduce(your_array) </code></pre>
1
2016-08-25T21:15:37Z
[ "python", "bitwise-operators" ]
Python Pandas Create Records from Complex Dictionary
39,153,266
<p>I have processed some very complex nested json objects to get the following general dictionary format:</p> <pre><code>{'key1':'value1', 'key2':'value2', 'key3':'value3', 'key4':'value4', 'key5':[['value5', 'value6', 'value7'], ['value8', 'value9', 'value10']], 'key6':[['value5', 'value6', 'value7'], ['value8', 'value9', 'value10']]} </code></pre> <p>In the list of lists, each list indicates something that should be an "individual transaction" equivalent. Each transaction shares key1, key2, key3, key4 pairs. There can be an arbitrary number of lists. I am trying to efficiently turn these into records in a pandas dataframe like the following:</p> <pre><code> key1_field, key2_field, key3_field, key4_field, key5_or_key6_field_1, key5_or_key6_field_2, key5_or_key6_field_3, key5_or_key6_indicator value1, value2, value3, value 4, value5, value6, value7, key5 value1, value2, value3, value 4, value5, value6, value7, key6 value1, value2, value3, value 4, value8, value9, value10, key5 value1, value2, value3, value 4, value8, value9, value10, key6 </code></pre> <p>Any assistance would be sincerely appreciated! It has been a challenge enough getting this to this point. Thanks!</p> <p><strong>EDIT:</strong></p> <p>As asked, I can post how I have been trying to approach this:</p> <pre><code>import pandas as pd import numpy as np d = {'key1':'value1', 'key2':'value2', 'key3':'value3', 'key4':'value4', 'key5':[['value5', 'value6', 'value7'], ['value8', 'value9', 'value10']], 'key6':[['value5', 'value6', 'value7'], ['value8', 'value9', 'value10']]} df = pd.DataFrame({k : pd.Series(v) for k, v in d.iteritems()}) </code></pre> <p>My remaining issue is that the single key values are NaN after the first row.</p> <p><a href="http://i.stack.imgur.com/mZ3HO.png" rel="nofollow"><img src="http://i.stack.imgur.com/mZ3HO.png" alt="enter image description here"></a></p>
2
2016-08-25T19:25:19Z
39,153,883
<p>One option is to read the dictionary as it is and reshape the data frame:</p> <pre><code>df = pd.DataFrame({'key1':'value1', 'key2':'value2', 'key3':'value3', 'key4':'value4', 'key5':[['value5', 'value6', 'value7'], ['value8', 'value9', 'value10']], 'key6':[['value5', 'value6', 'value7'], ['value8', 'value9', 'value10']]}) df.set_index(['key1', 'key2', 'key3', 'key4']).stack().apply(pd.Series) \ .rename(columns = lambda x: "value_" + str(x)).reset_index() # key1 key2 key3 key4 level_4 value_0 value_1 value_2 # 0 value1 value2 value3 value4 key5 value5 value6 value7 # 1 value1 value2 value3 value4 key6 value5 value6 value7 # 2 value1 value2 value3 value4 key5 value8 value9 value10 # 3 value1 value2 value3 value4 key6 value8 value9 value10 </code></pre>
2
2016-08-25T20:06:08Z
[ "python", "json", "pandas", "dictionary" ]
Python Pandas Create Records from Complex Dictionary
39,153,266
<p>I have processed some very complex nested json objects to get the following general dictionary format:</p> <pre><code>{'key1':'value1', 'key2':'value2', 'key3':'value3', 'key4':'value4', 'key5':[['value5', 'value6', 'value7'], ['value8', 'value9', 'value10']], 'key6':[['value5', 'value6', 'value7'], ['value8', 'value9', 'value10']]} </code></pre> <p>In the list of lists, each list indicates something that should be an "individual transaction" equivalent. Each transaction shares key1, key2, key3, key4 pairs. There can be an arbitrary number of lists. I am trying to efficiently turn these into records in a pandas dataframe like the following:</p> <pre><code> key1_field, key2_field, key3_field, key4_field, key5_or_key6_field_1, key5_or_key6_field_2, key5_or_key6_field_3, key5_or_key6_indicator value1, value2, value3, value 4, value5, value6, value7, key5 value1, value2, value3, value 4, value5, value6, value7, key6 value1, value2, value3, value 4, value8, value9, value10, key5 value1, value2, value3, value 4, value8, value9, value10, key6 </code></pre> <p>Any assistance would be sincerely appreciated! It has been a challenge enough getting this to this point. Thanks!</p> <p><strong>EDIT:</strong></p> <p>As asked, I can post how I have been trying to approach this:</p> <pre><code>import pandas as pd import numpy as np d = {'key1':'value1', 'key2':'value2', 'key3':'value3', 'key4':'value4', 'key5':[['value5', 'value6', 'value7'], ['value8', 'value9', 'value10']], 'key6':[['value5', 'value6', 'value7'], ['value8', 'value9', 'value10']]} df = pd.DataFrame({k : pd.Series(v) for k, v in d.iteritems()}) </code></pre> <p>My remaining issue is that the single key values are NaN after the first row.</p> <p><a href="http://i.stack.imgur.com/mZ3HO.png" rel="nofollow"><img src="http://i.stack.imgur.com/mZ3HO.png" alt="enter image description here"></a></p>
2
2016-08-25T19:25:19Z
39,174,123
<p>Try this:</p> <pre><code>pd.DataFrame({k : pd.Series(v) for k, v in d.iteritems()}).ffill() </code></pre>
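A runnable sketch of this answer using the dictionary from the question (note: on Python 3 the dict-comprehension uses `d.items()` instead of the Python 2 `d.iteritems()`); `ffill()` propagates the scalar values down to the rows created by the list-valued keys:

```python
import pandas as pd

d = {'key1': 'value1', 'key2': 'value2', 'key3': 'value3', 'key4': 'value4',
     'key5': [['value5', 'value6', 'value7'], ['value8', 'value9', 'value10']],
     'key6': [['value5', 'value6', 'value7'], ['value8', 'value9', 'value10']]}

# d.items() replaces d.iteritems() on Python 3
df = pd.DataFrame({k: pd.Series(v) for k, v in d.items()}).ffill()
print(df)
```

The single-valued keys now repeat on every row instead of turning into NaN after the first row.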
1
2016-08-26T20:20:21Z
[ "python", "json", "pandas", "dictionary" ]
How to add a Group Knob in nuke using python api
39,153,282
<p>Using tcl script in nuke, adding a group knob to a node looks like this</p> <pre><code>addUserKnob {20 start_group l "My Group" n 1} ... add other knobs addUserKnob {20 end_group l endGroup n -1} </code></pre> <p>It appears that the Group knob uses the same knob type as the Tab knob, except that it uses the <code>n</code> keyword argument. I don't see any information in the <a href="https://www.thefoundry.co.uk/products/nuke/developers/100/pythonreference/nuke.Tab_Knob-class.html" rel="nofollow">python api documentation</a> on how to set the <code>n</code> argument so that nuke creates a Group instead of a Tab.</p> <p>My python code looks something like this</p> <pre><code># Get node node = nuke.toNode('MyNode') # Add new tab to node tab = nuke.Tab_Knob('custom_tab', 'Custom Tab') node.addKnob(tab) # Add a group knob group = nuke.Tab_Knob('group_1', 'Group 1') # some other argument or flag? node.addKnob(group) # Add some other knobs name = nuke.String_Knob('name', 'Name') node.addKnob(name) # Add some type of "end group" knob? ? </code></pre> <p>I'm assuming I should be using the <code>Tab_Knob</code> in python just like I use the Tab knob type (i.e. <code>20</code>) in tcl script, and that there is both a start and end knob for the group, but I'm not sure how that should be done in python.</p>
2
2016-08-25T19:25:57Z
39,215,569
<p>Here is how you add Group knobs using python in nuke.</p> <pre><code>node = nuke.toNode('MyNode') # A Group knob is created by passing a 3rd argument to the Tab Knob # This will create a Group knob that is open by default begin = nuke.Tab_Knob('begin', 'My Group :', 1) # Alternatively, if you want to create a Group knob that is closed by # default, you can pass this constant in as the 3rd argument instead # of 1 begin = nuke.Tab_Knob('begin', 'My Group :', nuke.TABBEGINCLOSEDGROUP) # Add Group knob to node node.addKnob(begin) # Create and add some other knobs. They will be inside the group. button1 = nuke.PyScript_Knob("button1", "Button 1") button2 = nuke.PyScript_Knob("button2", "Button 2") button3 = nuke.PyScript_Knob("button3", "Button 3") node.addKnob(button1) node.addKnob(button2) node.addKnob(button3) # Create and add a knob that closes the group (the -1 ends it) end = nuke.Tab_Knob('end', 'My Group :', -1) node.addKnob(end) </code></pre>
0
2016-08-29T21:33:40Z
[ "python", "nuke" ]
Memory error in python with numpy.pad function
39,153,343
<p>I read a csv file in python and create a 4664605 x 4 array. I want a matrix. So, I use the numpy.pad (with constant value = 0) function in order to create a 4664605 x 4664605 matrix. But I have the following error message:</p> <blockquote> <p>Traceback (most recent call last): File "C:\Users\Angelika\Eclipse\Projects\vonNeumann\vonNeumann.py", line 7, in A_new = np.pad(A, ((0,0),(0,4664601)), 'constant',constant_values=(0)) File "C:\Anaconda\lib\site-packages\numpy\lib\arraypad.py", line 1394, in pad newmat = _append_const(newmat, pad_after, after_val, axis) File "C:\Anaconda\lib\site-packages\numpy\lib\arraypad.py", line 138, in _append_const return np.concatenate((arr, np.zeros(padshape, dtype=arr.dtype)), MemoryError</p> </blockquote> <p>I have checked the maximum size of my system in case of overflowing but it is ok. More specifically, sys.maxsize = 9223372036854775807 and matrix size = 21758539806025. The issue is that when I append rows everything is ok. That is, the result is a 9329210 x 4 array. But I can't add 4664601 columns in order to have a matrix. I don't know what to do.</p> <p>Thank you very much, Angelika</p>
1
2016-08-25T19:30:06Z
39,154,019
<p>This is more of a question than an answer. But it's too long for comment lines.</p> <p>The distinction between a <code>4664605 x 4 array</code> and <code>4664605 x 4664605 matrix</code> doesn't make much sense. Squareness does not define a <code>matrix</code>, at least not in most contexts.</p> <p>What's the purpose of adding many 0-filled columns to this array? Even if you had the memory to create one that big, would you have enough memory to hold several copies (as needed for math and many other operations)? A <code>4664605 x 4664605</code> array is about 2.2e13 elements; at 8 bytes per float that is roughly 170 TB.</p> <p>The error line:</p> <pre><code>return np.concatenate((arr, np.zeros(padshape, dtype=arr.dtype)) </code></pre> <p><code>arr</code> must be <code>(4664605, 4)</code> in shape, and <code>padshape</code> <code>(4664605, 4664601)</code>. So it is trying to make a <code>zeros</code> array of <code>padshape</code> size, and then make a new array of the final size. Simply constructing this requires space for two very large arrays.</p> <p>You might save a bit of space by doing the <code>pad</code> directly:</p> <pre><code>res = np.zeros((4664605, 4664605), dtype=arr.dtype) res[:,:4] = arr </code></pre> <p>But still - why make such a big array that is mostly zero?</p>
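The zeros-then-assign approach from this answer, demonstrated at a small scale (shapes shrunk from the question's so it fits in memory); it produces exactly what `np.pad` with constant zeros would:

```python
import numpy as np

arr = np.arange(20.0).reshape(5, 4)      # stand-in for the (4664605, 4) data
res = np.zeros((5, 8), dtype=arr.dtype)  # allocate the final shape once
res[:, :4] = arr                         # copy the real columns in place
print(np.array_equal(res, np.pad(arr, ((0, 0), (0, 4)), 'constant')))
```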
1
2016-08-25T20:14:38Z
[ "python", "arrays", "numpy", "out-of-memory", "anaconda" ]
Using Python's regex .match() method to get the string before and after an underscore
39,153,351
<p><strong><em>I have the following code...</em></strong></p> <pre><code>tablesInDataset = ["henry_jones_12345678", "henry_jones", "henry_jones_123"] for table in tablesInDataset: tableregex = re.compile("\d{8}") tablespec = re.match(tableregex, table) everythingbeforedigits = tablespec.group(0) digits = tablespec.group(1) </code></pre> <p>My regex should only return the string if it contains 8 digits after an underscore. Once it returns the string, I want to use <code>.match()</code> to get two groups using the <code>.group()</code> method. The first group should contain a string with all of the characters before the digits and the second should contain a string with the 8 digits. </p> <p>Could someone please help me figure out the correct regex to use to get the results I am looking for using <code>.match()</code> and <code>.group()</code>?</p>
4
2016-08-25T19:30:37Z
39,153,393
<p>I think this pattern should match what you need: <code>(.*?_)(\d{8})</code>.</p> <p>First group includes everything up to the 8 digits, including the underscore. Second group is the 8 digits.</p> <p>If you don't want the underscore included, use this instead: <code>(.*?)_(\d{8})</code></p>
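A quick check of this pattern against the question's list; only the entry with exactly eight trailing digits produces a match:

```python
import re

tablesInDataset = ["henry_jones_12345678", "henry_jones", "henry_jones_123"]
tableregex = re.compile(r"(.*?_)(\d{8})")

matches = []
for table in tablesInDataset:
    tablespec = tableregex.match(table)
    if tablespec:  # entries without 8 digits after an underscore don't match
        matches.append((tablespec.group(1), tablespec.group(2)))
print(matches)
```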
2
2016-08-25T19:33:22Z
[ "python", "regex", "string", "iterator" ]
Using Python's regex .match() method to get the string before and after an underscore
39,153,351
<p><strong><em>I have the following code...</em></strong></p> <pre><code>tablesInDataset = ["henry_jones_12345678", "henry_jones", "henry_jones_123"] for table in tablesInDataset: tableregex = re.compile("\d{8}") tablespec = re.match(tableregex, table) everythingbeforedigits = tablespec.group(0) digits = tablespec.group(1) </code></pre> <p>My regex should only return the string if it contains 8 digits after an underscore. Once it returns the string, I want to use <code>.match()</code> to get two groups using the <code>.group()</code> method. The first group should contain a string with all of the characters before the digits and the second should contain a string with the 8 digits. </p> <p>Could someone please help me figure out the correct regex to use to get the results I am looking for using <code>.match()</code> and <code>.group()</code>?</p>
4
2016-08-25T19:30:37Z
39,153,397
<pre><code>tableregex = re.compile(r"(.*)_(\d{8})") </code></pre>
4
2016-08-25T19:33:40Z
[ "python", "regex", "string", "iterator" ]
Using Python's regex .match() method to get the string before and after an underscore
39,153,351
<p><strong><em>I have the following code...</em></strong></p> <pre><code>tablesInDataset = ["henry_jones_12345678", "henry_jones", "henry_jones_123"] for table in tablesInDataset: tableregex = re.compile("\d{8}") tablespec = re.match(tableregex, table) everythingbeforedigits = tablespec.group(0) digits = tablespec.group(1) </code></pre> <p>My regex should only return the string if it contains 8 digits after an underscore. Once it returns the string, I want to use <code>.match()</code> to get two groups using the <code>.group()</code> method. The first group should contain a string with all of the characters before the digits and the second should contain a string with the 8 digits. </p> <p>Could someone please help me figure out the correct regex to use to get the results I am looking for using <code>.match()</code> and <code>.group()</code>?</p>
4
2016-08-25T19:30:37Z
39,153,400
<p>Use capture groups:</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; pat = re.compile(r'(?P&lt;name&gt;.*)_(?P&lt;number&gt;\d{8})') &gt;&gt;&gt; pat.findall(s) [('henry_jones', '12345678')] </code></pre> <p>You get the nice feature of named groups, if you want it:</p> <pre><code>&gt;&gt;&gt; match = pat.match(s) &gt;&gt;&gt; match.groupdict() {'name': 'henry_jones', 'number': '12345678'} </code></pre>
5
2016-08-25T19:33:52Z
[ "python", "regex", "string", "iterator" ]
Using Python's regex .match() method to get the string before and after an underscore
39,153,351
<p><strong><em>I have the following code...</em></strong></p> <pre><code>tablesInDataset = ["henry_jones_12345678", "henry_jones", "henry_jones_123"] for table in tablesInDataset: tableregex = re.compile("\d{8}") tablespec = re.match(tableregex, table) everythingbeforedigits = tablespec.group(0) digits = tablespec.group(1) </code></pre> <p>My regex should only return the string if it contains 8 digits after an underscore. Once it returns the string, I want to use <code>.match()</code> to get two groups using the <code>.group()</code> method. The first group should contain a string with all of the characters before the digits and the second should contain a string with the 8 digits. </p> <p>Could someone please help me figure out the correct regex to use to get the results I am looking for using <code>.match()</code> and <code>.group()</code>?</p>
4
2016-08-25T19:30:37Z
39,153,647
<p>Here you go:</p> <pre><code>import re tablesInDataset = ["henry_jones_12345678", "henry_jones", "henry_jones_123"] rx = re.compile(r'^(\D+)_(\d{8})$') matches = [(match.groups()) \ for item in tablesInDataset \ for match in [rx.search(item)] \ if match] print(matches) </code></pre> <p>Better than any <em>dot-star-soup</em> :)</p>
1
2016-08-25T19:50:38Z
[ "python", "regex", "string", "iterator" ]
How to sort stopped EC2s by time using "state_transition_reason" variable? Python Boto3
39,153,410
<p>I am seeing a steep increase in my AWS account costs. The largest cost items are: <strong>EC2: 67% RDS: 12%</strong></p> <p>I have more than 50 stopped EC2s. One of them has been sitting there in a stopped state since September of the year 2015.</p> <p>I found a way to get the stopped time of EC2s using a variable called:</p> <blockquote> <p>state_transition_reason</p> </blockquote> <p>Here is how the code looks:</p> <pre><code>import boto3 session = boto3.Session(region_name="us-east-1") ec2 = session.resource('ec2') instances = ec2.instances.filter( Filters=[{'Name': 'instance-state-name', 'Values': ['stopped']}]) count = 0 for i in instances: print "{0}, {1}, {2}".format( i.id, i.state_transition_reason, i.state['Name']) count +=1 print count </code></pre> <p>It prints out the following information:</p> <pre><code>i-pll78233b, User initiated (2016-07-06 21:14:03 GMT), stopped i-tr62l5647, User initiated (2015-12-18 21:35:20 GMT), stopped i-9oc4391ca, User initiated (2016-03-17 04:37:46 GMT), stopped 55 </code></pre> <p><strong>My question is</strong>: How can I sort instances (EC2s) by their time being stopped? In my example I would love to see the output in the following order, starting from the year 2015:</p> <pre><code>i-tr62l5647, User initiated (2015-12-18 21:35:20 GMT), stopped i-9oc4391ca, User initiated (2016-03-17 04:37:46 GMT), stopped i-pll78233b, User initiated (2016-07-06 21:14:03 GMT), stopped 55 </code></pre> <p>Thanks.</p>
1
2016-08-25T19:34:22Z
39,153,501
<p>As long as the User initiated part never varies, we can simply sort the instances by state_transition_reason:</p> <pre><code>sortedInstances = sorted(instances, key=lambda k: k.state_transition_reason) </code></pre>
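This works because the `User initiated (` prefix is constant and the embedded `YYYY-MM-DD HH:MM:SS` date sorts correctly as a plain string. A sketch with hypothetical stand-in objects (real code would pass the live boto3 instances instead):

```python
class FakeInstance:
    """Hypothetical stand-in for a boto3 EC2 instance resource."""
    def __init__(self, instance_id, reason):
        self.id = instance_id
        self.state_transition_reason = reason

instances = [
    FakeInstance("i-pll78233b", "User initiated (2016-07-06 21:14:03 GMT)"),
    FakeInstance("i-tr62l5647", "User initiated (2015-12-18 21:35:20 GMT)"),
    FakeInstance("i-9oc4391ca", "User initiated (2016-03-17 04:37:46 GMT)"),
]

# Lexicographic sort of the reason strings is chronological here
sortedInstances = sorted(instances, key=lambda k: k.state_transition_reason)
print([i.id for i in sortedInstances])
```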
0
2016-08-25T19:40:35Z
[ "python", "amazon-web-services", "amazon-ec2", "aws-lambda", "boto3" ]
How to efficiently perform loop statements for data analysis?
39,153,448
<p>I wrote this code recently to comb over data points and it works beautifully for small data sets. However whenever the data sets get too big all I get is junk output and a message in the pycharm console that reads <code>! Too much output to process</code>. Then the maximum line output seems to be around 55,000 lines. </p> <p>The point of this code is to analyze the proximity of coordinates in file1 to all elements in file2. Then return any resulting matches of coordinates which share proximity. As you will see below I wrote a nested for loop to do this which I understand may be a sort of brute force tactic so that could perhaps be the issue in getting the error message later on?</p> <p>Heres the code:</p> <pre><code>import numpy as np import math as ma filename1 = "C:\Users\Justin\Desktop\file1.data" data1 = np.genfromtxt(filename1, skip_header=1, usecols=(0, 1)) #dtype=[ #("x1", "f9"), #("y1", "f9")]) #print "data1", data1 filename2 = "C:\Users\Justin\Desktop\file2.data" data2 = np.genfromtxt(filename2, skip_header=1, usecols=(0, 1)) #dtype=[ #("x2", "f9"), #("y2", "f9")]) #print "data2",data2 def d(a,b): d = ma.acos(ma.sin(ma.radians(a[1]))*ma.sin(ma.radians(b[1])) +ma.cos(ma.radians(a[1]))*ma.cos(ma.radians(b[1]))* (ma.cos(ma.radians((a[0]-b[0]))))) return d results = open("results.txt", "w") for coor1 in data1: for coor2 in data2: n=0 a = [coor1[0], coor1[1]] b = [coor2[0], coor2[1]] #print "a", a #print "b", b if d(a, b) &lt; 0.07865: # if true what happens results.write("\t".join([str(coor1), str(coor2), "True", str(d)]) + "\n") else: results.write("\t".join([str(coor1), str(coor2), "False", str(d)]) + "\n") results.close() </code></pre> <p>Ideally I wont get this issue when I start cross checking data files of over 500,000 coordinates each since I doubt many of them will share much proximity.</p> <p>But there are 2 reasons for posting this (again sort of). 
First of all, to share this code with anybody who can use it, since it has already proved to be a powerful tool for analyzing data or coordinates in any arbitrary spherical space. Secondly, to see if anybody has any advice on how to make this more efficient and help me solve the error message.</p> <p>More specifically, the error message appears when my proximity restriction allows a huge amount of separation, as well as when reading in both elements "a" and "b". </p> <p>I really doubt that PyCharm has an issue processing more than 55,000 lines of output, but I don't know... My guess is that I either botched the code, or it could be a Windows 10 problem?</p> <p>Thanks in advance for any help. I am pretty new to this so any advice will surely be useful. </p>
1
2016-08-25T19:36:57Z
39,155,156
<p>As dblclik mentions in his comment on the post, there are certainly ways to make your code more efficient, avoiding the full computation of the nested for-loop. However, I do not think that this will help you solve the error message that you are getting.</p> <p>I do not think that PyCharm has an issue processing n lines of code as you are mentioning; I rather suspect that it is the fact that you are printing, i.e. <em>outputting</em>, everything to PyCharm, which, especially when printing the distances of not-so-close x y pairs, requires memory.</p> <p>I would suggest that, instead of printing your results, you try to save them to a .txt file or in a distance matrix. In this way you are also able to save the output of the calculations!</p> <p><strong>Example:</strong> Saving results to a list</p> <pre><code>results = [] ... for coor1 in data1: for coor2 in data2: distance = d(coor1, coor2) if distance &lt; thresh: results.append((str(coor1), str(coor2), "True", str(distance))) else: results.append((str(coor1), str(coor2), "False", str(distance))) </code></pre> <p><strong>Example:</strong> Saving to a text file</p> <pre><code>results = open("results.txt", "w") ... for coor1 in data1: for coor2 in data2: distance = d(coor1, coor2) if distance &lt; thresh: results.write("\t".join([str(coor1), str(coor2), "True", str(distance)])+"\n") else: results.write("\t".join([str(coor1), str(coor2), "False", str(distance)])+"\n") results.close() </code></pre> <p>I hope this method will help you run your script fully through!</p>
2
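A self-contained sketch of the compute-once-then-write idea, with a caution not covered above: for identical or near-identical points, floating-point rounding can push the `acos` argument slightly above 1, which raises `ValueError: math domain error`, so it is worth clamping it:

```python
import math

def angular_distance(a, b):
    """Great-circle angular distance between (lon, lat) pairs in degrees."""
    x = (math.sin(math.radians(a[1])) * math.sin(math.radians(b[1]))
         + math.cos(math.radians(a[1])) * math.cos(math.radians(b[1]))
         * math.cos(math.radians(a[0] - b[0])))
    # Clamp to [-1, 1]: rounding error can make x slightly exceed 1
    return math.acos(max(-1.0, min(1.0, x)))

data1 = [(0.0, 0.0), (10.0, 20.0)]   # tiny stand-in for the real files
data2 = [(0.0, 90.0), (10.0, 20.0)]

with open("results.txt", "w") as results:
    for a in data1:
        for b in data2:
            dist = angular_distance(a, b)   # compute once, reuse
            results.write("\t".join(
                [str(a), str(b), str(dist < 0.07865), str(dist)]) + "\n")
```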
2016-08-25T21:42:10Z
[ "python", "numpy", "coordinates", "pycharm", "data-analysis" ]
What is this kind of chain assignment and comparison in python?
39,153,474
<p>(Python 2.7 question)</p> <p>I found a pattern similar to the following in a python codebase:</p> <pre><code>&gt;&gt;&gt; a = b == 7 &gt;&gt;&gt; a True &gt;&gt;&gt; b 7 &gt;&gt;&gt; a = b == -7 &gt;&gt;&gt; a False &gt;&gt;&gt; b 7 </code></pre> <p>I'm looking for the terminology for this - I found some other answers that referred to (a = b = 7) as "chain assignment". What is the format (a = b == 7) called?</p>
0
2016-08-25T19:39:07Z
39,153,550
<p><code>b == 7</code> is comparison for equality. The result of a comparison is a boolean value that is assigned to <code>a</code>.</p>
1
2016-08-25T19:43:46Z
[ "python", "python-2.7" ]
What is this kind of chain assignment and comparison in python?
39,153,474
<p>(Python 2.7 question)</p> <p>I found a pattern similar to the following in a python codebase:</p> <pre><code>&gt;&gt;&gt; a = b == 7 &gt;&gt;&gt; a True &gt;&gt;&gt; b 7 &gt;&gt;&gt; a = b == -7 &gt;&gt;&gt; a False &gt;&gt;&gt; b 7 </code></pre> <p>I'm looking for the terminology for this - I found some other answers that referred to (a = b = 7) as "chain assignment". What is the format (a = b == 7) called?</p>
0
2016-08-25T19:39:07Z
39,153,570
<p>This is just a normal assignment statement. If you are ever curious about how a line of python is parsed, try the <code>ast</code> module:</p> <pre><code>&gt;&gt;&gt; import ast &gt;&gt;&gt; ast.dump(ast.parse('a = b == 7'), annotate_fields=False) "Module([Assign([Name('a', Store())], Compare(Name('b', Load()), [Eq()], [Num(7)]))])" </code></pre> <p>We can see that there is an equality comparison with <code>b</code> and <code>7</code>, and that result is used in an assignment to <code>a</code>. </p>
1
2016-08-25T19:44:57Z
[ "python", "python-2.7" ]
What is this kind of chain assignment and comparison in python?
39,153,474
<p>(Python 2.7 question)</p> <p>I found a pattern similar to the following in a python codebase:</p> <pre><code>&gt;&gt;&gt; a = b == 7 &gt;&gt;&gt; a True &gt;&gt;&gt; b 7 &gt;&gt;&gt; a = b == -7 &gt;&gt;&gt; a False &gt;&gt;&gt; b 7 </code></pre> <p>I'm looking for the terminology for this - I found some other answers that referred to (a = b = 7) as "chain assignment". What is the format (a = b == 7) called?</p>
0
2016-08-25T19:39:07Z
39,153,584
<p>This is just an assignment of a boolean expression to a variable:</p> <pre><code> a = (b == 7) # ^ comparison expression that evaluates to True or False # ^ assign the expression to a </code></pre>
1
2016-08-25T19:45:45Z
[ "python", "python-2.7" ]
Converting binary numbers to hexadecimal in Python not working for more than 5 digits (Homework)
39,153,583
<p>I have an assignment from my computer science teacher to make a program that converts a binary number to hexadecimal using Python. I have made it so that the binary properly converts to base ten/decimal and the decimal properly converts to hex, but it only works if the binary number is less than 5 digits (i.e. 01101 correctly turns out 16 and 11111 turns 1F, but something like 11010110 stupidly becomes "6") Here's my code:</p> <pre><code>def main(): print("Convert binary numbers to hexadecimal.") binary = list(input("Enter a binary number: ")) # User input for i in range(len(binary)): # Convert binary to list of integers binary[i] = int(binary[i]) baseten = 0 for i in range(len(binary)): # Converts binary to base ten baseten = int(baseten + binary[i] * 2**i) x = int(0) while 16**(x+1) &lt; baseten: # Determines beginning value of exponent x+=1 while x &gt;= 0: value = int(baseten/16**x) if value &lt; 10: print(value, end="") if value == 10: print("A", end="") if value == 11: print("B", end="") if value == 12: print("C", end="") if value == 13: print("D", end="") if value == 14: print("E", end="") if value == 15: print("F", end="") baseten-=int(16**x) x-=1 </code></pre> <p>NOTE - I know Java pretty well, I've had over a year of experience with it, so don't expect not to understand something that is a bit complicated. Also the goal is not just to use the hex() function.</p>
2
2016-08-25T19:45:38Z
39,153,839
<p>You may convert the <code>binary</code> string to a <code>hex</code> string (without using the <code>hex()</code> function) as:</p> <pre><code>&gt;&gt;&gt; binary_string = '0000010010001101' &gt;&gt;&gt; '%0*X' % ((len(binary_string) + 3) // 4, int(binary_string, 2)) '048D' </code></pre> <p>There is a simpler way to convert <code>binary</code> to <code>hexadecimal</code> using <code>hex()</code>, which might be useful for someone else:</p> <pre><code>&gt;&gt;&gt; hex(int('010110', 2)) '0x16' </code></pre>
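A runnable version of the first snippet; the `(len + 3) // 4` term just rounds up to one hex digit per four bits, so leading zeros in the binary string are preserved in the output:

```python
binary_string = '0000010010001101'
width = (len(binary_string) + 3) // 4          # hex digits needed, rounded up
hex_string = '%0*X' % (width, int(binary_string, 2))
print(hex_string)
```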
0
2016-08-25T20:02:58Z
[ "python", "computer-science" ]
Converting binary numbers to hexadecimal in Python not working for more than 5 digits (Homework)
39,153,583
<p>I have an assignment from my computer science teacher to make a program that converts a binary number to hexadecimal using Python. I have made it so that the binary properly converts to base ten/decimal and the decimal properly converts to hex, but it only works if the binary number is less than 5 digits (i.e. 01101 correctly turns out 16 and 11111 turns 1F, but something like 11010110 stupidly becomes "6") Here's my code:</p> <pre><code>def main(): print("Convert binary numbers to hexadecimal.") binary = list(input("Enter a binary number: ")) # User input for i in range(len(binary)): # Convert binary to list of integers binary[i] = int(binary[i]) baseten = 0 for i in range(len(binary)): # Converts binary to base ten baseten = int(baseten + binary[i] * 2**i) x = int(0) while 16**(x+1) &lt; baseten: # Determines beginning value of exponent x+=1 while x &gt;= 0: value = int(baseten/16**x) if value &lt; 10: print(value, end="") if value == 10: print("A", end="") if value == 11: print("B", end="") if value == 12: print("C", end="") if value == 13: print("D", end="") if value == 14: print("E", end="") if value == 15: print("F", end="") baseten-=int(16**x) x-=1 </code></pre> <p>NOTE - I know Java pretty well, I've had over a year of experience with it, so don't expect not to understand something that is a bit complicated. Also the goal is not just to use the hex() function.</p>
2
2016-08-25T19:45:38Z
39,153,970
<p>I'm going to focus on helping you debug what's going wrong in your code as opposed to suggesting an alternative way to do it. It is homework after all :)</p> <p>As others have noticed, the logic for base 10 conversion is off. A binary string of <code>01101</code> is equivalent to <code>13</code> in base ten, and <code>D</code> in hex. This is because you need to iterate through your binary string starting at the least significant digit. Without changing your code too much, you can do this by indexing to <code>len(binary) - 1 - i</code> instead of <code>i</code> in your conversion loop:</p> <pre><code>for i in range(len(binary)): # Converts binary to base ten baseten = int(baseten + binary[len(binary) - 1 - i] * 2**i) </code></pre> <p>Your second problem is that <code>baseten-=int(16**x)</code> should be <code>baseten -= value*(16**x)</code>.</p> <p>These changes will correctly return <code>D6</code> for a binary string of <code>11010110</code>.</p>
2
2016-08-25T20:11:45Z
[ "python", "computer-science" ]
Converting binary numbers to hexadecimal in Python not working for more than 5 digits (Homework)
39,153,583
<p>I have an assignment from my computer science teacher to make a program that converts a binary number to hexadecimal using Python. I have made it so that the binary properly converts to base ten/decimal and the decimal properly converts to hex, but it only works if the binary number is less than 5 digits (i.e. 01101 correctly turns out 16 and 11111 turns 1F, but something like 11010110 stupidly becomes "6") Here's my code:</p> <pre><code>def main(): print("Convert binary numbers to hexadecimal.") binary = list(input("Enter a binary number: ")) # User input for i in range(len(binary)): # Convert binary to list of integers binary[i] = int(binary[i]) baseten = 0 for i in range(len(binary)): # Converts binary to base ten baseten = int(baseten + binary[i] * 2**i) x = int(0) while 16**(x+1) &lt; baseten: # Determines beginning value of exponent x+=1 while x &gt;= 0: value = int(baseten/16**x) if value &lt; 10: print(value, end="") if value == 10: print("A", end="") if value == 11: print("B", end="") if value == 12: print("C", end="") if value == 13: print("D", end="") if value == 14: print("E", end="") if value == 15: print("F", end="") baseten-=int(16**x) x-=1 </code></pre> <p>NOTE - I know Java pretty well, I've had over a year of experience with it, so don't expect not to understand something that is a bit complicated. Also the goal is not just to use the hex() function.</p>
2
2016-08-25T19:45:38Z
39,154,213
<p>The following program converts binary numbers into hexadecimal numbers, and, depending on the base settings that you provide in the code, it can also convert between other bases:</p> <pre><code>import string TABLE = string.digits + string.ascii_uppercase def main(): initial_variable = input('Please enter a number: ') base_variable = 2 convert_variable = 16 integer = str_to_int(initial_variable, base_variable) hexadecimal = int_to_str(integer, convert_variable) print(hexadecimal) def str_to_int(text, base): integer = 0 for character in text: if character not in TABLE: raise ValueError('found unknown character') value = TABLE.index(character) if value &gt;= base: raise ValueError('found digit outside base') integer *= base integer += value return integer def int_to_str(integer, base): if integer == 0: return TABLE[0] array = [] while integer: integer, value = divmod(integer, base) array.append(TABLE[value]) return ''.join(reversed(array)) if __name__ == '__main__': main() </code></pre>
0
2016-08-25T20:27:18Z
[ "python", "computer-science" ]
Whitespace at the end of line is not ignored by python regex
39,153,613
<p>The leading spaces are ignored but the trailing ones are not in the below regular expression code. It's just a <code>"Name = Value"</code> string but with spaces. I thought the <code>\s*</code> after the capture would ignore spaces.</p> <pre><code>import re line = " Name = Peppa Pig " match = re.search(r"\s*(Name)\s*=\s*(.+)\s*", line) print(match.groups()) &gt;&gt;&gt;('Name', 'Peppa Pig ') # Why extra spaces after Pig! </code></pre> <p>What am I missing?</p>
2
2016-08-25T19:48:23Z
39,153,658
<p>You're getting trailing spaces because of the greedy nature of <code>.+</code>.</p> <p>You can use this regex to correctly capture your value:</p> <pre><code>&gt;&gt;&gt; re.search(r"\s*(Name)\s*=\s*(.+?)\s*$", line).groups() ('Name', 'Peppa Pig') </code></pre> <p><code>\s*$</code> ensures we are capturing the value before the trailing white spaces at the end.</p>
8
2016-08-25T19:51:11Z
[ "python", "regex" ]
Whitespace at the end of line is not ignored by python regex
39,153,613
<p>The leading spaces are ignored but the trailing ones are not in the below regular expression code. It's just a <code>"Name = Value"</code> string but with spaces. I thought the <code>\s*</code> after the capture would ignore spaces.</p> <pre><code>import re line = " Name = Peppa Pig " match = re.search(r"\s*(Name)\s*=\s*(.+)\s*", line) print(match.groups()) &gt;&gt;&gt;('Name', 'Peppa Pig ') # Why extra spaces after Pig! </code></pre> <p>What am I missing?</p>
2
2016-08-25T19:48:23Z
39,154,641
<p>Instead of using <code>(.+)\s*</code> <em>(where the <code>\s*</code> is useless since "zero or more white-spaces" isn't a constraint after the greedy quantifier <code>.+</code>, it's like writing nothing)</em>, you can use <code>(.*\S)</code>, which automatically trims the string at the last non-whitespace character <code>\S</code>.</p> <pre><code>match = re.search(r"\b(Name)\s*=\s*(.*\S)", line) </code></pre> <p>Question: is the capture of the already known "Name" literal string really needed?</p>
2
2016-08-25T20:59:21Z
[ "python", "regex" ]
Whitespace at the end of line is not ignored by python regex
39,153,613
<p>The leading spaces are ignored but the trailing ones are not in the below regular expression code. It's just a <code>"Name = Value"</code> string but with spaces. I thought the <code>\s*</code> after the capture would ignore spaces.</p> <pre><code>import re line = " Name = Peppa Pig " match = re.search(r"\s*(Name)\s*=\s*(.+)\s*", line) print(match.groups()) &gt;&gt;&gt;('Name', 'Peppa Pig ') # Why extra spaces after Pig! </code></pre> <p>What am I missing?</p>
2
2016-08-25T19:48:23Z
39,154,879
<p>The last <code>.+</code> grabs the whole rest of the line (as <code>.</code> matches any char but a newline), and then starts backtracking, checking if the subsequent subpatterns should match. Since the subsequent subpattern is <code>\s*</code> that can match an empty string (it matches 0+ whitespaces), this pattern successfully matches at the end of the string, and a valid match with trailing whitespaces is returned.</p> <p>See <a href="https://regex101.com/r/dL6eN0/1" rel="nofollow">your regex demo</a> (pay special attention at Step 15):</p> <p><a href="http://i.stack.imgur.com/rsB6L.png" rel="nofollow"><img src="http://i.stack.imgur.com/rsB6L.png" alt="enter image description here"></a></p> <p>You may let Python do the <code>strip</code> job inside a list comprehension and simplify the regex to just <code>(Name)\s*=(.+)</code>:</p> <pre><code>import re line = " Name = Peppa Pig " match = [(x,y.strip()) for x,y in re.findall(r"(Name)\s*=(.+)", line)] print(match) </code></pre> <p>See <a href="http://ideone.com/OSXzdM" rel="nofollow">Python demo</a></p>
1
2016-08-25T21:18:33Z
[ "python", "regex" ]
Why can't python's datetime.max survive a round trip through timestamp / fromtimestamp?
39,153,700
<p>In most cases I can round-trip datetimes to and from a timestamp as follows:</p> <pre><code>from datetime import datetime dt = datetime(2016, 1, 1, 12, 34, 56, 789) print(dt) print(datetime.fromtimestamp(dt.timestamp())) </code></pre> <blockquote> <p>2016-01-01 12:34:56.000789</p> <p>2016-01-01 12:34:56.000789</p> </blockquote> <p>But this doesn't work for datetime.max. Why is that?</p> <pre><code>dt = datetime.max print(dt) print(datetime.fromtimestamp(dt.timestamp())) </code></pre> <blockquote> <p>9999-12-31 23:59:59.999999</p> <p>Traceback (most recent call last): File "python", line 9, in ValueError: year is out of range</p> </blockquote> <p>More precisely, why hasn't the datetime library taken this case into account?</p>
1
2016-08-25T19:53:41Z
39,153,819
<p>Simply because the maximum of a datetime object is not the same as the maximum of a valid timestamp.</p> <p>There's also a good reason to limit the range of timestamps: they are just a simple python <code>float</code>, which on "normal" machines is a double-precision floating point. But: you lose precision:</p> <pre><code>print(datetime.max.timestamp()) 253402297200.0 print(datetime.max.second) 59 print(datetime.max.microsecond) 999999 </code></pre> <p><strong>spot the error.</strong></p> <p>Timestamps based on floating point numbers are, <em>by definition</em>, less accurate the further in the future they are. So not being able to represent arbitrary valid <code>datetimes</code> in a timestamp is perfectly reasonable, just as restricting them to a couple of thousand years in the future is.</p> <p>so:</p> <blockquote> <p>More precisely, why hasn't the datetime library taken this case into account?</p> </blockquote> <p>Because timestamps so far in the future are unreliable and very likely do not represent the time you meant, rejecting them is a wise thing.</p> <p>Takeaway: a floating point number like the one <code>timestamp()</code> produces is not an appropriate way of transporting times with fixed precision. If you at all can, avoid it.</p>
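The precision claim can be checked directly: near `datetime.max`, the gap between adjacent `float` values is about 3e-5 seconds, so a timestamp at that magnitude cannot carry microseconds at all (a sketch; `math.ulp` needs Python 3.9+):

```python
import math

ts_max = 253402297200.0   # roughly datetime.max as a POSIX timestamp
gap = math.ulp(ts_max)    # distance to the next representable float
print(gap)                # far coarser than the 1e-6 s a datetime stores
```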
1
2016-08-25T20:01:36Z
[ "python", "python-3.x", "datetime" ]
Why can't python's datetime.max survive a round trip through timestamp / fromtimestamp?
39,153,700
<p>In most cases I can round-trip datetimes to and from a timestamp as follows:</p> <pre><code>from datetime import datetime dt = datetime(2016, 1, 1, 12, 34, 56, 789) print(dt) print(datetime.fromtimestamp(dt.timestamp())) </code></pre> <blockquote> <p>2016-01-01 12:34:56.000789</p> <p>2016-01-01 12:34:56.000789</p> </blockquote> <p>But this doesn't work for datetime.max. Why is that?</p> <pre><code>dt = datetime.max print(dt) print(datetime.fromtimestamp(dt.timestamp())) </code></pre> <blockquote> <p>9999-12-31 23:59:59.999999</p> <p>Traceback (most recent call last): File "python", line 9, in ValueError: year is out of range</p> </blockquote> <p>More precisely, why hasn't the datetime library taken this case into account?</p>
1
2016-08-25T19:53:41Z
39,153,870
<pre><code>dt = datetime.max </code></pre> <p>is checking against the maximum of the datetime class. It is a static value.</p> <pre><code>datetime.fromtimestamp(dt.timestamp())) </code></pre> <p>This, however, is working against a timestamp. Different classes, different methods. <a href="https://docs.python.org/3.5/library/datetime.html#datetime.datetime.max" rel="nofollow">Here</a> are the docs for datetime's methods.</p>
0
2016-08-25T20:05:00Z
[ "python", "python-3.x", "datetime" ]
Numpy array of numpy arrays with certain format
39,153,756
<p>I want to declare a numpy array (<code>arr</code>) with elements having a certain format, like this:</p> <pre><code>dt1 = np.dtype([('sec', '&lt;i8'), ('nsec', '&lt;i8')]) dt2 = np.dtype([('name', 'S10'), ('value', '&lt;i4')]) arr = np.array([('x', dtype=dt1), ('y', dtype=dt2)]) </code></pre> <p>The structure is something like this:</p> <pre><code>arr['x'] = elements with the format dt1 (sec and nsec) arr['y'] = array of n elements with the format dt2 (name and value) </code></pre> <p>And the elements within should be accessed like this:</p> <pre><code>arr['x']['sec'], arr['x']['nsec'], arr['y'][0]['name'] etc. </code></pre> <p>But I get an <code>invalid syntax</code> error. What is the correct syntax in this situation?</p>
2
2016-08-25T19:57:08Z
39,154,267
<p>A doubly compound dtype might work:</p> <pre><code>In [834]: dt1 = np.dtype([('sec', '&lt;i8'), ('nsec', '&lt;i8')]) ...: dt2 = np.dtype([('name', 'S10'), ('value', '&lt;i4')]) ...: In [835]: dt = np.dtype([('x', dt1),('y', dt2)]) In [837]: z=np.ones((3,), dtype=dt) In [838]: z Out[838]: array([((1, 1), (b'1', 1)), ((1, 1), (b'1', 1)), ((1, 1), (b'1', 1))], dtype=[('x', [('sec', '&lt;i8'), ('nsec', '&lt;i8')]), ('y', [('name', 'S10'), ('value', '&lt;i4')])]) In [839]: z['x']['sec'] Out[839]: array([1, 1, 1], dtype=int64) In [841]: z['y'][0]['name']='y0 name' In [842]: z Out[842]: array([((1, 1), (b'y0 name', 1)), ((1, 1), (b'1', 1)), ((1, 1), (b'1', 1))], dtype=[('x', [('sec', '&lt;i8'), ('nsec', '&lt;i8')]), ('y', [('name', 'S10'), ('value', '&lt;i4')])]) In [843]: z[0] Out[843]: ((1, 1), (b'y0 name', 1)) </code></pre> <p>A dictionary would work with similar syntax</p> <pre><code>In [845]: d={'x':np.ones((3,),dt1), 'y':np.zeros((4,),dt2)} In [846]: d Out[846]: {'x': array([(1, 1), (1, 1), (1, 1)], dtype=[('sec', '&lt;i8'), ('nsec', '&lt;i8')]), 'y': array([(b'', 0), (b'', 0), (b'', 0), (b'', 0)], dtype=[('name', 'S10'), ('value', '&lt;i4')])} In [847]: d['y'][0]['name']='y0 name' </code></pre>
3
2016-08-25T20:31:29Z
[ "python", "arrays", "numpy", "syntax" ]
Determining the Order Files Are Run in a Website Built By Someone Else
39,153,790
<p>Ok, this question is going to sound pretty dumb, but I'm an absolute novice when it comes to web development and have been tasked with fixing a website for my job (that has absolutely nothing in the way of documentation).</p> <p>Basically, I'm wondering if there is any tool or method for tracking the order a website loads files when it is used. I just want to know a very high-level order of the pipeline. The app I've been tasked with maintaining is written in a mix of django, javascript, and HTML (none of which I really know, besides some basic django). I can understand how django works, and I kind of understand what's going on with HTML, but (for instance) I'm at a complete loss as to how the HTML code is calling javascript, and how that information is transfered back to HTML. I wish I could show the code I'm using, but it can't be released publicly. </p> <p>I'm looking for what amounts to a debugger that will let me step through each file of code, but I don't think it works like that for web development.</p> <p>Thank you</p>
0
2016-08-25T19:59:31Z
39,154,045
<p>Try opening the page in Chrome and hitting F12 - there's a tonne of developer tools and web page debuggers in there.</p> <p>For your particular question about loading order, check the Network tab, then hit refresh on your page - it'll show you every file that the browser loads, starting with the HTML in your browser's address bar. </p> <p>If you're trying to figure out javascript, check out the Sources tab. It even allows you to create breakpoints - very handy for following along with what a page is doing. </p>
0
2016-08-25T20:16:01Z
[ "javascript", "python", "django" ]
why have duplications in mu code using python
39,153,860
<p>I read from an xlsx file. I'm trying to group by the router column and sum the mb column. However, I have some duplications and I don't know why.</p> <p>Here is the code; can you help me? </p> <pre><code>def totalroutermb(): arrayItems = items arrayItems2 = items arrayRouterAndMb = [] num_mb = float(0.0) for user in arrayItems: for user2 in arrayItems2: if user.router == user2.router: num_mb += float(user2.download_in_mb) arrayItems2.remove(user2) dict = {"router": user.router, "Total": num_mb} arrayRouterAndMb.append(dict) for user in arrayRouterAndMb: print(user) print("\n") </code></pre> <p>And it prints this:</p> <pre><code>{'router': 'Kut_2007_CO_Sag_a7:41', 'Total': 8861222409750.0} {'router': 'Kut_2017_CO_Sol_e0:06', 'Total': 12448391550377.031} {'router': 'Kut_Giris_AsansorSol_a7:3e', 'Total': 12460052878502.203} {'router': 'Kut_Giris_AsansorSol_a7:3e', 'Total': 12460052878502.203} {'router': 'Kut_Giris_AsansorSol_a7:3e', 'Total': 12470382956627.203} {'router': 'Kut_Giris_Masa1_a4:82', 'Total': 18009186394127.203} {'router': 'Kut_Kat1_Sag1_a9:3e', 'Total': 35296935066002.2} {'router': 'Kut_Kat1_Sag1_a9:3e', 'Total': 35316851362878.78} </code></pre>
2
2016-08-25T20:04:12Z
39,186,920
<p>I solved it!! Sorry but I'm only few days with python, i'm learning</p> <pre><code>def totalroutermb(): arrayRouterAndMb = [] for item in items: aux = [item.router.strip(), float(item.download_in_mb)] arrayRouterAndMb.append(aux) # order array arrayRouterAndMb.sort() # add all mb from each router arrayEachRouterMb = [] pos = 0 firstObj = arrayRouterAndMb[0] num_mb = firstObj[1] for obj in arrayRouterAndMb: pos += 1 if pos == len(items): dict = {"Total": num_mb, "router": obj[0]} arrayEachRouterMb.append(dict) break objNext = arrayRouterAndMb[pos] if objNext[0] == obj[0]: num_mb += float(objNext[1]) elif obj[0] != objNext[0]: dict = {"Total": num_mb, "router": obj[0]} arrayEachRouterMb.append(dict) num_mb = objNext[1] for user in arrayEachRouterMb: print(user) print("\n") </code></pre>
0
2016-08-28T00:16:05Z
[ "python", "python-3.x" ]
How to define and use percentage in Pint
39,153,885
<p>I'm currently using <a href="https://pint.readthedocs.io/en/0.7.2/" rel="nofollow">Pint</a> to handle units and unit conversions. This seems to work well for the units that are already defined in Pint, for example</p> <pre><code>&gt;&gt;&gt; import pint &gt;&gt;&gt; ureg = pint.UnitRegistry() &gt;&gt;&gt; Q = ureg.Quantity &gt;&gt;&gt; a = Q(5, 'm/s') &gt;&gt;&gt; a &lt;Quantity(5, 'meter / second')&gt; &gt;&gt;&gt; a.to('ft/s') &lt;Quantity(16.404199475065617, 'foot / second')&gt; </code></pre> <p>I tried to <a href="https://pint.readthedocs.io/en/0.7.2/defining.html#programmatically" rel="nofollow">define my own units</a>, which represent percentage. As far as unit conversions go, a percentage is simply 100 times a dimensionless fraction, which is how I defined it.</p> <pre><code>&gt;&gt;&gt; ureg.define('percent = dimensionless * 100 = pct') &gt;&gt;&gt; a = Q(5, 'pct') &gt;&gt;&gt; a &lt;Quantity(5, 'percent')&gt; </code></pre> <p>However I cannot seem to convert back and forth between fraction (<code>'dimensionless'</code>) and <code>'pct'</code>.</p> <pre><code>&gt;&gt;&gt; a.to('dimensionless') Traceback (most recent call last): File "&lt;pyshell#31&gt;", line 1, in &lt;module&gt; a.to('dimensionless') File "C:\Python35\python-3.5.1.amd64\lib\site-packages\pint\quantity.py", line 263, in to magnitude = self._convert_magnitude_not_inplace(other, *contexts, **ctx_kwargs) File "C:\Python35\python-3.5.1.amd64\lib\site-packages\pint\quantity.py", line 231, in _convert_magnitude_not_inplace return self._REGISTRY.convert(self._magnitude, self._units, other) File "C:\Python35\python-3.5.1.amd64\lib\site-packages\pint\unit.py", line 1026, in convert return self._convert(value, src, dst, inplace) File "C:\Python35\python-3.5.1.amd64\lib\site-packages\pint\unit.py", line 1042, in _convert src_dim = self._get_dimensionality(src) File "C:\Python35\python-3.5.1.amd64\lib\site-packages\pint\unit.py", line 813, in _get_dimensionality 
self._get_dimensionality_recurse(input_units, 1.0, accumulator) File "C:\Python35\python-3.5.1.amd64\lib\site-packages\pint\unit.py", line 837, in _get_dimensionality_recurse self._get_dimensionality_recurse(reg.reference, exp2, accumulator) File "C:\Python35\python-3.5.1.amd64\lib\site-packages\pint\unit.py", line 835, in _get_dimensionality_recurse reg = self._units[self.get_name(key)] KeyError: '' </code></pre> <p>What I'd essentially like to do is be able to convert between e.g. "0.73" and "73%". How can I define and use such a unit?</p>
3
2016-08-25T20:06:15Z
39,154,101
<p>It seems that GitHub issue <a href="https://github.com/hgrecco/pint/issues/185" rel="nofollow">hgrecco/pint#185</a> covers the case you're describing.</p> <p>Using the work-around discussed in that issue works for me using <code>Pint-0.7.2</code>:</p> <pre><code>from pint.unit import ScaleConverter from pint.unit import UnitDefinition import pint ureg = pint.UnitRegistry() Q = ureg.Quantity ureg.define(UnitDefinition('percent', 'pct', (), ScaleConverter(1 / 100.0))) a = Q(5, 'pct') print a print a.to('dimensionless') </code></pre> <p>Output:</p> <pre><code>5 percent 0.05 dimensionless </code></pre>
3
2016-08-25T20:20:10Z
[ "python", "units-of-measurement", "pint" ]
Update a list every time a function within a class is executed with the function arguments and values
39,154,006
<p>In the last 6 months, I have created many classes and functions for different calculations. Now I would like to join them into a class so they can be used by other users. Many of the methods can be performed in sequence, and the sequence may need to be run multiple times for different sets of data. I would like to create a function within the main class that saves the procedure used by the user, so the user could run it again without going through the whole procedure again.</p> <p>Important: I would not like to change the classes that have already been implemented, but if this is the only way, no problem.</p> <p>The idea would be more or less like the code represented below:</p> <pre><code>class process(object): def __init__(self): self.procedure = [] def MethodA(valueA): pass def MethodB(valueB): pass def UpdateProcedure(self, method, arguments, values): # Execute this every time that MethodA or MethodB is executed. new = {'Method':method, 'Arguments':arguments, 'Values':values} self.procedure.append(new) </code></pre>
0
2016-08-25T20:13:45Z
39,154,730
<p>Here's how to use a decorator, per Moinuddin Quadri's suggestion:</p> <pre><code>def remember(func): def inner(*args, **kwargs): args[0].procedure.append({'method': func, 'args': args, 'kwargs': kwargs}) return func(*args, **kwargs) return inner class process(object): def __init__(self): self.procedure = [] @remember def MethodA(self, valueA): pass @remember def MethodB(self, valueB): pass </code></pre> <p>This makes it so that every time each of these decorated methods is called, it will append their arguments to that object's procedure array. This decorator will fail on non-class methods.</p> <p>Calling these commands:</p> <pre><code>p = process() p.MethodA(1) p.MethodB(2) p.MethodA(3) p.MethodB(4) print p.procedure </code></pre> <p>will result in the following if pretty printed:</p> <pre><code>[{'args': (&lt;__main__.process object at 0x7f25803d28d0&gt;, 1), 'kwargs': {}, 'method': &lt;function MethodA at 0x7f25803d4758&gt;}, {'args': (&lt;__main__.process object at 0x7f25803d28d0&gt;, 2), 'kwargs': {}, 'method': &lt;function MethodB at 0x7f25803d4848&gt;}, {'args': (&lt;__main__.process object at 0x7f25803d28d0&gt;, 3), 'kwargs': {}, 'method': &lt;function MethodA at 0x7f25803d4758&gt;}, {'args': (&lt;__main__.process object at 0x7f25803d28d0&gt;, 4), 'kwargs': {}, 'method': &lt;function MethodB at 0x7f25803d4848&gt;}] </code></pre> <p>where <code>&lt;__main__.process object at 0x7f25803d28d0&gt;</code> is the process object (<code>self</code>)</p> <p>Read more about decorators here: <a href="https://en.wikipedia.org/wiki/Python_syntax_and_semantics#Decorators" rel="nofollow">https://en.wikipedia.org/wiki/Python_syntax_and_semantics#Decorators</a></p>
1
2016-08-25T21:06:30Z
[ "python" ]
How to get matrix rows by indexes?
39,154,021
<p>Assuming that we have a matrix X and a target column y as follows:</p> <pre><code>import numpy as np X = np.ones([10,2]) for i in range(0,X.shape[0]): X[i][0] = i y = [0,1,2,1,0,0,1,2,3,3] </code></pre> <p>I want to get rows of X depending on the value of y. From the small example above:</p> <p>For y == 0, I want to get rows of X as:</p> <pre><code>[[0 1] [4 1] [5 1]] </code></pre> <p>For y == 3, I want to get rows of X as:</p> <pre><code>[[8 1] [9 1]] </code></pre> <p>And so on. How can I solve this problem?</p> <p>I also tried</p> <pre><code>print(X[y == 0][:]) </code></pre> <p>But it did not work.</p>
1
2016-08-25T20:14:44Z
39,154,166
<p>You were close, you just need <code>y</code> to be a numpy array instead of a list:</p> <pre><code>&gt;&gt;&gt; y = np.array(y) &gt;&gt;&gt; X[y==0] array([[0, 1], [4, 1], [5, 1]]) &gt;&gt;&gt; X[y==3] array([[8, 1], [9, 1]]) </code></pre>
0
2016-08-25T20:23:42Z
[ "python", "matrix", "indexing" ]
Pyspark: spark data frame column width configuration in Jupyter Notebook
39,154,093
<p>I have the following code in Jupyter Notebook:</p> <pre><code>import pandas as pd pd.set_option('display.max_colwidth', 80) my_df.select('field_1','field_2').show() </code></pre> <p>I want to increase the column width so I could see the full value of <code>field_1</code> and <code>field_2</code>. I know we can use <code>pd.set_option('display.max_colwidth', 80)</code> for pandas data frame, but it doesn't seem to work for spark data frame. </p> <p>Is there a way to increase the column width for the spark data frame like what we did for pandas data frame? Thanks!</p>
0
2016-08-25T20:19:09Z
39,154,175
<p>I don't think you can set a specific width, but this will ensure your data is not cut off, no matter the size:</p> <pre><code>my_df.select('field_1','field_2').show(10, truncate = False) </code></pre>
1
2016-08-25T20:24:15Z
[ "python", "apache-spark", "pyspark", "spark-dataframe", "jupyter-notebook" ]
Pyspark: spark data frame column width configuration in Jupyter Notebook
39,154,093
<p>I have the following code in Jupyter Notebook:</p> <pre><code>import pandas as pd pd.set_option('display.max_colwidth', 80) my_df.select('field_1','field_2').show() </code></pre> <p>I want to increase the column width so I could see the full value of <code>field_1</code> and <code>field_2</code>. I know we can use <code>pd.set_option('display.max_colwidth', 80)</code> for pandas data frame, but it doesn't seem to work for spark data frame. </p> <p>Is there a way to increase the column width for the spark data frame like what we did for pandas data frame? Thanks!</p>
0
2016-08-25T20:19:09Z
39,164,559
<p>This should give you what you want</p> <pre><code>import pandas as pd pd.set_option('display.max_colwidth', 80) my_df.select('field_1','field_2').limit(100).toPandas() </code></pre>
0
2016-08-26T10:52:10Z
[ "python", "apache-spark", "pyspark", "spark-dataframe", "jupyter-notebook" ]
Why does referencing a concatenated pandas dataframe return multiple entries?
39,154,230
<p>When I create a dataframe using concat like this:</p> <pre><code>import pandas as pd dfa = pd.DataFrame({'a':[1],'b':[2]}) dfb = pd.DataFrame({'a':[3],'b':[4]}) dfc = pd.concat([dfa,dfb]) </code></pre> <p>And I try to reference like I would for any other DataFrame I get the following result:</p> <pre><code>&gt;&gt;&gt; dfc['a'][0] 0 1 0 3 Name: a, dtype: int64 </code></pre> <p>I would expect my concatenated DataFrame to behave like a normal DataFrame and return the integer that I want like this simple DataFrame does:</p> <pre><code>&gt;&gt;&gt; dfa['a'][0] 1 </code></pre> <p>I am just a beginner, is there a simple explanation for why the same call is returning an entire DataFrame and not the single entry that I want? Or, even better, an easy way to get my concatenated DataFrame to respond like a normal DataFrame when I try to reference it? Or should I be using something other than concat?</p>
3
2016-08-25T20:28:43Z
39,154,289
<p>You've mistaken what normal behavior is. <code>dfc['a'][0]</code> is a label lookup and matches anything with an index value of <code>0</code> in which there are two because you concatenated two dataframes with index values including <code>0</code>.</p> <p>in order to specify position of <code>0</code></p> <pre><code>dfc['a'].iloc[0] </code></pre> <p>or you could have constructed <code>dfc</code> like</p> <pre><code>dfc = pd.concat([dfa,dfb], ignore_index=True) dfc['a'][0] </code></pre> <p>Both returning </p> <pre><code>1 </code></pre>
1
2016-08-25T20:33:03Z
[ "python", "pandas", "concat" ]
Why does referencing a concatenated pandas dataframe return multiple entries?
39,154,230
<p>When I create a dataframe using concat like this:</p> <pre><code>import pandas as pd dfa = pd.DataFrame({'a':[1],'b':[2]}) dfb = pd.DataFrame({'a':[3],'b':[4]}) dfc = pd.concat([dfa,dfb]) </code></pre> <p>And I try to reference like I would for any other DataFrame I get the following result:</p> <pre><code>&gt;&gt;&gt; dfc['a'][0] 0 1 0 3 Name: a, dtype: int64 </code></pre> <p>I would expect my concatenated DataFrame to behave like a normal DataFrame and return the integer that I want like this simple DataFrame does:</p> <pre><code>&gt;&gt;&gt; dfa['a'][0] 1 </code></pre> <p>I am just a beginner, is there a simple explanation for why the same call is returning an entire DataFrame and not the single entry that I want? Or, even better, an easy way to get my concatenated DataFrame to respond like a normal DataFrame when I try to reference it? Or should I be using something other than concat?</p>
3
2016-08-25T20:28:43Z
39,154,326
<p><strong>EDITED</strong> (thx <a href="http://stackoverflow.com/users/2336654/pirsquared">piRSquared</a>'s comment)</p> <p>Use <code>append()</code> instead <code>pd.concat()</code>:</p> <pre><code>dfc = dfa.append(dfb, ignore_index=True) dfc['a'][0] 1 </code></pre>
1
2016-08-25T20:35:39Z
[ "python", "pandas", "concat" ]
Connection reset by peer [Errno 104] in Python
39,154,241
<p>I have written a distributed program. Every node (Virtual machine) in the network sends data (through an outgoing connection) to and receives data (through an incoming connection) from every other node. Before sending data, all nodes have opened a socket to every other node (including the single source node). After a delay of 3 seconds the source starts sending a different file chunk to each of the other nodes in the network. Every node starts forwarding the receiving chunk after arrival of the first packet.</p> <p>The program finishes successfully multiple times without any error. But sometimes one random node resets the incoming connections (while still sending data through its outgoing connections).</p> <p>Each node has both n-2 sender threads and n-1 receiver threads.</p> <p>Sending Function:</p> <pre><code>def relaySegment_Parallel(self): connectionInfoList = [] seenSegments = [] readyServers = [] BUFFER_SIZE = Node.bufferSize while len(readyServers) &lt; self.connectingPeersNum-len(Node.sources) and self.isMainThreadActive(): #Data won't be relayed to the sources try: tempIp = None for ip in Node.IPAddresses: if ip not in readyServers and ip != self.ip and ip not in self.getSourcesIp(): tempIp = ip s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((ip, Node.dataPort)) connectionInfoList.append((s, ip)) readyServers.append(ip) if Node.debugLevel2Enable: print "RelayHandler: Outgoing connection established with IP: " + str(ip) except socket.error, v: errorcode = v[0] if errorcode == errno.ECONNRESET: print "(RelayHandler) Connection reset ! Node's IP: " + str(tempIp) if errorcode == errno.ECONNREFUSED: print "(RelayHandler) Node " + str(tempIp) + " are not ready yet!" continue except: print "Error: Cannot connect to IP: " + str (tempIp) continue print "(RelayHandler) Ready to relay data to " + str(len(readyServers)) + " numeber of servers."
try: pool = ThreadPool(processes = Node.threadPoolSize) while Node.terminateFlag == 0 and not self.isDistributionDone() and self.isMainThreadActive(): if len(self.toSendTupleList) &gt; 0: self.toSendLock.acquire() segmentNo, segmentSize, segmentStartingOffset, data = self.toSendTupleList.pop(0) self.toSendLock.release() if len(data) &gt; 0: if segmentNo not in seenSegments: #Type: 0 = From Sourece , 1 = From Rlayer #Sender Type/Segment No./Segment Size/Segment Starting Offset/ tempList = [] for s, ip in connectionInfoList: tempData = "1/" + str(self.fileSize) + "/" + str(segmentNo) + "/" + str(segmentSize) + "/" + str(segmentStartingOffset) + "/" tempList.append((s, ip, tempData)) pool.map(self.relayWorker, tempList) seenSegments.append(segmentNo) relayList = [] for s, ip in connectionInfoList: relayList.append((s, ip, data)) pool.map(self.relayWorker, relayList) for s, ip in connectionInfoList: s.shutdown(1)# 0:Further receives are disallowed -- 1: Further sends are disallow / sends -- 2: Further sends and receives are disallowed. s.close() pool.close() pool.join() except socket.error, v: errorcode=v[0] if errorcode==errno.ECONNREFUSED: print "(RelayHandler) Error: Connection Refused in RelaySegment function. It can not connect to: ", ip else: print "\n(RelayHandler) Error1 in relaying segments (Parallel) to ", ip, " !!! 
ErrorCode: ", errorcode traceback.print_exception(*sys.exc_info()) except: print "\n(RelayHandler) Error2 in relaying segments (Parallel) to ", ip traceback.print_exception(*sys.exc_info()) </code></pre> <p>Receiving Function:</p> <pre><code>def receiveDataHandler(self): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)# Allows us to resue the port immediately after termination of the program s.bind((self.ip, Node.dataPort)) s.listen(Node.MaxNumClientListenedTo) threadsList = [] fHandler = fileHandler(self.inFileAddr, Node.bufferSize) isStart = False executionTime = 0 connectedPeersSofar = 0 while (not self.connectingPeersNum == connectedPeersSofar) and self.isMainThreadActive() and Node.terminateFlag == 0 and not self.isReceptionDone(): conn, ipAddr = s.accept() thread_receiveData = Thread2(target = self.receiveData_Serial, args = (conn, ipAddr, fHandler)) thread_receiveData.start() if Node.debugLevel2Enable: print 'Receive Handler: New thread started for connection from address:', ipAddr connectedPeersSofar += 1 threadsList.append(thread_receiveData) if isStart == False: isStart = True print "(RecieiverHandeler) Receiver stops listening: Peers Num "+str(self.connectingPeersNum) +i " connected peers so far: " + str(connectedPeersSofar) for i in range(0, len(threadsList)): self.startTime = threadsList[i].join() if isStart: executionTime = float(time.time()) - float(self.startTime) else: print "\n\t No Start! Execution Time: --- 0 seconds ---" , "\n" s.shutdown(2)# 0:Further receives are disallowed -- 1: Further sends are disallow / sends -- 2: Further sends and receives are disallowed. 
s.close() return executionTime except socket.error, v: errorcode = v[0] if errorcode == 22: # 22: Invalid arument print "Error: Invalid argument in connection acceptance (receive data handler)" elif errorcode==errno.ECONNREFUSED: print "Error: Connection Refused in receive" else: print "Error1 in Data receive Handler !!! ErrorCode: ", errorcode traceback.print_exception(*sys.exc_info()) except: print "Error2 in Data receive Handler !!!" traceback.print_exception(*sys.exc_info()) </code></pre> <p>The Sending thread of all nodes prints that the node is connected to all other nodes (including the random malfunctioning node). However, the Receiving function of the random node waits on </p> <blockquote> <p>s.accept()</p> </blockquote> <p>and does not accept any connection but the connection from the single source which is the last one to connect. The random node just wait without raising any exception.</p> <p>It seems that </p> <blockquote> <p>s.listen()</p> </blockquote> <p>(TCP protocole) of the random node makes the senders think that they are connected, while </p> <blockquote> <p>s.accept()</p> </blockquote> <p>does not accept any one but the last one. Then, for some reason it resets the conneciton, and that is why others (senders) raise the "Connection reset by peer" exception when they try to send data. The only sender that finishes its job without any error is the sources which is the last one to connect.</p> <p>Error:</p> <pre><code>Traceback (most recent call last): File "/home/ubuntu/DCDataDistribution/Node.py", line 137, in relayWorker socketConn.sendall(data) File "/usr/lib/python2.7/socket.py", line 224, in meth return getattr(self._sock,name)(*args) error: [Errno 104] Connection reset by peer </code></pre> <p>Why is that happening?</p> <p>FYI: I am running my program on Amazon EC2 instances. The type of each instance is t2.micro (1 vCPU, 2.5 GHz, Intel Xeon Family (Up to 3.3 GHz) and, 1 GiB memory). 
The Ubuntu Server 14.04 LTS (HVM) is running on every instance. </p>
0
2016-08-25T20:29:34Z
39,164,848
<pre><code> for s, ip in connectionInfoList: s.shutdown(1)# 0:Further receives are disallowed -- 1: Further sends are disallow / sends -- 2: Further sends and receives are disallowed. s.close() pool.close() pool.join() </code></pre> <p>You <code>shutdown</code> the connections while some <code>relayWorker</code> thread in the <code>pool</code> may still be unfinished. Reverse the order:</p> <pre><code> pool.close() pool.join() for s, ip in connectionInfoList: s.close() </code></pre>
0
2016-08-26T11:06:35Z
[ "python", "sockets", "parallel-processing", "distributed-computing", "python-multithreading" ]
Python: LINQ-capable class supporting both laziness and fluent design
39,154,269
<p>How can I create a Python class that would be an iterable wrapper with LINQ-like methods (select, where, orderby, etc.) without using extension methods or monkey patching?</p> <p>That is, this LinqCapable class would be able to return its own type when relevant (i.e. fluent design) and support lazy evaluation.</p> <p>I'm just looking here for a snippet as a starting point.</p>
0
2016-08-25T20:31:37Z
39,156,138
<p>You should return a "Linq capable class" to achieve chaining and furthermore your implementation is not lazy: look at asq where method: it is based on the I filter and it appears correct... Anyway, here it is a very basic implementation based on my understanding of your question and comments</p> <pre><code>class LinqCapable(object): def __init__(self, iterable=None): self._iterable = iterable self._predicates = [] def where(self, predicate): chain = LinqCapable(self._iterable) chain._predicates = self._predicates chain._predicates.append(predicate) return chain def toArray(self): for item in self._iterable: isOk = True for predicate in self._predicates: if (not predicate(item)): isOk = False break if (isOk): yield item </code></pre> <h2>Usage</h2> <pre><code>test = LinqCapable([1,2,3]) def pred1(l: int) -&gt; bool: return l&gt;1 chain1 = test.where(pred1) def pred2(l: int) -&gt; bool: return l&lt;3 chain2 = chain1.where(pred2) list(chain2.toArray()) </code></pre> <h2>Edit (adding also a select method)</h2> <p>I've added also a simple select method. 
The objective here is to aggregate predicates and selectors when possible, to avoid inefficient nested loops.</p> <pre><code>class LinqCapable(object): def __init__(self, iterable=None, predicates = [], selectors = [], tree=None): self._iterable = iterable self._predicates = list(predicates) self._selectors = list(selectors) self._tree = tree def select(self, selector): if (len(self._predicates) == 0): chain = LinqCapable(self._iterable, [], self._selectors, self._tree) chain._selectors.append(selector) else: chain = LinqCapable(None, [], []) if (len(self._selectors) == 0): chain._tree = LinqCapable(self._iterable, self._predicates, [], self._tree) else: chain._tree = self chain._selectors.append(selector) return chain def where(self, predicate): chain = LinqCapable(self._iterable, self._predicates, self._selectors, self._tree) chain._predicates.append(predicate) return chain def enumerate(self): if (self._tree != None): self._iterable = list(self._tree.enumerate()) return self._cycle() def _cycle(self): for item in self._iterable: for selector in self._selectors: item = selector(item) isOk = True for predicate in self._predicates: if (not predicate(item)): isOk = False break if (isOk): yield item </code></pre> <p>so an example would be</p> <pre><code>test = LinqCapable([1,2, 20,200, 300]) def pred1(l: int) -&gt; bool: return l&gt;1 chain1 = test.where(pred1) def pred2(l: int) -&gt; bool: return l&lt;300 def sel1(l: int) -&gt; str: return str(l) def sel2(l: str) -&gt; str: return '&lt;' + l + '&gt;' def pred3(l: str) -&gt; bool: return len(l) &gt; 3 def sel3(l: str) -&gt; str: return l[1:-1] def sel4(l: str) -&gt; int: return int(l) chain2 = chain1.where(pred2).select(sel1).select(sel2).where(pred3).select(sel3).select(sel4) print(list(chain2.enumerate())) </code></pre>
2
2016-08-25T23:17:22Z
[ "python", "linq", "iterator", "generator" ]
Python: LINQ-capable class supporting both laziness and fluent design
39,154,269
<p>How can I create a Python class that would be an iterable wrapper with LINQ-like methods (select, where, orderby, etc.) without using extension methods or monkey patching?</p> <p>That is, this LinqCapable class would be able to return its own type when relevant (i.e. fluent design) and support lazy evaluation.</p> <p>I'm just looking here for a snippet as a starting point.</p>
0
2016-08-25T20:31:37Z
39,158,593
<p>It's not enough to just return a generator from the implemented linq methods; each one needs to return an instance of the wrapper so that additional calls can be chained.</p> <p>You can create a metaclass which can rewrap the linq implementations. So with this, you can just implement the methods you want to support and use some special decorators to ensure they remain chainable.</p> <pre><code>def linq(iterable):
    from functools import wraps

    def as_enumerable(f):
        f._enumerable = True
        return f

    class EnumerableMeta(type):
        def __new__(metacls, name, bases, namespace):
            cls = type.__new__(metacls, name, bases, namespace)

            def to_enumerable(f):
                @wraps(f)
                def _f(self, *args, **kwargs):
                    return cls(lambda: f(self, *args, **kwargs))
                return _f

            # rewrap every generator method marked with @as_enumerable so it
            # returns a chainable Enumerable instead of a bare generator
            for n, f in namespace.items():
                if hasattr(f, '_enumerable'):
                    setattr(cls, n, to_enumerable(f))
            return cls

    class Enumerable(metaclass=EnumerableMeta):
        def __init__(self, _iterable):
            self._iterable = _iterable

        def __iter__(self):
            return iter(self._iterable())

        @as_enumerable
        def intersect(self, second):
            yield from set(self._iterable()).intersection(second)

        @as_enumerable
        def select(self, selector):
            yield from map(selector, self._iterable())

        @as_enumerable
        def union(self, second):
            yield from set(self._iterable()).union(second)

        @as_enumerable
        def where(self, predicate):
            yield from filter(predicate, self._iterable())

        @as_enumerable
        def skip(self, count):
            yield from (x for i, x in enumerate(self._iterable()) if i &gt;= count)

        @as_enumerable
        def skip_while(self, predicate):
            it = iter(self._iterable())
            for x in it:
                if not predicate(x):
                    yield x
                    break
            yield from it

        @as_enumerable
        def take(self, count):
            yield from (x for i, x in enumerate(self._iterable()) if i &lt; count)

        @as_enumerable
        def take_while(self, predicate):
            for x in self._iterable():
                if not predicate(x):
                    break
                yield x

        @as_enumerable
        def zip(self, second, result_selector=lambda a, b: (a, b)):
            yield from map(lambda x: result_selector(*x),
                           zip(self._iterable(), second))

        def single(self, predicate=lambda _: True):
            has_result = False
            for x in self._iterable():
                if predicate(x):
                    if has_result:
                        raise TypeError('sequence contains more elements')
                    value = x
                    has_result = True
            if not has_result:
                raise TypeError('sequence contains no elements')
            return value

        def sum(self, selector=lambda x: x):
            return sum(map(selector, self._iterable()))

        def to_dict(self, key_selector, element_selector=lambda x: x):
            return {
                key_selector(x): element_selector(x)
                for x in self._iterable()
            }

        def to_list(self):
            return list(self._iterable())

    return Enumerable(lambda: iterable)
</code></pre> <p>So you'd be able to do things like this with any iterable sequence as you might do it in C#.</p> <pre><code># save a linq query
query = linq(range(100))

# even numbers as strings
evenstrs = query.where(lambda i: i%2 == 0).select(str)

# build a different result using the same query instance
odds = query.where(lambda i: i%2 != 0)
smallnums = query.where(lambda i: i &lt; 50)
</code></pre> <pre><code># dynamically build a query
query = linq(some_list_of_objects)
if some_condition:
    query = query.where(some_predicate)
if some_other_condition:
    query = query.where(some_other_predicate)
result = query.to_list()
</code></pre>
1
2016-08-26T05:03:29Z
[ "python", "linq", "iterator", "generator" ]
Using Pandas str.contains to compare row-by-row
39,154,306
<p>I have set up the following very simple database to illustrate what I'm trying to do:</p> <pre><code>teams = pd.DataFrame({"spreads": ['New England Patriots -7.0',
                                  'Atlanta Falcons 2.5',
                                  'New Orleans Saints -4.5']})
teams['home'] = ['New England Patriots','Carolina Panthers','New Orleans Saints']
teams['away'] = ['Miami Dolphins','Atlanta Falcons','Tampa Bay Buccaneers']
</code></pre> <p>I'm essentially trying to extract the spread value. At first I was trying to use str.contains to first extract the team name thus separating out the numeric value but it seems that I can't use that as a comparison tool for a row-by-row analysis. If anyone has any tips for how to extract the numeric value (I don't think I can use a regex because there are cases where no '-' sign appears) or at the very least tell me what methodology to use to determine if the team listed for each row is the home or away team I would greatly appreciate it.</p>
0
2016-08-25T20:34:04Z
39,154,468
<p>Use <code>.str.extract</code></p> <pre><code>teams.spreads.str.extract(r'(-?\d+\.?\d*)', expand=False)

0    -7.0
1     2.5
2    -4.5
Name: spreads, dtype: object
</code></pre> <p>Fancier</p> <pre><code>teams.spreads.str.extract(r'(?P&lt;spread_val&gt;-?\d+\.?\d*)', expand=True)
</code></pre> <p><a href="http://i.stack.imgur.com/LRIg5.png" rel="nofollow"><img src="http://i.stack.imgur.com/LRIg5.png" alt="enter image description here"></a></p>
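One follow-up worth noting (my addition, not part of the answer above): `str.extract` returns strings (`dtype: object`), so to compare or do arithmetic on the spreads you can cast the extracted values with `astype(float)`. The column name `spread_val` below is illustrative:

```python
import pandas as pd

teams = pd.DataFrame({"spreads": ['New England Patriots -7.0',
                                  'Atlanta Falcons 2.5',
                                  'New Orleans Saints -4.5']})

# extract the (optionally signed) number, then cast the strings to floats
teams['spread_val'] = (teams['spreads']
                       .str.extract(r'(-?\d+\.?\d*)', expand=False)
                       .astype(float))

print(teams['spread_val'].tolist())  # [-7.0, 2.5, -4.5]
```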
2
2016-08-25T20:46:19Z
[ "python", "pandas" ]
Using Pandas str.contains to compare row-by-row
39,154,306
<p>I have set up the following very simple database to illustrate what I'm trying to do:</p> <pre><code>teams = pd.DataFrame({"spreads": ['New England Patriots -7.0',
                                  'Atlanta Falcons 2.5',
                                  'New Orleans Saints -4.5']})
teams['home'] = ['New England Patriots','Carolina Panthers','New Orleans Saints']
teams['away'] = ['Miami Dolphins','Atlanta Falcons','Tampa Bay Buccaneers']
</code></pre> <p>I'm essentially trying to extract the spread value. At first I was trying to use str.contains to first extract the team name thus separating out the numeric value but it seems that I can't use that as a comparison tool for a row-by-row analysis. If anyone has any tips for how to extract the numeric value (I don't think I can use a regex because there are cases where no '-' sign appears) or at the very least tell me what methodology to use to determine if the team listed for each row is the home or away team I would greatly appreciate it.</p>
0
2016-08-25T20:34:04Z
39,156,900
<p>Try this <a href="http://pandas.pydata.org/pandas-docs/stable/text.html#splitting-and-replacing-strings" rel="nofollow">Splitting Strings</a>:</p> <pre><code>teams['spreads_val'] = teams['spreads'].str.rsplit(" ").str.get(-1)

0    -7.0
1     2.5
2    -4.5
Name: spreads_val, dtype: object
</code></pre>
1
2016-08-26T01:00:46Z
[ "python", "pandas" ]
Pyspark: show histogram of a data frame column
39,154,325
<p>In pandas data frame, I am using the following code to plot histogram of a column:</p> <pre><code>my_df.hist(column = 'field_1') </code></pre> <p>Is there something that can achieve the same goal in pyspark data frame? (I am in Jupyter Notebook) Thanks!</p>
3
2016-08-25T20:35:24Z
39,891,776
<p>Unfortunately I don't think that there's a clean <code>plot()</code> or <code>hist()</code> function in the PySpark Dataframes API, but I'm hoping that things will eventually go in that direction.</p> <p>For the time being, you could compute the histogram in Spark, and plot the computed histogram as a bar chart. Example:</p> <pre><code>import pandas as pd
import pyspark.sql as sparksql

# Let's use UCLA's college admission dataset
file_name = "http://www.ats.ucla.edu/stat/data/binary.csv"

# Creating a pandas dataframe from Sample Data
pandas_df = pd.read_csv(file_name)

sql_context = sparksql.SQLContext(sc)

# Creating a Spark DataFrame from a pandas dataframe
spark_df = sql_context.createDataFrame(pandas_df)

spark_df.show(5)
</code></pre> <p>This is what the data looks like:</p> <pre><code>Out[]:
+-----+---+----+----+
|admit|gre| gpa|rank|
+-----+---+----+----+
|    0|380|3.61|   3|
|    1|660|3.67|   3|
|    1|800| 4.0|   1|
|    1|640|3.19|   4|
|    0|520|2.93|   4|
+-----+---+----+----+
only showing top 5 rows

# This is what we want
pandas_df.hist('gre');
</code></pre> <p><a href="http://i.stack.imgur.com/vdVfB.png" rel="nofollow">Histogram when plotted in using df.hist()</a></p> <pre><code># Doing the heavy lifting in Spark. We could leverage the `histogram`
# function from the RDD api
gre_histogram = spark_df.select('gre').rdd.flatMap(lambda x: x).histogram(11)

# Loading the Computed Histogram into a Pandas Dataframe for plotting
pd.DataFrame(
    list(zip(list(gre_histogram)[0], list(gre_histogram)[1])),
    columns=['bin', 'frequency']
).set_index('bin').plot(kind='bar');
</code></pre> <p><a href="http://i.stack.imgur.com/AMYkM.png" rel="nofollow">Histogram computed by using RDD.histogram()</a></p>
0
2016-10-06T08:58:39Z
[ "python", "pyspark", "spark-dataframe", "jupyter-notebook" ]
Two Sum. I use a nested loop but it seems not work
39,154,367
<pre><code>class Solution(object):
    def twoSum(self,num,target):
        #nested for loop, but the result shows only the inner loop works
        for i in range(0,len(num)-1,1):
            for j in range(i+1,len(num),1):
                target=target-num[i]-num[j]
                if target!=0:
                    target=target+num[i]+num[j]
                else:
                    break
            return i,j

Solution().twoSum([0,1,1,3,0],4)
</code></pre> <p>Output: (0, 4). The first number never changes and is always zero, so I think it may be a problem with the outer loop, but why, and what can I do to make it work?</p>
0
2016-08-25T20:39:01Z
39,173,428
<p>Since you don't say what your goal is, I guess you're looking for a solution something like this?<br /></p> <pre><code>class Solution(object):
    def twoSum(self, num, target):
        ans = []
        for i in range(len(num) - 1):
            for j in range(i + 1, len(num)):
                s = num[i] + num[j]
                if s == target:
                    ans.append((i, j))
        return ans

print Solution().twoSum([0, 1, 1, 3, 0], 4)
</code></pre> <p>People would really just feel confused if you just ask this way.</p>
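As a side note, the nested loop above is O(n^2). A common faster variant (my own sketch, not part of the answer above; shown in Python 3 syntax) records seen values in a dict and finds all matching pairs in a single pass:

```python
def two_sum_pairs(nums, target):
    """Return all index pairs (i, j), i < j, with nums[i] + nums[j] == target."""
    seen = {}    # value -> list of indices where that value has occurred
    pairs = []
    for j, x in enumerate(nums):
        # every earlier index holding the complement forms a pair with j
        for i in seen.get(target - x, []):
            pairs.append((i, j))
        seen.setdefault(x, []).append(j)
    return pairs

print(two_sum_pairs([0, 1, 1, 3, 0], 4))  # [(1, 3), (2, 3)]
```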
1
2016-08-26T19:24:39Z
[ "python", null, "output" ]
Scraping all text using Scrapy without knowing webpages' structure
39,154,393
<p>I am conducting research which relates to distributing the indexing of the internet.</p> <p>While several such projects exist (IRLbot, Distributed-indexing, Cluster-Scrapy, Common-Crawl etc.), mine is more focused on incentivising such behavior. I am looking for a simple way to crawl real webpages without knowing anything about their URL or HTML structure and:</p> <ol> <li>extract all their text (in order to index it)</li> <li>Collect all their URLs and add them to the URLs to crawl</li> <li>Prevent crashing and elegantly continuing (even without the scraped text) in case of malformed webpage</li> </ol> <p>To clarify - this is only for Proof of Concept (PoC), so I don't mind it won't scale, it's slow, etc. I am aiming at scraping most of the text which is presented to the user, in most cases, with or without dynamic content, and with as little "garbage" such as functions, tags, keywords etc. A working simple partial solution which works out of the box is preferred over the perfect solution which requires a lot of expertise to deploy.</p> <p>A secondary issue is the storing of the (url, extracted text) for indexing (by a different process?), but I think I will be able to figure it out myself with some more digging.</p> <p>Any advice on how to augment "itsy"'s parse function will be highly appreciated!</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>import scrapy
from scrapy_1.tutorial.items import WebsiteItem


class FirstSpider(scrapy.Spider):
    name = 'itsy'
    # allowed_domains = ['dmoz.org']
    start_urls = \
    [
        "http://www.stackoverflow.com"
    ]

    # def parse(self, response):
    #     filename = response.url.split("/")[-2] + '.html'
    #     with open(filename, 'wb') as f:
    #         f.write(response.body)

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            item = WebsiteItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['body_text'] = sel.xpath('text()').extract()
            yield item</code></pre> </div> </div> </p>
2
2016-08-25T20:41:02Z
39,162,546
<p>What you are looking for here is scrapy <a href="http://doc.scrapy.org/en/latest/topics/spiders.html#scrapy.spiders.CrawlSpider" rel="nofollow">CrawlSpider</a></p> <p>CrawlSpider lets you define crawling rules that are followed for every page. It's smart enough to avoid crawling images, documents and other files that are not web resources, and it pretty much does the whole thing for you.</p> <p>Here's a good example of how your spider might look with CrawlSpider:</p> <pre><code>from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class MySpider(CrawlSpider):
    name = 'crawlspider'
    start_urls = ['http://scrapy.org']

    rules = (
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = dict()
        item['url'] = response.url
        item['title'] = response.meta['link_text']
        # extracting basic body
        item['body'] = '\n'.join(response.xpath('//text()').extract())
        # or better just save whole source
        item['source'] = response.body
        return item
</code></pre> <p>This spider will crawl every webpage it can find on the website and log the title, url and whole text body.<br> For the text body you might want to extract it in some smarter way (to exclude javascript and other unwanted text nodes), but that's an issue on its own to discuss. Actually for what you are describing you probably want to save the full html source rather than text only, since unstructured text is useless for any sort of analytics or indexing.</p> <p>There's also a bunch of scrapy settings that can be adjusted for this type of crawling. It's very nicely described in the <a href="http://doc.scrapy.org/en/latest/topics/broad-crawls.html" rel="nofollow">Broad Crawl docs page</a></p>
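On the "extract it in some smarter way" point, here is one illustration (my own sketch, not part of the answer above, and stdlib-only rather than Scrapy) of collecting visible text while dropping the contents of script and style elements:

```python
from html.parser import HTMLParser


class VisibleTextParser(HTMLParser):
    """Collect text nodes, skipping the contents of <script> and <style>."""
    SKIP = {'script', 'style'}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0   # > 0 while inside a script/style element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())


page = ('<html><head><style>p {color: red}</style></head>'
        '<body><p>Hello</p><script>var x = 1;</script><p>world</p></body></html>')
parser = VisibleTextParser()
parser.feed(page)
print(' '.join(parser.chunks))  # Hello world
```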
3
2016-08-26T09:08:40Z
[ "python", "web-scraping", "scrapy" ]
Python cant handle exceptions from zipfile.BadZipFile
39,154,470
<p>I need to handle the case where a zip file is corrupt, so the script just passes over that file and can go on to the next.</p> <p>In the code example underneath I'm trying to catch the exception, so I can pass it. But my script fails when the zipfile is corrupt and gives me the "normal" traceback errors instead of printing "my error", though it runs OK if the zipfile is fine.</p> <p>This is a minimal example of the code I'm dealing with.</p> <pre><code>path = "path to zipfile"

from zipfile import ZipFile

with ZipFile(path) as zf:
    try:
        print "zipfile is OK"
    except BadZipfile:
        print "Does not work "
        pass
</code></pre> <p>Part of the traceback tells me: raise BadZipfile, "File is not a zip file"</p>
-1
2016-08-25T20:46:36Z
39,154,621
<p>You need to put your context manager <em>inside</em> the <code>try-except</code> block:</p> <pre><code>try:
    with ZipFile(path) as zf:
        print "zipfile is OK"
except BadZipfile:
    print "Does not work "
</code></pre> <p>The error is <em>raised by</em> <code>ZipFile</code> so placing it outside means no handler can be found for the raised exception. In addition make sure you appropriately import <code>BadZipfile</code> from <code>zipfile</code>.</p>
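To extend this to the multi-file case the question describes (skip the corrupt archive, move on to the next), a small self-contained sketch is shown below. It is written for Python 3, where the exception is spelled `BadZipFile`; in Python 2.7 use `zipfile.BadZipfile` as in the answer above. The file names here are made up for the demonstration:

```python
import os
import tempfile
import zipfile

# build one valid zip and one corrupt "zip" for demonstration
tmpdir = tempfile.mkdtemp()
good = os.path.join(tmpdir, 'good.zip')
with zipfile.ZipFile(good, 'w') as zf:
    zf.writestr('hello.txt', 'hello')
bad = os.path.join(tmpdir, 'bad.zip')
with open(bad, 'wb') as f:
    f.write(b'this is not a zip archive')

ok = []
for path in (good, bad):
    try:
        with zipfile.ZipFile(path) as zf:
            ok.append(path)
    except zipfile.BadZipFile:   # spelled BadZipfile in Python 2
        continue                 # skip the corrupt file, go on to the next

print(ok == [good])  # True
```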
1
2016-08-25T20:57:59Z
[ "python", "python-2.7", "exception-handling", "zipfile" ]
How can i use e and power operation in python 2.7
39,154,611
<p>How can I write <code>x.append(1-e^(-value1^2/2*value2^2))</code> in Python 2.7?</p> <p>I don't know how to use the power operator or e.</p>
0
2016-08-25T20:57:19Z
39,154,642
<p>Power is <code>**</code> and <code>e^</code> is <code>math.exp</code>:</p> <pre><code>import math

x.append(1 - math.exp(-0.5 * (value1*value2)**2))
</code></pre>
3
2016-08-25T20:59:25Z
[ "python", "math", "equation", "exp" ]
How can i use e and power operation in python 2.7
39,154,611
<p>How can I write <code>x.append(1-e^(-value1^2/2*value2^2))</code> in Python 2.7?</p> <p>I don't know how to use the power operator or e.</p>
0
2016-08-25T20:57:19Z
39,154,643
<p>Python's power operator is <code>**</code> and Euler's number is <code>math.e</code>, so:</p> <pre><code>from math import e

x.append(1 - e**(-value1**2/2*value2**2))
</code></pre>
0
2016-08-25T20:59:31Z
[ "python", "math", "equation", "exp" ]
How can i use e and power operation in python 2.7
39,154,611
<p>How can I write <code>x.append(1-e^(-value1^2/2*value2^2))</code> in Python 2.7?</p> <p>I don't know how to use the power operator or e.</p>
0
2016-08-25T20:57:19Z
39,154,677
<p>Refer to the <a href="https://docs.python.org/2/library/math.html" rel="nofollow">math</a> library of Python. The <code>exp(x)</code> function in this library is the same as <code>e^x</code>. Hence you may write your code as follows.</p> <p>I have modified the equation by replacing <code>1/2</code> with <code>0.5</code>. Otherwise we would have to explicitly cast the result of the division to <code>float</code>, because Python 2 truncates the result of dividing two <code>int</code>s (for example: <code>1/2 -&gt; 0</code> in Python 2).</p> <pre><code>import math

x.append(1 - math.exp(-0.5 * (value1*value2)**2))
</code></pre>
0
2016-08-25T21:02:01Z
[ "python", "math", "equation", "exp" ]
Use of hdf5 library in Python with ctypes
39,154,663
<p>I would like to use the <code>hdf5</code> library directly from <code>Python</code> with <code>ctypes</code>. I know that <code>h5py</code> and <code>PyTables</code> do the job perfectly. The reason I want to do this: I need to work with <code>hdf5</code> files with a <code>Python</code> interpreter where I can not install any package.</p> <p>I am looking for an example that creates a file and writes a list of doubles.</p> <p>So far, I have written</p> <pre><code>from ctypes import *

hdf5Lib = r'/usr/local/lib/libhdf5.dylib'
lib = cdll.LoadLibrary(hdf5Lib)

major = c_uint()
minor = c_uint()
release = c_uint()
lib.H5get_libversion(byref(major), byref(minor), byref(release))

H5Fopen = lib.H5Fopen
...
</code></pre> <p>I don't know how to call H5Fopen. Should I use <code>H5Fopen.argtypes</code>? Any advice on how to open the hdf5 file, create a dataset of doubles, write the data and close the file is welcome.</p>
0
2016-08-25T21:01:28Z
39,158,110
<p>You shouldn't need to define argtypes as <code>H5Fopen</code> doesn't appear to take arguments. Just call it like a regular function:</p> <pre><code>herr_t = lib.H5Fopen()
</code></pre> <p><strong>Edit:</strong> for the version with args try:</p> <pre><code>lib.H5Fopen.restype = c_int
lib.H5Fopen.argtypes = (c_char_p, c_uint, c_int)

herr_t = lib.H5Fopen(name, flags, fapl_id)
</code></pre> <p>Just pass a string for the name and ints for the other two.</p>
1
2016-08-26T04:04:06Z
[ "python", "ctypes", "hdf5" ]
Use of hdf5 library in Python with ctypes
39,154,663
<p>I would like to use the <code>hdf5</code> library directly from <code>Python</code> with <code>ctypes</code>. I know that <code>h5py</code> and <code>PyTables</code> do the job perfectly. The reason I want to do this: I need to work with <code>hdf5</code> files with a <code>Python</code> interpreter where I can not install any package.</p> <p>I am looking for an example that creates a file and writes a list of doubles.</p> <p>So far, I have written</p> <pre><code>from ctypes import *

hdf5Lib = r'/usr/local/lib/libhdf5.dylib'
lib = cdll.LoadLibrary(hdf5Lib)

major = c_uint()
minor = c_uint()
release = c_uint()
lib.H5get_libversion(byref(major), byref(minor), byref(release))

H5Fopen = lib.H5Fopen
...
</code></pre> <p>I don't know how to call H5Fopen. Should I use <code>H5Fopen.argtypes</code>? Any advice on how to open the hdf5 file, create a dataset of doubles, write the data and close the file is welcome.</p>
0
2016-08-25T21:01:28Z
39,167,930
<p>I haven't (yet) done any Hdf5 programming, but looking through <a href="https://www.hdfgroup.org/HDF5/doc/RM/RM_H5F.html#File-Open" rel="nofollow">the documentation</a>, I see a few things to tell you.</p> <p>What arguments you would pass to H5Fopen: the name is the filename you want to open. flags will be either H5F_ACC_RDWR (1) or H5F_ACC_RDONLY (0). These flags are mutually exclusive; don't OR them together. The second int is the identifier for the file access properties list. You can set it to H5P_DEFAULT (0) for basic cases.</p> <p>Here's the important thing from the documentation, though:</p> <blockquote> <p><strong>Note that H5Fopen does not create a file if it does not already exist; see H5Fcreate.</strong></p> </blockquote> <p>From what you described, you don't want to use H5Fopen. You want to use H5Fcreate.</p> <p>For H5Fcreate, here's the signature</p> <pre><code>hid_t H5Fcreate( const char *name, unsigned flags, hid_t fcpl_id, hid_t fapl_id )
</code></pre> <p>Name is the filename. fcpl_id and fapl_id can both be H5P_DEFAULT, i.e. 0. Your flags in this case are H5F_ACC_TRUNC (2) to overwrite an existing file or H5F_ACC_EXCL (4) to cause an error when you try to create an existing file.</p> <p>I'm assuming you have access to find out what the constant values I listed are, but if you don't, you can look them up as I did, at <a href="http://stainless-steel.github.io/hdf5/hdf5_sys/constant.H5F_ACC_EXCL.html" rel="nofollow">http://stainless-steel.github.io/hdf5/hdf5_sys/constant.H5F_ACC_EXCL.html</a>.</p> <p>This information, combined with @101's example of how to invoke the functions, should give you enough to make your initial file.</p>
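Tying the above together, here is a minimal ctypes sketch of creating and closing a file with H5Fcreate / H5Fclose. This is my own illustration, not part of the answer: the constant values and the width of hid_t (a 64-bit signed int in HDF5 1.10+, a 32-bit int in older releases) are assumptions that should be verified against your installed H5Fpublic.h / H5Ppublic.h, and the library path is the one from the question.

```python
from ctypes import cdll, c_char_p, c_uint, c_int, c_int64

# Constants as found in the HDF5 public headers; verify against your version.
H5F_ACC_TRUNC = 2   # overwrite an existing file
H5F_ACC_EXCL = 4    # fail if the file already exists
H5P_DEFAULT = 0     # default property list

lib = cdll.LoadLibrary(r'/usr/local/lib/libhdf5.dylib')

# hid_t: assumed 64-bit here (HDF5 1.10+); use c_int for older releases
hid_t = c_int64

lib.H5Fcreate.restype = hid_t
lib.H5Fcreate.argtypes = (c_char_p, c_uint, hid_t, hid_t)
lib.H5Fclose.restype = c_int
lib.H5Fclose.argtypes = (hid_t,)

file_id = lib.H5Fcreate(b'test.h5', H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT)
if file_id < 0:          # negative return values signal an error in HDF5
    raise IOError('H5Fcreate failed')
# ... create a dataspace/dataset of doubles here (H5Screate_simple,
# H5Dcreate2, H5Dwrite) before closing ...
lib.H5Fclose(file_id)
```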
1
2016-08-26T13:49:11Z
[ "python", "ctypes", "hdf5" ]