| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Cleaner method for finding the shortest distance between points in a python list?
| 39,396,310
|
<p>I have a list of tuples and an individual point in python e.g. [(1,2) , (2,5), (6,7), (9,3)] and (2,1), and I want to figure out the fastest path possible created by all combinations of the individual point to the list of points. (Basically I want to find the most efficient way to get to all of the points starting from (2,1).) I have a manhattanDistance function that can take in 2 points and output the distance. However, my algorithm is giving me inconsistent answers (the heuristic is off for some reason).</p>
<p>What would be the correct way to accomplish this?</p>
<p>Here is my previous algorithm:</p>
<pre><code>def bestPath(currentPoint,goalList):
sum = 0
bestList = []
while len(goallist) > 0:
for point in list:
bestList.append((manhattanD(point,currentPoint),point))
bestTup = min(bestList)
bestList = []
dist = bestTup[0]
newP = bestTup[1]
currentPoint = newP
sum += dist
return sum
</code></pre>
| 2
|
2016-09-08T16:44:04Z
| 39,396,555
|
<p>Try to generate all combinations, then check which one gives the shortest total distance.</p>
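<p>A minimal sketch of that idea (not part of the original answer), assuming the asker's <code>manhattanDistance</code> helper is available:</p>
<pre><code>import itertools

def total_length(start, order):
    # sum the Manhattan distances along one visiting order
    length, previous = 0, start
    for point in order:
        length += manhattanDistance(previous, point)
        previous = point
    return length

def shortest_route(start, points):
    # brute force: try every permutation and keep the cheapest one
    return min(itertools.permutations(points),
               key=lambda order: total_length(start, order))
</code></pre>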
| -1
|
2016-09-08T17:00:05Z
|
[
"python",
"for-loop",
"while-loop",
"distance",
"heuristics"
] |
Cleaner method for finding the shortest distance between points in a python list?
| 39,396,310
|
<p>I have a list of tuples and an individual point in python e.g. [(1,2) , (2,5), (6,7), (9,3)] and (2,1), and I want to figure out the fastest path possible created by all combinations of the individual point to the list of points. (Basically I want to find the most efficient way to get to all of the points starting from (2,1).) I have a manhattanDistance function that can take in 2 points and output the distance. However, my algorithm is giving me inconsistent answers (the heuristic is off for some reason).</p>
<p>What would be the correct way to accomplish this?</p>
<p>Here is my previous algorithm:</p>
<pre><code>def bestPath(currentPoint,goalList):
sum = 0
bestList = []
while len(goallist) > 0:
for point in list:
bestList.append((manhattanD(point,currentPoint),point))
bestTup = min(bestList)
bestList = []
dist = bestTup[0]
newP = bestTup[1]
currentPoint = newP
sum += dist
return sum
</code></pre>
| 2
|
2016-09-08T16:44:04Z
| 39,396,903
|
<p>Since you don't have that many points, you can easily use a solution that tries every possibility.</p>
<p>Here is what you can do:</p>
<p>First get all combinations:</p>
<pre><code>>>> list_of_points = [(1,2) , (2,5), (6,7), (9,3)]
>>> list(itertools.permutations(list_of_points))
[((1, 2), (2, 5), (6, 7), (9, 3)),
((1, 2), (2, 5), (9, 3), (6, 7)),
((1, 2), (6, 7), (2, 5), (9, 3)),
((1, 2), (6, 7), (9, 3), (2, 5)),
((1, 2), (9, 3), (2, 5), (6, 7)),
((1, 2), (9, 3), (6, 7), (2, 5)),
((2, 5), (1, 2), (6, 7), (9, 3)),
((2, 5), (1, 2), (9, 3), (6, 7)),
((2, 5), (6, 7), (1, 2), (9, 3)),
((2, 5), (6, 7), (9, 3), (1, 2)),
((2, 5), (9, 3), (1, 2), (6, 7)),
((2, 5), (9, 3), (6, 7), (1, 2)),
((6, 7), (1, 2), (2, 5), (9, 3)),
((6, 7), (1, 2), (9, 3), (2, 5)),
((6, 7), (2, 5), (1, 2), (9, 3)),
((6, 7), (2, 5), (9, 3), (1, 2)),
((6, 7), (9, 3), (1, 2), (2, 5)),
((6, 7), (9, 3), (2, 5), (1, 2)),
((9, 3), (1, 2), (2, 5), (6, 7)),
((9, 3), (1, 2), (6, 7), (2, 5)),
((9, 3), (2, 5), (1, 2), (6, 7)),
((9, 3), (2, 5), (6, 7), (1, 2)),
((9, 3), (6, 7), (1, 2), (2, 5)),
((9, 3), (6, 7), (2, 5), (1, 2))]
</code></pre>
<p>Then create a function that give you the length of a combination:</p>
<pre><code>def combination_length(start_point, combination):
    length = 0
    previous = start_point
    for elem in combination:
        length += manhattanDistance(previous, elem)
        previous = elem  # step forward to the point just visited
    return length
</code></pre>
<p>Finally a function that tests every possibility:</p>
<pre><code>def get_shortest_path(start_point, list_of_points):
    min_length = sys.maxint
    combination_min = None
    list_of_combinations = list(itertools.permutations(list_of_points))
    for combination in list_of_combinations:
        length = combination_length(start_point, combination)
        if length < min_length:
            min_length = length
            combination_min = combination
    return combination_min
</code></pre>
<p>Then finally you can have:</p>
<pre><code>import sys, itertools

def combination_length(start_point, combination):
    length = 0
    previous = start_point
    for elem in combination:
        length += manhattanDistance(previous, elem)
        previous = elem
    return length

def get_shortest_path(start_point, list_of_points):
    min_length = sys.maxint
    combination_min = None
    list_of_combinations = list(itertools.permutations(list_of_points))
    for combination in list_of_combinations:
        length = combination_length(start_point, combination)
        if length < min_length:
            min_length = length
            combination_min = combination
    return combination_min

list_of_points = [(1,2) , (2,5), (6,7), (9,3)]
print get_shortest_path((2,1), list_of_points)
</code></pre>
| 2
|
2016-09-08T17:23:58Z
|
[
"python",
"for-loop",
"while-loop",
"distance",
"heuristics"
] |
Cleaner method for finding the shortest distance between points in a python list?
| 39,396,310
|
<p>I have a list of tuples and an individual point in python e.g. [(1,2) , (2,5), (6,7), (9,3)] and (2,1), and I want to figure out the fastest path possible created by all combinations of the individual point to the list of points. (Basically I want to find the most efficient way to get to all of the points starting from (2,1).) I have a manhattanDistance function that can take in 2 points and output the distance. However, my algorithm is giving me inconsistent answers (the heuristic is off for some reason).</p>
<p>What would be the correct way to accomplish this?</p>
<p>Here is my previous algorithm:</p>
<pre><code>def bestPath(currentPoint,goalList):
sum = 0
bestList = []
while len(goallist) > 0:
for point in list:
bestList.append((manhattanD(point,currentPoint),point))
bestTup = min(bestList)
bestList = []
dist = bestTup[0]
newP = bestTup[1]
currentPoint = newP
sum += dist
return sum
</code></pre>
| 2
|
2016-09-08T16:44:04Z
| 39,397,687
|
<p>If this is anything like the traveling salesman problem, then you want to check out the <a href="https://networkx.github.io/" rel="nofollow">NetworkX</a> python module. </p>
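<p>A rough sketch of that approach (my own addition, assuming a recent NetworkX release that ships <code>traveling_salesman_problem</code> in its approximation package; check the docs for your installed version):</p>
<pre><code>import itertools
import networkx as nx
from networkx.algorithms.approximation import traveling_salesman_problem

points = [(2, 1), (1, 2), (2, 5), (6, 7), (9, 3)]

# complete graph weighted by Manhattan distance
G = nx.Graph()
for a, b in itertools.combinations(points, 2):
    G.add_edge(a, b, weight=abs(a[0] - b[0]) + abs(a[1] - b[1]))

# approximate open tour (cycle=False means we do not return to the start)
route = traveling_salesman_problem(G, cycle=False)
print(route)
</code></pre>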
| 0
|
2016-09-08T18:17:43Z
|
[
"python",
"for-loop",
"while-loop",
"distance",
"heuristics"
] |
Making a Twitter clone in Django, having trouble with displaying the right user when displaying tweets
| 39,396,328
|
<p>I have user registration. Whenever I log in as a certain user, all the tweets are said to be tweeted by that user, even if they weren't.</p>
<p><strong>forms.py</strong></p>
<pre><code>from django.contrib.auth.models import User
from django import forms
class UserForm(forms.ModelForm):
password = forms.CharField(widget=forms.PasswordInput)
class Meta:
model = User
fields = ['username', 'password', 'email']
</code></pre>
<p><strong>models.py</strong></p>
<pre><code>from django.db import models
from django.core.urlresolvers import reverse
from django.utils import timezone
from django.contrib.auth.models import User
class Howl(models.Model):
author = models.ForeignKey('auth.user', null=True)
content = models.CharField(max_length=150)
published_date = models.DateTimeField(default=timezone.now)
like_count = models.IntegerField(default=0)
rehowl_count = models.IntegerField(default=0)
def get_absolute_url(self):
return reverse('howl:index')
def __str__(self):
return self.content
</code></pre>
<p><strong>index.html</strong></p>
<pre><code>{% block content %}
<h2>Index.html timeline!</h2>
{% for howl in howls %}
<div class="howl">
<h2><strong>{{user.username}}</strong></h2>
<p class="lead">{{howl.content}} - {{howl.published_date}}</p>
<span>Rehowls: {{howl.rehowl_count}}, Likes: {{howl.like_count}}</span>
</div><!--howl-->
{% endfor%}
{% endblock %}
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>from django.shortcuts import render, redirect
from django.views import generic
from django.views.generic import View
from django.views.generic.edit import CreateView, UpdateView, DeleteView
from .models import Howl
from .forms import UserForm
from django.contrib.auth import authenticate, login
class IndexView(generic.ListView):
template_name = 'howl/index.html'
context_object_name = 'howls'
def get_queryset(self):
return Howl.objects.all()
class HowlCreate(CreateView):
model = Howl
fields = ['content']
class UserFormView(View):
form_class = UserForm
template_name = 'howl/registration_form.html'
def get(self, request):
form = self.form_class(None)
return render(request, self.template_name, {'form': form})
def post(self, request):
form = self.form_class(request.POST)
if form.is_valid():
user = form.save(commit=False)
username = form.cleaned_data['username']
password = form.cleaned_data['password']
user.set_password(password)
user.save()
user = authenticate(username=username, password=password)
if user is not None:
if user.is_active:
login(request, user)
return redirect('howl:index')
</code></pre>
<p>I know that using user.username in index.html is the problem. It shows the current user logged in as the author of these tweets. How do I make it so that it displays the rightful owner of a tweet? </p>
| -1
|
2016-09-08T16:44:55Z
| 39,396,363
|
<p>In your template, you are showing the logged-in user <code>{{user.username}}</code> instead of <code>{{howl.author.username}}</code>.</p>
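<p>For example, the loop from index.html in the question would become (only the header line changes):</p>
<pre><code>{% for howl in howls %}
    <div class="howl">
        <h2><strong>{{ howl.author.username }}</strong></h2>
        <p class="lead">{{ howl.content }} - {{ howl.published_date }}</p>
    </div>
{% endfor %}
</code></pre>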
| 0
|
2016-09-08T16:47:19Z
|
[
"python",
"django"
] |
Making a Twitter clone in Django, having trouble with displaying the right user when displaying tweets
| 39,396,328
|
<p>I have user registration. Whenever I log in as a certain user, all the tweets are said to be tweeted by that user, even if they weren't.</p>
<p><strong>forms.py</strong></p>
<pre><code>from django.contrib.auth.models import User
from django import forms
class UserForm(forms.ModelForm):
password = forms.CharField(widget=forms.PasswordInput)
class Meta:
model = User
fields = ['username', 'password', 'email']
</code></pre>
<p><strong>models.py</strong></p>
<pre><code>from django.db import models
from django.core.urlresolvers import reverse
from django.utils import timezone
from django.contrib.auth.models import User
class Howl(models.Model):
author = models.ForeignKey('auth.user', null=True)
content = models.CharField(max_length=150)
published_date = models.DateTimeField(default=timezone.now)
like_count = models.IntegerField(default=0)
rehowl_count = models.IntegerField(default=0)
def get_absolute_url(self):
return reverse('howl:index')
def __str__(self):
return self.content
</code></pre>
<p><strong>index.html</strong></p>
<pre><code>{% block content %}
<h2>Index.html timeline!</h2>
{% for howl in howls %}
<div class="howl">
<h2><strong>{{user.username}}</strong></h2>
<p class="lead">{{howl.content}} - {{howl.published_date}}</p>
<span>Rehowls: {{howl.rehowl_count}}, Likes: {{howl.like_count}}</span>
</div><!--howl-->
{% endfor%}
{% endblock %}
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>from django.shortcuts import render, redirect
from django.views import generic
from django.views.generic import View
from django.views.generic.edit import CreateView, UpdateView, DeleteView
from .models import Howl
from .forms import UserForm
from django.contrib.auth import authenticate, login
class IndexView(generic.ListView):
template_name = 'howl/index.html'
context_object_name = 'howls'
def get_queryset(self):
return Howl.objects.all()
class HowlCreate(CreateView):
model = Howl
fields = ['content']
class UserFormView(View):
form_class = UserForm
template_name = 'howl/registration_form.html'
def get(self, request):
form = self.form_class(None)
return render(request, self.template_name, {'form': form})
def post(self, request):
form = self.form_class(request.POST)
if form.is_valid():
user = form.save(commit=False)
username = form.cleaned_data['username']
password = form.cleaned_data['password']
user.set_password(password)
user.save()
user = authenticate(username=username, password=password)
if user is not None:
if user.is_active:
login(request, user)
return redirect('howl:index')
</code></pre>
<p>I know that using user.username in index.html is the problem. It shows the current user logged in as the author of these tweets. How do I make it so that it displays the rightful owner of a tweet? </p>
| -1
|
2016-09-08T16:44:55Z
| 39,399,445
|
<p>In my HowlCreate view, I didn't set the author on the howl; I only set the content.</p>
<pre><code>class HowlCreate(CreateView):
    model = Howl
    fields = ['content']

    # This sets the author on the newly created howl
    # (needs: from django.http import HttpResponseRedirect)
    def form_valid(self, form):
        instance = form.save(commit=False)
        instance.author = self.request.user  # set current user as author
        instance.save()
        return HttpResponseRedirect(self.get_success_url())
</code></pre>
| 0
|
2016-09-08T20:14:34Z
|
[
"python",
"django"
] |
Kivy: get parent inside widget which is added in python
| 39,396,372
|
<p>How do I get the reference to a parent inside a widget that is not added by kvlang but in Python?
Normally you would just call <code>self.parent</code>; however, that returns <code>None</code> if the widget is added to the parent in Python.</p>
<p>An example:</p>
<pre><code>import kivy
kivy.require('1.9.0') # replace with your current kivy version !
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager,Screen
from kivy.clock import Clock
kvlang = '''
<ScreenManagement>:
ScreenOne:
<ScreenOne>:
name: 'First'
<ScreenTwo>:
name: 'Second'
'''
class ScreenManagement(ScreenManager):
def __init__(self,**kwargs):
super().__init__(**kwargs)
def setup(*args):
self.add_widget(ScreenTwo()) #add ScreenTwo later in python
Clock.schedule_once(setup)
class ScreenOne(Screen):
def __init__(self,**kwargs):
super().__init__()
def setup(*args):
print("Parent of ScreenOne: {}".format(self.parent)) #this is working
Clock.schedule_once(setup)
class ScreenTwo(Screen):
def __init__(self,**kwargs):
super().__init__()
def setup(*args):
print("Parent of ScreenTwo: {}".format(self.parent)) #this is not working, self.parent will return None
Clock.schedule_once(setup)
class MyApp(App):
def build(self):
Builder.load_string(kvlang)
return ScreenManagement()
if __name__ == '__main__':
MyApp().run()
</code></pre>
<p>This will return:</p>
<pre><code>Parent of ScreenOne: <__main__.ScreenManagement object at 0x7f98a3fddb40>
Parent of ScreenTwo: None
</code></pre>
| 0
|
2016-09-08T16:47:38Z
| 39,399,594
|
<p>Widgets added with <code>add_widget</code> actually have a valid reference to their parent:</p>
<pre><code>from kivy.app import App
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.lang import Builder
Builder.load_string('''
<ScreenTwo>
Label:
text: 'Hello, world'
''')
class ScreenManagement(ScreenManager):
def __init__(self,**kwargs):
super(ScreenManagement, self).__init__(**kwargs)
screen = ScreenTwo()
print(screen.parent)
self.add_widget(screen)
print(screen.parent)
class ScreenTwo(Screen):
def on_touch_down(self, *args):
print(self.parent)
class MyApp(App):
def build(self):
return ScreenManagement()
if __name__ == '__main__':
MyApp().run()
</code></pre>
<p>It's just not available in their <code>__init__</code> method.</p>
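<p>A small sketch of that point (my addition): instead of reading <code>self.parent</code> in <code>__init__</code>, defer the lookup, for example via the <code>on_parent</code> event that Kivy dispatches when the <code>parent</code> property changes, i.e. once the widget is actually attached:</p>
<pre><code>from kivy.uix.screenmanager import Screen

class ScreenTwo(Screen):
    def on_parent(self, instance, parent):
        # fires after add_widget() has set the parent property
        print("Parent of ScreenTwo: {}".format(parent))
</code></pre>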
| 0
|
2016-09-08T20:24:25Z
|
[
"python",
"kivy"
] |
Is there a way to use the python -m mymod syntax from within the python interpreter?
| 39,396,373
|
<p>Many packages like unittest have an easy to use command line interface, e.g. the test discovery feature in unittest: <a href="https://docs.python.org/2/library/unittest.html#test-discovery" rel="nofollow">https://docs.python.org/2/library/unittest.html#test-discovery</a></p>
<p>However, to achieve the same from within python, it is sometimes necessary to dig in much deeper in the documentation. In the example above, the python code required to achieve the same is much harder to figure out compared to the command line command. </p>
<p>Therefore, I want to know: Is there a consistent way to translate <code>python -m mymod args</code> to something, that can be used within the python interpreter?</p>
<p>Edit: I'm asking for a reasonable strategy what to do in a situation, where I know the python -m command but nothing more. Is this knowledge completely useless when I am forced to use the python interpreter?</p>
| 0
|
2016-09-08T16:47:38Z
| 39,401,172
|
<p>You can use the <a href="https://docs.python.org/2/library/runpy.html#runpy.run_module" rel="nofollow"><code>runpy.run_module</code> function</a>:</p>
<pre><code>import runpy
import sys
sys.argv[1:] = ['arg1', 'arg2', 'arg3']
runpy.run_module('module.name', run_name='__main__', alter_sys=True)
</code></pre>
<p>This executes <code>module.name</code> as if you typed:</p>
<pre><code>python -m module.name arg1 arg2 arg3
</code></pre>
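<p>If you do this often, a small helper (an illustrative sketch, not part of the standard library) keeps <code>sys.argv</code> tidy:</p>
<pre><code>import runpy
import sys

def run_as_main(module_name, *args):
    # emulate "python -m module_name args..." and restore sys.argv afterwards
    saved_argv = sys.argv[:]
    try:
        sys.argv[1:] = list(args)
        return runpy.run_module(module_name, run_name='__main__', alter_sys=True)
    finally:
        sys.argv[:] = saved_argv

run_as_main('module.name', 'arg1', 'arg2', 'arg3')
</code></pre>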
| 1
|
2016-09-08T22:34:11Z
|
[
"python"
] |
logging how to control the times in which flush to log file
| 39,396,393
|
<p>I have to use the logging module and wonder if there is a way to log into an existing log file, appending my data to it, and, more importantly, whether I can control when I flush to the file.</p>
<p>Currently I need to flush to the file all the time because there are cases in which the script running the logic with the logging can crash, so I need to know exactly where my program stopped.</p>
<p>Any ideas and code examples?
Thanks</p>
| 1
|
2016-09-08T16:48:44Z
| 39,396,567
|
<p>If you use <code>logging.FileHandler</code> and choose an existing log file, by default it will append to that file. The method that actually writes the log record is the <code>emit()</code> method on logging handlers. If you look at the source code for <code>FileHandler</code>, it calls <code>flush()</code> after <em>every</em> write, so it should be doing what you want by default.</p>
<pre><code>import logging
log = logging.getLogger()
handler = logging.FileHandler('/path/to/log.txt')
log.addHandler(handler)
log.warning('This is a message')
</code></pre>
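<p>If you ever need an explicit place to control flushing (for example with handlers that buffer), a small subclass gives you that hook; this is just a sketch, since <code>FileHandler</code> already flushes on every <code>emit()</code> as described above:</p>
<pre><code>import logging

class FlushingFileHandler(logging.FileHandler):
    def emit(self, record):
        # write the record, then flush the stream buffer immediately
        super(FlushingFileHandler, self).emit(record)
        self.flush()
</code></pre>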
| 1
|
2016-09-08T17:00:36Z
|
[
"python",
"python-2.7",
"logging",
"flush"
] |
Django refuses to accept my one-off default value for FloatField
| 39,396,610
|
<p>I have a class and I'm trying to add a new <code>FloatField</code> to it. Django wants a default value to populate existing rows (reasonable). However, it refuses to accept any value I give it.</p>
<pre><code>You are trying to add a non-nullable field 'FIELDNAME' to CLASS without a default; we can't do that (the database needs something to populate existing rows).
Please select a fix:
1) Provide a one-off default now (will be set on all existing rows with a null value for this column)
2) Quit, and let me add a default in models.py
Select an option: -1
Please select a valid option: -1
Please select a valid option: -1
Please select a valid option: float(-1)
Please select a valid option: -1.0
Please select a valid option: float(-1.0)
Please select a valid option:
</code></pre>
<p>How do I get it to accept my value?</p>
| 1
|
2016-09-08T17:03:18Z
| 39,396,672
|
<p>You should select option 1 and then input your value</p>
<pre><code> 1) Provide a one-off default now (will be set on all existing rows with a null value for this column)
2) Quit, and let me add a default in models.py
1
</code></pre>
<p>After selecting 1, it will ask you to provide your default value.</p>
<pre><code>Please enter the default value now, as valid Python
The datetime and django.utils.timezone modules are available, so you can do e.g. timezone.now()
>>> -1.0
</code></pre>
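<p>Alternatively (option 2), you can quit and set the default directly on the field in models.py, e.g. (the field name here is just an example):</p>
<pre><code>my_float = models.FloatField(default=-1.0)
</code></pre>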
| 1
|
2016-09-08T17:08:05Z
|
[
"python",
"django"
] |
Adding Lat Lon coordinates to separate columns (python/dataframe)
| 39,396,678
|
<p>I'm sure this is a simple thing to do but I am new to Python and cannot work it out!</p>
<p>I have a data frame with one column containing coordinates and I am wanting to remove the brackets and add the Lat/Lon values into separate columns.</p>
<p>Current dataframe:</p>
<pre><code>gridReference
(56.37769816725615, -4.325049868061924)
(56.37769816725615, -4.325049868061924)
(51.749167440074324, -4.963575226888083)
</code></pre>
<p>wanted dataframe:</p>
<pre><code>Latitude Longitude
56.37769816725615 -4.325049868061924
56.37769816725615 -4.325049868061924
51.749167440074324 -4.963575226888083
</code></pre>
<p>Thanks for your help</p>
<p>EDIT:
I have tried:</p>
<p><code>df['lat'], df['lon'] = df.gridReference.str.strip(')').str.strip('(').str.split(', ').values.tolist()</code></p>
<p>but I get the error:</p>
<p><code>AttributeError: Can only use .str accessor with string values, which use np.object_ dtype in pandas</code></p>
<p>I then tried adding:</p>
<p><code>df['gridReference'] = df['gridReference'].astype('str')</code></p>
<p>and got the error:</p>
<p><code>ValueError: too many values to unpack (expected 2)</code></p>
<p>Any help would be appreciated as I am not sure how to make this work! :)</p>
<p><strong>EDIT:</strong>
I keep getting the error
<code>AttributeError: Can only use .str accessor with string values, which use np.object_ dtype in pandas</code></p>
<p>the output for df.dtypes is:</p>
<p><code><class 'pandas.core.frame.DataFrame'>
Int64Index: 22899 entries, 0 to 22898
Data columns (total 1 columns):
LatLon 22899 non-null object
dtypes: object(1)</code></p>
<p>the output for df.info() is:</p>
<p><code>gridReference object
dtype: object</code></p>
| -1
|
2016-09-08T17:08:18Z
| 39,396,974
|
<pre><code>>>> df = pd.DataFrame({'latlong': ['(12, 32)', '(43, 54)']})
>>> df
latlong
0 (12, 32)
1 (43, 54)
>>> split_data = df.latlong.str.strip(')').str.strip('(').str.split(', ')
>>> df['lat'] = split_data.apply(lambda x: x[0])
>>> df['long'] = split_data.apply(lambda x: x[1])
>>> df
    latlong lat long
0  (12, 32)  12   32
1  (43, 54)  43   54
</code></pre>
| 0
|
2016-09-08T17:29:10Z
|
[
"python",
"pandas",
"dataframe"
] |
Adding Lat Lon coordinates to separate columns (python/dataframe)
| 39,396,678
|
<p>I'm sure this is a simple thing to do but I am new to Python and cannot work it out!</p>
<p>I have a data frame with one column containing coordinates and I am wanting to remove the brackets and add the Lat/Lon values into separate columns.</p>
<p>Current dataframe:</p>
<pre><code>gridReference
(56.37769816725615, -4.325049868061924)
(56.37769816725615, -4.325049868061924)
(51.749167440074324, -4.963575226888083)
</code></pre>
<p>wanted dataframe:</p>
<pre><code>Latitude Longitude
56.37769816725615 -4.325049868061924
56.37769816725615 -4.325049868061924
51.749167440074324 -4.963575226888083
</code></pre>
<p>Thanks for your help</p>
<p>EDIT:
I have tried:</p>
<p><code>df['lat'], df['lon'] = df.gridReference.str.strip(')').str.strip('(').str.split(', ').values.tolist()</code></p>
<p>but I get the error:</p>
<p><code>AttributeError: Can only use .str accessor with string values, which use np.object_ dtype in pandas</code></p>
<p>I then tried adding:</p>
<p><code>df['gridReference'] = df['gridReference'].astype('str')</code></p>
<p>and got the error:</p>
<p><code>ValueError: too many values to unpack (expected 2)</code></p>
<p>Any help would be appreciated as I am not sure how to make this work! :)</p>
<p><strong>EDIT:</strong>
I keep getting the error
<code>AttributeError: Can only use .str accessor with string values, which use np.object_ dtype in pandas</code></p>
<p>the output for df.dtypes is:</p>
<p><code><class 'pandas.core.frame.DataFrame'>
Int64Index: 22899 entries, 0 to 22898
Data columns (total 1 columns):
LatLon 22899 non-null object
dtypes: object(1)</code></p>
<p>the output for df.info() is:</p>
<p><code>gridReference object
dtype: object</code></p>
| -1
|
2016-09-08T17:08:18Z
| 39,406,674
|
<pre><code>df['gridReference'].str.strip('()') \
.str.split(', ', expand=True) \
.rename(columns={0:'Latitude', 1:'Longitude'})
Latitude Longitude
0 56.37769816725615 -4.325049868061924
1 56.37769816725615 -4.325049868061924
2 51.749167440074324 -4.963575226888083
</code></pre>
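<p>If you need numeric columns rather than strings afterwards (an extra step, not in the original), append an <code>astype</code>:</p>
<pre><code>out = df['gridReference'].str.strip('()') \
                         .str.split(', ', expand=True) \
                         .rename(columns={0:'Latitude', 1:'Longitude'}) \
                         .astype(float)
</code></pre>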
| 1
|
2016-09-09T08:01:04Z
|
[
"python",
"pandas",
"dataframe"
] |
TypeError: list indices must be integers, not str. Know the issue, not the answer
| 39,396,683
|
<p>Unique situation: I know the problem, I just don't know a solution.</p>
<pre><code>import string
timefile = open('lasttimemultiple.txt','r+')#opens the file that contains the last time run
lasttime = timefile.read()#reads the last time file
items= int(2)
splitlines = string.split(lasttime,'\n')
print splitlines[items][0:2]
timefile.close() #closes last time
PullType = '00'
datapt = '01'
for items in splitlines:
if splitlines[items][0:2] == PullType:
datapt = splitlines[items]
else:
print ''
print datapt
</code></pre>
<p>I know my issue is that I am using 'items' as the index I am calling instead of an integer, but I don't know how to work through the data without using a non-int variable name.</p>
<p>Any ideas how to achieve this?
Thanks</p>
| 0
|
2016-09-08T17:08:44Z
| 39,396,800
|
<p>You should show the actual traceback. If you had, you would have seen that the error is in this line:</p>
<pre><code>if splitlines[items][0:2] == PullType:
</code></pre>
<p>That's because <code>items</code> here has been redefined by the for loop in the line before. In a for loop in Python, the variable is not a counter, it is the actual item from that iteration. So, in the first iteration, <code>items</code> is the first element of splitlines, etc. So it is a string, not an integer. The fix is to use it directly:</p>
<pre><code>if items[0:2] == PullType:
</code></pre>
<p>(Also, you should think about better variable names: that should be <code>item</code>, not <code>items</code>).</p>
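<p>Putting it together, the loop from the question would look like this (a sketch reusing the question's own variables):</p>
<pre><code>datapt = '01'
for item in splitlines:
    if item[0:2] == PullType:
        datapt = item
print datapt
</code></pre>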
| 0
|
2016-09-08T17:16:37Z
|
[
"python"
] |
Running tensorflow as daemon and piping all output to log file
| 39,396,694
|
<p>To run a TensorFlow model as a daemon I use:</p>
<pre><code>nohup python translate.py --data_dir data &
</code></pre>
<p>This logs error messages to nohup.out but it does not capture TensorFlow's stdout. This thread describes a related issue: <a href="https://groups.google.com/a/tensorflow.org/forum/#!topic/discuss/SO_JRts-VIs" rel="nofollow">https://groups.google.com/a/tensorflow.org/forum/#!topic/discuss/SO_JRts-VIs</a> but does not provide a solution.</p>
<p>I need to run it as a daemon because the model takes quite some time to run. This is to prevent ssh from disconnecting due to inactivity.</p>
<p>How to run Tensorflow as daemon process and pipe all output to file ?</p>
| 0
|
2016-09-08T17:09:42Z
| 39,398,882
|
<p>Why not try </p>
<pre><code>nohup python translate.py --data_dir data &> outputfile.txt
</code></pre>
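<p>Note that <code>&></code> only redirects stdout and stderr; to keep the process running in the background you still need the trailing <code>&</code>, e.g.:</p>
<pre><code>nohup python translate.py --data_dir data &> outputfile.txt &
</code></pre>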
<p>You can then suspend the job yourself with <code>kill -19 %1</code> (for the first job, or whatever job number it shows up as), and resume it with <code>kill -CONT %1</code>.</p>
<p>Other options:</p>
<ul>
<li>"disown" command </li>
<li>tmux (as suggested in the comments)</li>
<li>screen (similar to tmux)</li>
<li>using mosh instead of ssh</li>
<li>save the outputs from within the file translate.py instead of printing to stdout</li>
</ul>
| 0
|
2016-09-08T19:34:46Z
|
[
"python",
"linux",
"tensorflow"
] |
Class property inheritance when property is another class' property in python
| 39,396,798
|
<p>I have a class:</p>
<pre><code>class a():
def __init__(self):
self.x = ["a", "b"]
</code></pre>
<p>and another class:</p>
<pre><code>class b():
def __init__(self, r):
self.y = r
def chg(self, a):
self.y = a
</code></pre>
<p>I do:</p>
<pre><code>>>> m = a()
>>> m.x
["a", "b"]
>>> n = b(m.x)
>>> n.y
["a", "b"]
>>> n.y = ["c", "d"]
>>> n.y
["c", "d"]
>>> m.x
["a", "b"]
</code></pre>
<p>Now why didn't <code>m.x</code> change to <code>["c", "d"]</code>? How can I achieve this?</p>
| -1
|
2016-09-08T17:16:33Z
| 39,396,863
|
<p>Inheritance does not work this way in Python. However, I think your problem is about mutability rather than inheritance itself. (You did not actually use inheritance; I mention it because you used it as a tag and in the title.)</p>
<p>Try it like this.</p>
<pre><code>class a():
def __init__(self):
self.x = [5]
class b():
def __init__(self, r):
self.y = r
def chg(self, a):
self.y = a
m = a()
n = b(m.x)
n.y[0] = 99
print m.x
# Gives [99]
</code></pre>
<p>This way, you only create one list, and use it with two different classes.</p>
<p><strong>Note:</strong> You can think this as the <em>pass by reference</em> property in C-type languages.</p>
<p>Just to keep in mind, every assignment in Python is <em>pass by reference</em> because Python handles variables differently from those languages. <a href="http://stackoverflow.com/questions/986006/how-do-i-pass-a-variable-by-reference">Read more about it.</a> (This does not affect mutability.)</p>
<p><strong>Edit:</strong> I see you edited you question. Now let me explain why your code does not work as you expected.</p>
<p>Mutability means you can change the elements inside a container, without assigning it to a new address in memory.</p>
<p>When you do,</p>
<pre><code>n.y = ["c", "d"]
</code></pre>
<p>You do not change the list (so you do not <em>mutate</em> it); you just bind a new list to that variable name. To change the original list, you must alter its elements using <code>list[index]</code>, so the elements change but the list stays the same object. So you can do this:</p>
<pre><code>m = a()
n = b(m.x)
new_list = ["c", "d"]
for i, elem in enumerate(new_list):
# This way you change every element one-by-one.
n.y[i] = elem
print m.x
</code></pre>
<p>If you are not familiar with <code>enumerate</code>, study it immediately. But if you want simpler code (and closer to a C-style implementation), you can make the assignments like this:</p>
<pre><code>for i in range(len(new_list)):
n.y[i] = new_list[i]
</code></pre>
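<p>As a side note (my addition, not part of the answer above), slice assignment mutates the same list object in one go, so it propagates the same way:</p>
<pre><code>m = a()
n = b(m.x)
n.y[:] = ["c", "d"]   # replaces the contents of the shared list in place
print m.x
# ['c', 'd']
</code></pre>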
| 2
|
2016-09-08T17:21:07Z
|
[
"python",
"class",
"inheritance",
"properties"
] |
Class property inheritance when property is another class' property in python
| 39,396,798
|
<p>I have a class:</p>
<pre><code>class a():
def __init__(self):
self.x = ["a", "b"]
</code></pre>
<p>and another class:</p>
<pre><code>class b():
def __init__(self, r):
self.y = r
def chg(self, a):
self.y = a
</code></pre>
<p>I do:</p>
<pre><code>>>> m = a()
>>> m.x
["a", "b"]
>>> n = b(m.x)
>>> n.y
["a", "b"]
>>> n.y = ["c", "d"]
>>> n.y
["c", "d"]
>>> m.x
["a", "b"]
</code></pre>
<p>Now why didn't <code>m.x</code> change to <code>["c", "d"]</code>? How can I achieve this?</p>
| -1
|
2016-09-08T17:16:33Z
| 39,396,948
|
<p>OK, so you need to do a little reading up on how memory works in Python. What you do here:</p>
<pre><code>>>> n = b(m.x)
>>> n.y
5
</code></pre>
<p>is asking for the value of m.x and sending it to the constructor of n. The value of m.x is 5 at that moment, so 5 is passed to the constructor, which then saves the NUMBER 5 in the object, not the memory address of the variable m.x. That means n is not linked in any way to m. So if you were to update m.x, n would remain the same, because the data for m is stored in a completely different place than that of n.</p>
<p>In a language like C you could pass the address of the variable to the constructor, but this is not possible in Python. The only variables in Python that behave like this are mutable containers such as lists, so you might want to look into using a single-element list if you really need this functionality.</p>
<p>Another tip: give your classes and variables proper names, not just single-character names!</p>
| 1
|
2016-09-08T17:26:45Z
|
[
"python",
"class",
"inheritance",
"properties"
] |
no module named AppConfig
| 39,396,929
|
<p>I'm trying to run a server on my computer in Python/Django. In my installed_apps, I had a program called csvimport. It didn't work, so I had to install django-csvimport and I had to change it to csvimport.app.AppConfig in my installed-apps. However, I still get an importerror message saying "no module named AppConfig". (my version of django is 1.8, and django-csvimport is 2.4, by the way) Is this not the correct way to have csvimport in my installed_apps and my program?
Thanks!</p>
| 1
|
2016-09-08T17:25:42Z
| 39,396,993
|
<p><code>AppConfig</code> is Django's base class for the configuration class of custom apps, not necessarily the name of the config class of the app. But you're almost there:</p>
<p><a href="https://github.com/edcrewe/django-csvimport/blob/master/csvimport/app.py#L5" rel="nofollow"><code>csvimport.app.CSVImportConf</code></a></p>
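<p>So in settings.py that would look something like this (per the linked source; double-check the class name against your installed django-csvimport version):</p>
<pre><code>INSTALLED_APPS = (
    # ...
    'csvimport.app.CSVImportConf',
)
</code></pre>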
| 0
|
2016-09-08T17:30:23Z
|
[
"python",
"django",
"python-2.7"
] |
SQLAlchemy - AttributeError: _reverse_property
| 39,396,934
|
<p>I'm having some trouble working and learning SQLALCHEMY and I think the issue is to do with my back_populates in relationships, but I've not been able to suss it out. Can you please point me in the right direction? </p>
<p>The tables are created successfully and everything seems to be in order until I try and create a new MailProvider, which causes this error:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/Music/.PyCharm2016.2/config/scratches/scratch_7.py", line 71, in <module>
gmail = MailProvider(service_provider="gmail.com")
File "<string>", line 2, in __init__
File "C:\Users\Music\mailtesterV2\lib\site-packages\sqlalchemy\orm\instrumentation.py", line 347, in _new_state_if_none
state = self._state_constructor(instance, self)
File "C:\Users\Music\mailtesterV2\lib\site-packages\sqlalchemy\util\langhelpers.py", line 754, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File "C:\Users\Music\mailtesterV2\lib\site-packages\sqlalchemy\orm\instrumentation.py", line 177, in _state_constructor
self.dispatch.first_init(self, self.class_)
File "C:\Users\Music\mailtesterV2\lib\site-packages\sqlalchemy\event\attr.py", line 256, in __call__
fn(*args, **kw)
File "C:\Users\Music\mailtesterV2\lib\site-packages\sqlalchemy\orm\mapper.py", line 2872, in _event_on_first_init
configure_mappers()
File "C:\Users\Music\mailtesterV2\lib\site-packages\sqlalchemy\orm\mapper.py", line 2768, in configure_mappers
mapper._post_configure_properties()
File "C:\Users\Music\mailtesterV2\lib\site-packages\sqlalchemy\orm\mapper.py", line 1708, in _post_configure_properties
prop.init()
File "C:\Users\Music\mailtesterV2\lib\site-packages\sqlalchemy\orm\interfaces.py", line 183, in init
self.do_init()
File "C:\Users\Music\mailtesterV2\lib\site-packages\sqlalchemy\orm\relationships.py", line 1632, in do_init
self._generate_backref()
File "C:\Users\Music\mailtesterV2\lib\site-packages\sqlalchemy\orm\relationships.py", line 1866, in _generate_backref
self._add_reverse_property(self.back_populates)
File "C:\Users\Music\mailtesterV2\lib\site-packages\sqlalchemy\orm\relationships.py", line 1574, in _add_reverse_property
other._reverse_property.add(self)
File "C:\Users\Music\mailtesterV2\lib\site-packages\sqlalchemy\util\langhelpers.py", line 840, in __getattr__
return self._fallback_getattr(key)
File "C:\Users\Music\mailtesterV2\lib\site-packages\sqlalchemy\util\langhelpers.py", line 818, in _fallback_getattr
raise AttributeError(key)
AttributeError: _reverse_property
</code></pre>
<p>My Code:</p>
<pre><code>from sqlalchemy import Column, ForeignKey, Integer, String, Boolean, DateTime
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker
from sqlalchemy import create_engine
engine = create_engine('sqlite:///:memory:', echo=True)
Session = sessionmaker(bind=engine)
Base = declarative_base()
class MailProvider(Base):
__tablename__ = 'mail_provider'
id = Column(Integer, primary_key=True)
service_provider = Column(String(250), nullable=False)
imap_server = Column(String(250))
imap_server_port = Column(Integer)
imap_server_ssl = Column(String(250))
imap_server_port_ssl = Column(Integer)
imap_server_use_tls = Column(Boolean)
smtp_server = Column(String(250))
smtp_server_port = Column(Integer)
smtp_server_ssl = Column(String(250))
smtp_server_port_ssl = Column(Integer)
smtp_server_use_tls = Column(Boolean)
class MailAccount(Base):
__tablename__ = "mail_account"
id = Column(Integer, primary_key=True)
email_address = Column(String(250), nullable=False)
imap_username = Column(String(250), nullable=False)
imap_password = Column(String(250), nullable=False)
smtp_username = Column(String(250), nullable=False)
smtp_password = Column(String(250), nullable=False)
provider_id = Column(Integer, ForeignKey('mail_provider.id'), nullable=False)
provider = relationship("MailProvider", back_populates="service_provider")
account_owner = Column(String(250), nullable=False)
class SentMail(Base):
__tablename__ = "sent_mail"
id = Column(Integer, primary_key=True)
mail_uuid = Column(String(36), nullable=False)
time_sent = Column(DateTime(timezone=True), nullable=False)
sent_from_id = Column(Integer, ForeignKey('mail_account.id'), nullable=False)
sent_from = relationship("MailAccount", back_populates="email_address")
sent_to = Column(String(250), nullable=False)
msg_subject = Column(String(250))
msg_body = Column(String)
send_status = Column(String(500))
class ReceivedMail(Base):
__tablename__ = "received_mail"
id = Column(Integer, primary_key=True)
sent_mail_id = Column(Integer, ForeignKey('sent_mail.id'), nullable=False)
sent_mail = relationship("SentMail", back_populates="mail_uuid")
time_received = Column(DateTime(timezone=True), nullable=False)
MailProvider.accounts = relationship("MailAccount", back_populates="provider", order_by="MailAccount.id")
MailAccount.mails_sent = relationship("SentMail", back_populates="mail_uuid", order_by="SentMail.time_sent")
SentMail.received_mails = relationship("RecievedMail", back_populates="received_time", order_by="RecievedMail.received_time")
Base.metadata.create_all(engine)
session = Session()
gmail = MailProvider(service_provider="gmail.com")
session.add(gmail)
session.commit()
</code></pre>
| 0
|
2016-09-08T17:25:53Z
| 39,397,346
|
<p>A relationship's <code>back_populates</code> references another relationship, not a column.</p>
<p>Revised code:</p>
<pre><code>from sqlalchemy import Column, ForeignKey, Integer, String, Boolean, DateTime
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker
from sqlalchemy import create_engine
engine = create_engine('sqlite:///:memory:', echo=True)
Session = sessionmaker(bind=engine)
Base = declarative_base()
class MailProvider(Base):
__tablename__ = 'mail_provider'
id = Column(Integer, primary_key=True)
service_provider = Column(String(250), nullable=False)
imap_server = Column(String(250))
imap_server_port = Column(Integer)
imap_server_ssl = Column(String(250))
imap_server_port_ssl = Column(Integer)
imap_server_use_tls = Column(Boolean)
smtp_server = Column(String(250))
smtp_server_port = Column(Integer)
smtp_server_ssl = Column(String(250))
smtp_server_port_ssl = Column(Integer)
smtp_server_use_tls = Column(Boolean)
accounts = relationship("MailAccount", back_populates="provider")
class MailAccount(Base):
__tablename__ = "mail_account"
id = Column(Integer, primary_key=True)
email_address = Column(String(250), nullable=False)
imap_username = Column(String(250), nullable=False)
imap_password = Column(String(250), nullable=False)
smtp_username = Column(String(250), nullable=False)
smtp_password = Column(String(250), nullable=False)
provider_id = Column(Integer, ForeignKey('mail_provider.id'), nullable=False)
provider = relationship("MailProvider", back_populates="accounts")
account_owner = Column(String(250), nullable=False)
mails_sent = relationship("SentMail", back_populates="sent_from", order_by="SentMail.time_sent")
class SentMail(Base):
__tablename__ = "sent_mail"
id = Column(Integer, primary_key=True)
mail_uuid = Column(String(36), nullable=False)
time_sent = Column(DateTime(timezone=True), nullable=False)
sent_from_id = Column(Integer, ForeignKey('mail_account.id'), nullable=False)
sent_from = relationship("MailAccount", back_populates="mails_sent")
sent_to = Column(String(250), nullable=False)
msg_subject = Column(String(250))
msg_body = Column(String)
send_status = Column(String(500))
received_mails = relationship("ReceivedMail", back_populates="sent_mail", order_by="ReceivedMail.time_received")
class ReceivedMail(Base):
__tablename__ = "received_mail"
id = Column(Integer, primary_key=True)
sent_mail_id = Column(Integer, ForeignKey('sent_mail.id'), nullable=False)
sent_mail = relationship("SentMail", back_populates="received_mails")
time_received = Column(DateTime(timezone=True), nullable=False)
Base.metadata.create_all(engine)
session = Session()
gmail = MailProvider(service_provider="gmail.com")
session.add(gmail)
session.commit()
</code></pre>
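<p>As a quick, illustrative check that the paired relationships now line up, you can create an account and read it back from both sides:</p>
<pre><code>account = MailAccount(email_address='someone@gmail.com',
                      imap_username='u', imap_password='p',
                      smtp_username='u', smtp_password='p',
                      account_owner='someone', provider=gmail)
session.add(account)
session.commit()

print(gmail.accounts)                     # contains the new MailAccount
print(account.provider.service_provider)  # 'gmail.com'
</code></pre>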
| 0
|
2016-09-08T17:54:51Z
|
[
"python",
"python-3.x",
"sqlalchemy"
] |
compute mean of a column with python
| 39,396,973
|
<p>I have a dataframe df : </p>
<pre><code>TIMESTAMP equipement1 equipement2
2016-05-10 13:20:00 0.000000 0.000000
2016-05-10 14:40:00 0.400000 0.500000
2016-05-10 15:20:00 0.500000 0.500000
</code></pre>
<p>I would like to compute, for each equipementx, the ratio when the timestamp is in [TS_min, TS_max].
For example, a function like:</p>
<pre><code>def ratio(df, 2016-05-10 14:40:00, 2016-05-10 15:20:00)
TIMESTAMP equipement1 equipement2
2016-05-10 14:40:00 0.4/(0.4+0.5) 0.5/(0.5+0.5)
2016-05-10 15:20:00 0.5/(0.4+0.5) 0.5/(0.5+0.5)
</code></pre>
<p>Any idea to help me please?</p>
<p>Thank you</p>
| 0
|
2016-09-08T17:29:02Z
| 39,397,189
|
<p>Assuming TIMESTAMP is a datetime type, here is one way:</p>
<pre><code>df = df.set_index('TIMESTAMP')
r = df.ix['2016-05-10 14:40:00':'2016-05-10 15:20:00']
r/r.sum()
equipement1 equipement2
TIMESTAMP
2016-05-10 14:40:00 0.444444 0.5
2016-05-10 15:20:00 0.555556 0.5
</code></pre>
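<p>(On newer pandas versions <code>.ix</code> is deprecated; the same slice works with <code>.loc</code>, e.g. <code>df.loc['2016-05-10 14:40:00':'2016-05-10 15:20:00']</code>.)</p>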
| 1
|
2016-09-08T17:43:53Z
|
[
"python",
"pandas"
] |
To how list group by values and values in that category
| 39,396,984
|
<p>I want my template to display my group by category as a header, and then all the values for that group by category under it. For example, my table looks like this:</p>
<pre><code>['John','Physics']
['Jim','Physics']
['Sam','Biology']
['Sarah','Biology']
</code></pre>
<p>And I want the template to output this:</p>
<h1>Physics</h1>
<p>John</p>
<p>Jim</p>
<h1>Biology</h1>
<p>Sam</p>
<p>Sarah</p>
<p>I'm not sure what to put in my views.py, as I would usually do this in SQL: first group by category, then return all results in that category.</p>
<p>What would my views.py and template look like to accomplish this? Thanks.</p>
<p>My current views.py:</p>
<pre><code>def department(request):
students = Students.objects.all().order_by('department')
return render(request, 'department.html', {
'students':students,
})
</code></pre>
<p>Here is my model.py</p>
<pre><code>class Mentors(models.Model):
name = models.CharField(max_length=100)
degree = models.CharField(max_length=100)
department = models.CharField(max_length=100)
</code></pre>
<p>And my template:</p>
<pre><code>{% if mentors %}
<div class="row">
{% for mentor in mentors %}
<div class="col-xs-12 col-sm-6 col-md-6 col-lg-4">
<h3>{{ mentor.department }}</h3>
</div>
<div class="col-xs-12 col-sm-6 col-md-6 col-lg-4">
<div class="thumbnail">
<img src="{{ mentor.image.url }}" class="img-thumbnail">
<div class="caption">
<h4 class="text-center">{{ mentor.name }}, {{ mentor.degree }}</h4>
<p class="text-center"><small>{{ mentor.department }}</small></p>
</div>
</div>
</div>
{% endfor%}
</div><!-- /.end row -->
{% endif %}
</code></pre>
| 0
|
2016-09-08T17:29:51Z
| 39,397,062
|
<p>The template (department.html) is the place where you're going to list all the students. You must have something similar to this:</p>
<pre><code>{% if students %}
{% for student in students %}
<p>{{ student.NAME_FIELD }}</p>
{{ student.DEPARTMENT_FIELD}}
{% endfor %}
{% else %}
No students are available.
{% endif %}
</code></pre>
<p>NAME_FIELD and DEPARTMENT_FIELD are the fields you declared in your model.py</p>
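<p>For the header-then-items layout in the question, Django's built-in <code>{% regroup %}</code> tag is also worth a look (a sketch, not part of the answer above; it relies on the queryset already being ordered by department, as in the view):</p>
<pre><code>{% regroup students by department as department_list %}
{% for dept in department_list %}
    <h1>{{ dept.grouper }}</h1>
    {% for student in dept.list %}
        <p>{{ student.name }}</p>
    {% endfor %}
{% endfor %}
</code></pre>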
| 1
|
2016-09-08T17:34:57Z
|
[
"python",
"django",
"django-templates",
"django-views",
"django-1.10"
] |
How does Python Pandas process a list of tables?
| 39,397,024
|
<p>I have this simple clean_data function, which will round the numbers in the input data frame. The code works, but I am very puzzled why it works. Could anybody help me understand?</p>
<p>The part where I got confused is this. table_list is a new list of data frame, so after running the code, each item inside table_list should be formatted, while tablea, tableb, and tablec should stay the same. But apparently I am wrong. After running the code, all three tables are formatted correctly. What is going on? Thanks a lot for the help.</p>
<pre><code>table_list = [tablea, tableb, tablec]
def clean_data(df):
for i in df:
df[i] = df[i].map(lambda x: round(x, 4))
return df
map(clean_data, table_list)
</code></pre>
| 1
|
2016-09-08T17:32:17Z
| 39,397,480
|
<p>Simplest way is to break down this code completely:</p>
<pre><code># List of 3 dataframes
table_list = [tablea, tableb, tablec]
# function that cleans 1 dataframe
# This will get applied to each dataframe in table_list
# when the python function map is used AFTER this function
def clean_data(df):
# for loop.
# df[i] will be a different column in df for each iteration
# i iterates througn column names.
for i in df:
# df[i] = will overwrite column i
# df[i].map(lambda x: round(x, 4)) in this case
# does the same thing as df[i].apply(lambda x: round(x, 4))
# in other words, it rounds each element of the column
# and assigns the reformatted column back to the column
df[i] = df[i].map(lambda x: round(x, 4))
# returns the formatted SINGLE dataframe
return df
# I expect this is where the confusion comes from
# this is a python (not pandas) function that applies the
# function clean_df to each item in table_list
# and returns a list of the results.
# map was also used in the clean_df function above. That map was
# a pandas map and not the same function as this map. There do similar
# things, but not exactly.
map(clean_data, table_list)
</code></pre>
<p>Hope that helps.</p>
| 0
|
2016-09-08T18:03:58Z
|
[
"python",
"pandas"
] |
How does Python Pandas process a list of tables?
| 39,397,024
|
<p>I have this simple clean_data function, which will round the numbers in the input data frame. The code works, but I am very puzzled why it works. Could anybody help me understand?</p>
<p>The part where I got confused is this. table_list is a new list of data frame, so after running the code, each item inside table_list should be formatted, while tablea, tableb, and tablec should stay the same. But apparently I am wrong. After running the code, all three tables are formatted correctly. What is going on? Thanks a lot for the help.</p>
<pre><code>table_list = [tablea, tableb, tablec]
def clean_data(df):
for i in df:
df[i] = df[i].map(lambda x: round(x, 4))
return df
map(clean_data, table_list)
</code></pre>
| 1
|
2016-09-08T17:32:17Z
| 39,400,652
|
<p>In Python, a list of dataframes, or any complicated objects, is simply a list of references that will point to the underlying data frames. For example, the first element of table_list is a reference to tablea. Therefore, clean_data will go directly to the data frame, i.e., tablea, following the reference given by table_list[0].</p>
| 0
|
2016-09-08T21:43:04Z
|
[
"python",
"pandas"
] |
Script works differently when ran from the terminal and ran from Python
| 39,397,034
|
<p>I have a short bash script <code>foo.sh</code></p>
<pre><code>#!/bin/bash
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
</code></pre>
<p>When I run it directly from the shell, it runs fine, exiting when it is done</p>
<pre><code>$ ./foo.sh
m1un
$
</code></pre>
<p>but when I run it from Python</p>
<pre><code>$ python -c "import subprocess; subprocess.call(['./foo.sh'])"
ygs9
</code></pre>
<p>it outputs the line but then just hangs forever. What is causing this discrepancy? </p>
| 8
|
2016-09-08T17:32:50Z
| 39,398,969
|
<p>Adding the <code>trap -p</code> command to the bash script, stopping the hung python process and running <code>ps</code> shows what's going on:</p>
<pre><code>$ cat foo.sh
#!/bin/bash
trap -p
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
$ python -c "import subprocess; subprocess.call(['./foo.sh'])"
trap -- '' SIGPIPE
trap -- '' SIGXFSZ
ko5o
^Z
[1]+ Stopped python -c "import subprocess; subprocess.call(['./foo.sh'])"
$ ps -H -o comm
COMMAND
bash
python
foo.sh
cat
tr
fold
ps
</code></pre>
<p>Thus, <code>subprocess.call()</code> executes the command with the <code>SIGPIPE</code> signal masked. When <code>head</code> does its job and exits, the remaining processes do not receive the broken pipe signal and do not terminate.</p>
<p>Having the explanation of the problem at hand, it was easy to find the bug in the python bugtracker, which turned out to be <a href="https://bugs.python.org/issue1652">issue#1652</a>.</p>
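<p>One way to restore the default disposition for the child process only (a sketch of the same idea; the other answer shows setting it globally in the parent) is to pass a <code>preexec_fn</code>:</p>
<pre><code>import signal
import subprocess

# reset SIGPIPE to its default handler in the child before exec,
# so the pipeline terminates normally once head exits
subprocess.call(['./foo.sh'],
                preexec_fn=lambda: signal.signal(signal.SIGPIPE, signal.SIG_DFL))
</code></pre>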
| 8
|
2016-09-08T19:40:06Z
|
[
"python",
"bash",
"subprocess",
"pipeline"
] |
Script works differently when ran from the terminal and ran from Python
| 39,397,034
|
<p>I have a short bash script <code>foo.sh</code></p>
<pre><code>#!/bin/bash
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
</code></pre>
<p>When I run it directly from the shell, it runs fine, exiting when it is done</p>
<pre><code>$ ./foo.sh
m1un
$
</code></pre>
<p>but when I run it from Python</p>
<pre><code>$ python -c "import subprocess; subprocess.call(['./foo.sh'])"
ygs9
</code></pre>
<p>it outputs the line but then just hangs forever. What is causing this discrepancy? </p>
| 8
|
2016-09-08T17:32:50Z
| 39,438,276
|
<p>The problem with Python 2 handling <code>SIGPIPE</code> in a non-standard way (i.e., it being ignored) is already covered in Leon's answer, and the fix is given in the link: set <code>SIGPIPE</code> to its default handler (<code>SIG_DFL</code>) with, e.g.,</p>
<pre><code>import signal
signal.signal(signal.SIGPIPE,signal.SIG_DFL)
</code></pre>
<hr>
<p>You can try to unset <code>SIGPIPE</code> from within your script with, e.g.,</p>
<pre><code>#!/bin/bash
trap SIGPIPE # reset SIGPIPE
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
</code></pre>
<p>but, unfortunately, it doesn't work, as per the <a href="https://www.gnu.org/software/bash/manual/bashref.html#index-trap" rel="nofollow">Bash reference manual</a></p>
<blockquote>
<p>Signals ignored upon entry to the shell cannot be trapped or reset.</p>
</blockquote>
<hr>
<p>A final comment: you have a useless use of <code>cat</code> here; it's better to write your script as:</p>
<pre><code>#!/bin/bash
tr -dc 'a-z1-9' < /dev/urandom | fold -w 4 | head -n 1
</code></pre>
<p>Yet, since you're using Bash, you might as well use the <code>read</code> builtin as follows (this will advantageously replace <code>fold</code> and <code>head</code>):</p>
<pre><code>#!/bin/bash
read -n4 a < <(tr -dc 'a-z1-9' < /dev/urandom)
printf '%s\n' "$a"
</code></pre>
<p>It turns out that with this version, you'll have a clear idea of what's going on (and the script will not hang):</p>
<pre><code>$ python -c "import subprocess; subprocess.call(['./foo'])"
hcwh
tr: write error: Broken pipe
tr: write error
$
$ # script didn't hang
</code></pre>
<p>(Of course, it works well with no errors with Python3). And telling Python to use the default signal for <code>SIGPIPE</code> works well too:</p>
<pre><code>$ python -c "import signal; import subprocess; signal.signal(signal.SIGPIPE,signal.SIG_DFL); subprocess.call(['./foo'])"
jc1p
$
</code></pre>
<p>(and also works with Python3).</p>
| 1
|
2016-09-11T16:32:25Z
|
[
"python",
"bash",
"subprocess",
"pipeline"
] |
Set test case files for travis
| 39,397,160
|
<p>I have this <a href="https://github.com/b5y/log2html" rel="nofollow">project</a> on GitHub which has test case files. I run tests locally via pytest and all passed. But travis does not pass these tests and outputs errors:</p>
<p><code>OSError: [Errno 2] No such file or directory: '/home/travis/build/b5y/log2html/tests/test_samples'</code></p>
<p>I set path to test files this way:</p>
<pre><code>DIR_TEST_FILES = os.getcwd() + os.sep + 'tests' + os.sep + 'test_samples'
</code></pre>
<p>I have tried with <code>DIR_TEST_FILES = './test_samples'</code> and with <code>os</code> module through <code>path</code> method. Nothing works for travis.</p>
<p>Any help will be highly appreciated.</p>
| 0
|
2016-09-08T17:41:11Z
| 39,397,468
|
<p>Somehow your tests are being run from the log directory inside your build directory. Make sure that travis-ci is in the same directory when it runs tests as you are when you run tests. It may be helpful to include a <code>pwd</code> or <code>ls</code> in the <code>script:</code> section of your <code>.travis.yml</code> file initially so that you can determine the relative paths to your tests. </p>
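<p>One common way to make the path independent of the working directory (a general pattern, not specific to Travis) is to build it relative to the test file itself:</p>
<pre><code>import os

# resolve test_samples next to this test module, regardless of where pytest is run from
DIR_TEST_FILES = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'test_samples')
</code></pre>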
| 0
|
2016-09-08T18:03:13Z
|
[
"python",
"travis-ci",
"py.test"
] |
How to get selected attributes of an object in to a python list?
| 39,397,332
|
<p>How do I create a list with the selected attributes of an object in python ? Using list comprehensions.</p>
<p>E.g: </p>
<p>My object A has</p>
<pre><code>A.name
A.age
A.height
</code></pre>
<p>and many more attributes</p>
<p>How do I create a list <code>[name,age]</code></p>
<p>I can do it manually but it looks ugly:</p>
<pre><code>l=[]
l.append(A.name)
l.append(A.age)
</code></pre>
<p>but I am looking for a shortcut.</p>
| 0
|
2016-09-08T17:53:47Z
| 39,397,350
|
<p>What you're looking for is <a href="https://docs.python.org/2/library/operator.html#operator.attrgetter" rel="nofollow"><code>operator.attrgetter</code></a></p>
<pre><code>import operator

attrs = ['name', 'age']
l = list(operator.attrgetter(*attrs)(A))
</code></pre>
| 1
|
2016-09-08T17:55:16Z
|
[
"python",
"list",
"list-comprehension",
"python-2.x"
] |
How to get selected attributes of an object in to a python list?
| 39,397,332
|
<p>How do I create a list with the selected attributes of an object in python ? Using list comprehensions.</p>
<p>E.g: </p>
<p>My object A has</p>
<pre><code>A.name
A.age
A.height
</code></pre>
<p>and many more attributes</p>
<p>How do I create a list <code>[name,age]</code></p>
<p>I can do it manually but it looks ugly:</p>
<pre><code>l=[]
l.append(A.name)
l.append(A.age)
</code></pre>
<p>but I am looking for a shortcut.</p>
| 0
|
2016-09-08T17:53:47Z
| 39,397,351
|
<p>Why not just <code>[A.name, A.age]</code>? <code>list</code> literals are simple. <a href="https://docs.python.org/3/library/operator.html#operator.attrgetter">You could use <code>operator.attrgetter</code> if you need to do it a lot</a>, though it returns <code>tuple</code>s when fetching multiple attributes, not <code>list</code>s, so you'd have to convert if you can't live with that.</p>
| 5
|
2016-09-08T17:55:22Z
|
[
"python",
"list",
"list-comprehension",
"python-2.x"
] |
How to get selected attributes of an object in to a python list?
| 39,397,332
|
<p>How do I create a list with the selected attributes of an object in python ? Using list comprehensions.</p>
<p>E.g: </p>
<p>My object A has</p>
<pre><code>A.name
A.age
A.height
</code></pre>
<p>and many more attributes</p>
<p>How do I create a list <code>[name,age]</code></p>
<p>I can do it manually but it looks ugly:</p>
<pre><code>l=[]
l.append(A.name)
l.append(A.age)
</code></pre>
<p>but I am looking for a shortcut.</p>
| 0
|
2016-09-08T17:53:47Z
| 39,397,461
|
<p>You can collect them going through all <code>A</code> class attributes and checking if they aren't method or built-in.</p>
<pre><code>import inspect
def collect_props():
for name in dir(A):
if not inspect.ismethod(getattr(A, name)) and\
not name.startswith('__'):
yield name
print list(collect_props())
</code></pre>
| 0
|
2016-09-08T18:02:46Z
|
[
"python",
"list",
"list-comprehension",
"python-2.x"
] |
Creating new columns from unique values across rows in pandas
| 39,397,389
|
<p>I'm trying to use unique values in a pandas column to generate a new set of columns. Here's an example <code>DataFrame</code>:</p>
<pre><code> meas1 meas2 side newindex
0 1 3 L 0
1 2 4 R 0
2 6 8 L 1
3 7 9 R 1
</code></pre>
<p>I'd like to "multiply" my measurement columns with my key columns to generate a <code>DataFrame</code> that looks like this:</p>
<pre><code> meas1_L meas1_R meas2_L meas2_R
0 1 2 3 4
1 6 7 8 9
</code></pre>
<p>Note it's essentially the inverse of <a href="http://stackoverflow.com/q/39390615/512652">this question</a>.</p>
| 0
|
2016-09-08T17:57:47Z
| 39,397,657
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="nofollow"><code>DataFrame.pivot</code></a>:</p>
<pre><code># Perform the pivot.
df = df.pivot(index='newindex', columns='side').rename_axis(None)
# Format the columns.
df.columns = df.columns.map('_'.join)
</code></pre>
<p>The resulting output:</p>
<pre><code> meas1_L meas1_R meas2_L meas2_R
0 1 2 3 4
1 6 7 8 9
</code></pre>
| 3
|
2016-09-08T18:15:26Z
|
[
"python",
"pandas",
"dataframe"
] |
Creating new columns from unique values across rows in pandas
| 39,397,389
|
<p>I'm trying to use unique values in a pandas column to generate a new set of columns. Here's an example <code>DataFrame</code>:</p>
<pre><code> meas1 meas2 side newindex
0 1 3 L 0
1 2 4 R 0
2 6 8 L 1
3 7 9 R 1
</code></pre>
<p>I'd like to "multiply" my measurement columns with my key columns to generate a <code>DafaFrame</code> that looks like this:</p>
<pre><code> meas1_L meas1_R meas2_L meas2_R
0 1 2 3 4
1 6 7 8 9
</code></pre>
<p>Note it's essentially the inverse of <a href="http://stackoverflow.com/q/39390615/512652">this question</a>.</p>
| 0
|
2016-09-08T17:57:47Z
| 39,397,857
|
<p>Another solution using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.prod.html" rel="nofollow"><code>groupby.prod</code></a>:</p>
<pre><code>df = df.groupby(['side', 'newindex']).prod().unstack(level=0)
df.columns = ['_'.join(c[0::]) for c in df.columns]
meas1_L meas1_R meas2_L meas2_R
newindex
0 1 2 3 4
1 6 7 8 9
</code></pre>
| 1
|
2016-09-08T18:27:52Z
|
[
"python",
"pandas",
"dataframe"
] |
split in python return an excess blank character
| 39,397,392
|
<p>I have a file with some data that I read, split on <code>space</code>, <code>,</code> and <code>\n</code>, and load into a matrix.
But my code returns an excess blank character in my matrix. Can anybody help me find this bug? Thanks.
code:</p>
<pre><code>import re
lines = [re.split('[,\n ]',line) for line in open('lines.txt')]
print lines
</code></pre>
<p>input:</p>
<pre><code>395,0 398,100
398,100 488,196
488,196 544,233
544,233 506,301
506,301 425,344
425,344 336,355
336,355 271,319
271,319 293,264
293,264 328,232
328,232 329,170
329,170 267,175
267,175 228,199
228,199 214,220
214,220 80,268
80,268 0,273
0,183 96,176
96,176 168,92
168,92 252,124
252,124 300,88
300,88 303,40
303,40 309,0
</code></pre>
<p>output (the fifth column is excess) :</p>
<pre><code>[['395', '0', '398', '100', ''], ['398', '100', '488', '196', ''], ['488', '196', '544', '233', ''], ['544', '233', '506', '301', ''], ['506', '301', '425', '344', ''], ['425', '344', '336', '355', ''], ['336', '355', '271', '319', ''], ['271', '319', '293', '264', ''], ['293', '264', '328', '232', ''], ['328', '232', '329', '170', ''], ['329', '170', '267', '175', ''], ['267', '175', '228', '199', ''], ['228', '199', '214', '220', ''], ['214', '220', '80', '268', ''], ['80', '268', '0', '273', ''], ['0', '183', '96', '176', ''], ['96', '176', '168', '92', ''], ['168', '92', '252', '124', ''], ['252', '124', '300', '88', ''], ['300', '88', '303', '40', ''], ['303', '40', '309', '0', '']]
</code></pre>
| 0
|
2016-09-08T17:57:52Z
| 39,397,427
|
<p>lines read from a text file generally have a newline on the end (unless they're the last line in which case they might not). It's pretty common to see that newline stripped off (e.g. using <a href="https://docs.python.org/3/library/stdtypes.html#str.rstrip" rel="nofollow"><code>str.rstrip</code></a>):</p>
<pre><code>import re
lines = [re.split('[,\n ]', line.rstrip('\n')) for line in open('lines.txt')]
print lines
</code></pre>
<hr>
<p>As an aside, it's better practice to use a context manager for managing open files:</p>
<pre><code>with open('lines.txt') as input_file:
lines = [re.split('[,\n ]', line.rstrip('\n')) for line in input_file]
print lines
</code></pre>
| 2
|
2016-09-08T18:00:33Z
|
[
"python",
"regex",
"split"
] |
How to get the needed variables due to the condition?
| 39,397,476
|
<p>I have lists with values like this for example:</p>
<pre><code>values = [value_one, value_two, list_A[], list_B[], list_C[]]
</code></pre>
<p>... which are in a map (all these lists have the same structure as the example above, but with different values!):</p>
<pre><code>{
key_one: valuesA,
key_two: valuesB,
key_three: valuesC,
key_four: valuesD,
key_five: valuesE,
...: ...,
...: ...
}
</code></pre>
<p>I also have this function:</p>
<pre><code>def calculate_process(map):
for key, data in map.iteritems():
value_one, value_two, list_A[], list_B[], list_C[] = data
# do some calculation
# ...
# ...
</code></pre>
<p><strong>My problem:</strong></p>
<p>Depending on the condition, I want to get only certain lists. So the code should be more like this:</p>
<p><strong>Condition 1? Do:</strong></p>
<pre><code>def calculate_process(map):
for key, data in map.iteritems():
value_one, value_two, list_A[], list_B[] = data
# do some calculation
for value_first, value_second in zip(list_A[], list_B[])
result = (value_one * value_two + value_first) * value_second
# more other calculation
# ...
</code></pre>
<p><strong>Condition 2? Do:</strong></p>
<pre><code>def calculate_process(map):
for key, data in map.iteritems():
value_one, value_two, list_B[], list_C[] = data
# do some calculation
for value_first, value_second in zip(list_B[], list_C[])
result = (value_one * value_two + value_first) * value_second
# more other calculation
# ...
</code></pre>
<p><strong>Condition 3? do:</strong></p>
<pre><code>def calculate_process(map):
for key, data in map.iteritems():
value_one, value_two, list_A[], list_C[] = data
# do some calculation
for value_first, value_second in zip(list_A[], list_C[])
result = (value_one * value_two + value_first) * value_second
# more other calculation
# ...
</code></pre>
<p>But I don't want to have three functions that do the same thing (because having one function is much better); the only difference is that I have to use different lists. I know I could just pass a parameter like "<strong>list_to_not_use=list_B</strong>", but I would still have to write (here as an example) three "<strong>if</strong>" conditions to get the certain lists I need.</p>
<p>But imagine, that this list structure..</p>
<pre><code>values = [value_one, value_two, list_A[], list_B[], list_C[]]
</code></pre>
<p>.. would be more like this...:</p>
<pre><code>values = [value_one, value_two, list_A[], list_B[], list_C[], list_D[], list_E[], ..., ...]
</code></pre>
<p>...yes, unknown endless list.</p>
<p>So the solution above (with <strong>list_to_not_use</strong>) is not that good in this situation, because you would have to add more and more new "<strong>if</strong>" conditions.</p>
<p><strong>My question:</strong></p>
<p>Is there a super good "impressive" solution, to solve this problem? Or is this such an easy question that you could solve it with ease, that I never have seen a programming code before?</p>
<p>I hope you understand my problem here and I hope you can help me there.</p>
<p>I also would appreciate some code examples :D</p>
<p>Thanks in advance!</p>
| 0
|
2016-09-08T18:03:44Z
| 39,397,900
|
<p>If you can determine, based on the condition, which lists should be extracted before doing the calculation, I suggest you use the positions of the lists in the <code>data</code> tuple to extract them.</p>
<pre><code>def positions_of_list():
# conditions
return pos1, pos2
def calculate_process(map):
for key, data in map.iteritems():
pos1, pos2 = positions_of_list()
v1 = data[0]
v2 = data[1]
l1 = data[pos1]
l2 = data[pos2]
# do calculate thereafter
</code></pre>
<p>The function <code>positions_of_list</code> returns the positions of the lists in the <code>data</code> tuple.</p>
<p>Thanks</p>
| 0
|
2016-09-08T18:30:05Z
|
[
"python",
"python-2.7"
] |
Can I get a list (name - email) of fans who likes a public page using the facebook SDK and python?
| 39,397,544
|
<p>I was trying to get a list of fans who like a public page.</p>
<p>If that is not possible, a list of people who like comments made on that public page.</p>
<p>This link makes me think that it is in fact possible:</p>
<p><a href="https://www.facebook.com/search/102381354573/likers?ref=about" rel="nofollow">https://www.facebook.com/search/102381354573/likers?ref=about</a></p>
<p>Here's what I tried with Python and Facebook SDK </p>
<pre><code>"""
A simple example script to get all posts on a user's timeline.
Originally created by Mitchell Stewart.
<https://gist.github.com/mylsb/10294040>
"""
import facebook
import requests
def some_action(post, row):
""" Here you might want to do something with each post. E.g. grab the
post's message (post['message']) or the post's picture (post['picture']).
"""
# You'll need an access token here to do anything. You can get a temporary one
# here: https://developers.facebook.com/tools/explorer/
access_token = 'EAATASQOiIDABAC367RRjcCNY8SHWkNaeaZBEByStOCx6arJcMZAsLHvAwBzHmVpzNR9kjaM5G4GsoiEzEkr0YXYoA0rSHdtXXSGMn8RQgrA3ZB2nmBGYQ0rUGFKL6dtAZCEjkfuMFy5hBHLKqDiDs95CiUBcbkKYZCAFA559qiAZDZD'
# Look at Bill Gates's profile for this example by using his Facebook id.
user = 'biligates'
graph = facebook.GraphAPI(access_token)
profile = graph.get_object(user)
likes = graph.get_connections('102381354573', 'likes')
posts = graph.get_connections(profile['id'], 'posts')
while True:
try:
[print_friends(friends=friend) for friend in likes['data']]
likes = requests.get(likes['paging']['next']).json()
except KeyError:
# When there are no more pages (['paging']['next']), break from the
# loop and end the script.
print ('--------------------')
print ('No more data comming')
print ('--------------------')
break
</code></pre>
<p>The result of this script is:</p>
<pre><code>{u'name': u'Washington Redskins', u'id': u'102381354573'}
32237381874 - Kings Dominion
328084609453 - Pierre Thomas
164582060550453 - Papa John's Pizza DMV
112733772094151 - CSN Mid-Atlantic
758770517542315 - Matt Jones
291800575745 - Andre Roberts
546765258765348 - Stephen Paea
487760671369099 - DeSean Jackson
110815543982 - DeAngelo Hall
386180624733248 - Robert Griffin III
1397352627212077 - Dashon Goldson
155219064665 - Pierre Garçon
1557530721156959 - Akeem Davis
159691050732 - Kirk Cousins
1433685776950292 - Redskins Salute
89972093868 - Omni Hotels
525973260837198 - Redskins Team Store
678000172307735 - Family of 3
120167766006 - FanDuel
161882187173357 - Bradenton Redskins Fan Club
668604919890894 - Redskins Facts
188327264572436 - Bob's Discount Furniture
164933623547236 - PrimeSport
352567624896549 - True Health Diagnostics
218158744892220 - Women of Washington Redskins
503110669738004 - Bon Secours Washington Redskins Training Center
267502171241 - NFL Ticket Exchange
174705179277205 - Washington Redskins Cheerleaders
370174424170 - NFL Network
68680511262 - NFL
--------------------
No more data comming
--------------------
</code></pre>
| 0
|
2016-09-08T18:08:06Z
| 39,401,286
|
<p>No, there is no API to get a list of fans. You can only get a list of users who commented or liked something on your Page, but there is no way to get their email. It would be weird anyway, what would you want to do with the email? Without explicit approval of the user, you would not even be allowed to store the email, and any email you send to them would be spam.</p>
<p>You get the email of a user only by authorizing that user with the <code>email</code> permission.</p>
| 2
|
2016-09-08T22:49:11Z
|
[
"python",
"json",
"facebook",
"facebook-graph-api",
"sdk"
] |
Django Admin add edit/create buttons to Parent
| 39,397,545
|
<p>I'm trying to add an "add" and "edit" link to my Django admin for the ForeignKey field category. I already have this on the schedule field; however, that one is handled by djcelery and I cannot figure out how they do it:</p>
<p><a href="http://i.stack.imgur.com/1DtMB.png" rel="nofollow"><img src="http://i.stack.imgur.com/1DtMB.png" alt="enter image description here"></a></p>
<p>My categories field currently looks like this:</p>
<p><a href="http://i.stack.imgur.com/GJ6M6.png" rel="nofollow"><img src="http://i.stack.imgur.com/GJ6M6.png" alt="enter image description here"></a></p>
<p>Both fields are simply added to the admin screen through a foreign key relation.</p>
<p>Is there any default settings for this in Django admin?</p>
| 0
|
2016-09-08T18:08:07Z
| 39,397,731
|
<p>Facepalm! I got the desired result once I added an admin for categories:</p>
<pre><code>admin.site.register(Category)
</code></pre>
<p><a href="http://i.stack.imgur.com/J7dHK.png" rel="nofollow"><img src="http://i.stack.imgur.com/J7dHK.png" alt="enter image description here"></a></p>
<p>However, if anyone knows how to hide the Categories list from the main admin screen, please update my answer.</p>
<p><a href="http://i.stack.imgur.com/Vwy0B.png" rel="nofollow"><img src="http://i.stack.imgur.com/Vwy0B.png" alt="enter image description here"></a></p>
<p>(Would also want to hide 'Categories' from this menu)</p>
| 0
|
2016-09-08T18:20:15Z
|
[
"python",
"django",
"django-admin",
"celery",
"django-celery"
] |
Django Admin add edit/create buttons to Parent
| 39,397,545
|
<p>I'm trying to add an "add" and "edit" link to my Django admin for the ForeignKey field category. I already have this on the schedule field; however, that one is handled by djcelery and I cannot figure out how they do it:</p>
<p><a href="http://i.stack.imgur.com/1DtMB.png" rel="nofollow"><img src="http://i.stack.imgur.com/1DtMB.png" alt="enter image description here"></a></p>
<p>My categories field currently looks like this:</p>
<p><a href="http://i.stack.imgur.com/GJ6M6.png" rel="nofollow"><img src="http://i.stack.imgur.com/GJ6M6.png" alt="enter image description here"></a></p>
<p>Both fields are simply added to the admin screen through a foreign key relation.</p>
<p>Is there any default settings for this in Django admin?</p>
| 0
|
2016-09-08T18:08:07Z
| 39,407,684
|
<p>I cannot comment because of low reputation, so I am adding this as an answer. Returning an empty dict from get_model_perms excludes the model admin from the index page, whilst still allowing you to edit instances directly. <a href="http://stackoverflow.com/questions/2431727/django-admin-hide-a-model/4871511#4871511">Here is the more detailed explanation</a>.</p>
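<p>A minimal sketch of that approach (the <code>CategoryAdmin</code> name is just for illustration):</p>
<pre><code>from django.contrib import admin
from .models import Category

class CategoryAdmin(admin.ModelAdmin):
    def get_model_perms(self, request):
        # An empty dict hides the model from the admin index page,
        # while instances can still be edited via direct links.
        return {}

admin.site.register(Category, CategoryAdmin)
</code></pre>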
| 1
|
2016-09-09T08:59:24Z
|
[
"python",
"django",
"django-admin",
"celery",
"django-celery"
] |
ordered word permutations in python
| 39,397,626
|
<p>So my question is simple, and half of it is already working.
I need help with generating ordered word-permutations. </p>
<p>My code:</p>
<pre><code>from os.path import isfile
from string import printable
def loadRuleSet(fileLocation):
rules = {}
assert isfile(fileLocation)
for x in open(fileLocation).read().split('\n'):
if not len(x) == 0:
data = x.split(':')
if not len(data[0]) == 0 or not len(data[1]) == 0:
rules[data[0]] = data[1]
return rules
class deform:
def __init__(self, ruleSet):
assert type(ruleSet) == dict
self.ruleSet = ruleSet
def walker(self, string):
spot = []
cnt = 0
for x in string:
spot.append((x, cnt))
cnt += 1
return spot
def replace_exact(self, word, position, new):
cnt = 0
newword = ''
for x in word:
if cnt == position:
newword += new
else:
newword += x
cnt+= 1
return newword
def first_iter(self, word):
data = []
pos = self.walker(word)
for x in pos:
if x[0] in self.ruleSet:
for y in self.ruleSet[x[0]]:
data.append(self.replace_exact(word, x[1], y))
return data
print deform({'a':'@A'}).first_iter('abac')
</code></pre>
<p>My current code does half of the job, but I've reached a "writer's block"</p>
<pre><code>>>>deform({'a':'@'}).first_iter('aaa')
['@aa', 'a@a', 'aa@']
</code></pre>
<p>Here's the results from my currently made script. </p>
<p>What the code is supposed to do is take the word and reorder it with the other characters in the replacement. I've successfully made it work with one character, but I need help with generating all the results. For example:</p>
<pre><code>['@aa', 'a@a', 'aa@', '@@a', 'a@@', '@a@']
</code></pre>
| -1
|
2016-09-08T18:13:47Z
| 39,397,648
|
<p>In your case you can use the <code>permutations</code> function, which returns all possible orderings (duplicates are then removed with <code>set</code>).</p>
<pre><code>from itertools import permutations
from operator import itemgetter
perm_one = sorted(set([''.join(x) for x in permutations('@aa')]))
perm_two = sorted(set([''.join(x) for x in permutations('@@a')]), key=itemgetter(1))
print perm_one + perm_two
</code></pre>
<p>I divided it into two collections because they differ in the number of <code>@</code> and <code>a</code> characters.</p>
| 2
|
2016-09-08T18:14:46Z
|
[
"python",
"data-generation"
] |
identify global symbols in lambda expressions
| 39,397,679
|
<p>I need to get the list of names of all symbols which I must have available in order to evaluate or execute a piece of code. I tried to use <code>symtable</code> module but it seems that it does not handle properly lambdas and inner functions (<code>def</code>s inside other code). Consider this:</p>
<pre><code>import symtable
symtable.symtable("x+y", '<string>', 'exec').get_symbols()
</code></pre>
<p>I get this: <code>[<symbol 'y'>, <symbol 'x'>]</code> which is exactly what I expect.</p>
<p>but when I write this:</p>
<pre><code>symtable.symtable("z=lambda x: x+y; z(10)", '<string>', 'exec').get_symbols()
</code></pre>
<p>the result is: <code>[<symbol 'z'>]</code> and I have no info about <code>x</code> (never mind, it is a local variable) nor <code>y</code> (which is global and this is what I need).</p>
<p>But when I try to evaluate this with <code>exec("z=lambda x: x+y; z(10)")</code> the value of <code>y</code> is missing. Is there a way I can identify the names of all symbols which must be supplied to an expression or code so that it can be evaluated/executed?</p>
| 0
|
2016-09-08T18:17:16Z
| 39,398,038
|
<p>Without much ado... I need to look into child symbol tables. Recursion is the tool of choice:</p>
<pre><code>import symtable
def find_global_symbols(table):
symbols = set()
for t in table.get_children():
symbols |= find_global_symbols(t)
symbols |= set(s.get_name() for s in table.get_symbols() if s.is_global())
return symbols
table = symtable.symtable("z=lambda x: x+y; z(10)", '<string>', 'exec')
global_symbols = find_global_symbols(table)
</code></pre>
<p>Which gives me <code>y</code> as the answer. This is what I need. Now I know that I have to supply the value of <code>y</code> to be able to execute the code in question.</p>
<p>UPDATE: actually there is a possible improvement if I take into account the names which are imported in each scope</p>
<pre><code>def find_global_symbols(table, imports):
imports = imports | set(s.get_name() for s in table.get_symbols() if s.is_imported())
symbols = set()
for s in table.get_symbols():
if s.is_global() and s.get_name() not in imports:
symbols.add(s.get_name())
for t in table.get_children():
symbols |= find_global_symbols(t, imports)
return symbols
table = symtable.symtable("z=lambda x: x+y; z(10)", '<string>', 'exec')
global_symbols = find_global_symbols(table, set())
</code></pre>
| 0
|
2016-09-08T18:38:43Z
|
[
"python",
"lambda"
] |
how to call def from another .py in different folder
| 39,397,720
|
<p>I have the following structure:
utils_dir has a generator.py file which has 3 defs.</p>
<p>I have test.py in inline_dir, and I am trying to use the defs from generator.py in test.py.</p>
<p>inline_dir and utils_dir are in different folders.
How can I use the defs?</p>
<p>I tried creating <code>__init__.py</code> and then calling <code>import generator</code> - it didn't work.
I tried <code>from utils import generator</code> - it didn't work either.</p>
<p><code>Dir structure</code></p>
<p><code>
Support_dir
├── dir_A
│   └── dir_aa
│       └── main.py  [Want to use a and b from generator.py]
└── utils
    └── generator.py
        ├── def a
        └── def b
</code></p>
| 2
|
2016-09-08T18:19:34Z
| 39,398,291
|
<p>It sounds like you're trying to execute a .py file in a subdirectory.</p>
<p>Assuming the following directory structure:</p>
<pre><code>.
├── inline
│   ├── __init__.py
│   └── main.py
└── utils
    ├── __init__.py
    └── generator.py
</code></pre>
<p>And your <code>main.py</code> containing a simple import like (the function <code>a()</code> being defined in <code>generator.py</code>):</p>
<pre><code>from utils.generator import a
if __name__ == '__main__':
a()
</code></pre>
<p>And <code>generator.py</code> would look something like this:</p>
<pre><code>def a():
print "hi there"
</code></pre>
<p>You won't be able to run your program using <code>python inline/main.py</code> because this will set the module search path to <code>inline/</code></p>
<p>If you want to execute a file in a subdirectory while importing from your project-level, you could do the following:</p>
<pre><code>PYTHONPATH=. python inline/main.py
</code></pre>
<p><strong>UPDATE:</strong> Added example <code>generator.py</code></p>
| 1
|
2016-09-08T18:55:08Z
|
[
"python"
] |
how to call def from another .py in different folder
| 39,397,720
|
<p>I have the following structure:
utils_dir has a generator.py file which has 3 defs.</p>
<p>I have test.py in inline_dir, and I am trying to use the defs from generator.py in test.py.</p>
<p>inline_dir and utils_dir are in different folders.
How can I use the defs?</p>
<p>I tried creating <code>__init__.py</code> and then calling <code>import generator</code> - it didn't work.
I tried <code>from utils import generator</code> - it didn't work either.</p>
<p><code>Dir structure</code></p>
<p><code>
Support_dir
├── dir_A
│   └── dir_aa
│       └── main.py  [Want to use a and b from generator.py]
└── utils
    └── generator.py
        ├── def a
        └── def b
</code></p>
| 2
|
2016-09-08T18:19:34Z
| 39,398,501
|
<p>This is a kind of Python path problem. When you import, Python searches the current directory and the default system path directories. Since utils_dir is neither your current working directory (when importing, you are working in inline_dir) nor on the default Python search path, the import does not work.</p>
<p>A simple way to solve this is:</p>
<p>a) First make utils_dir a Python package: simply add <code>__init__.py</code> to the directory.</p>
<p>b) Then add the path of the parent folder of utils_dir to the PYTHONPATH environment variable.</p>
<pre><code>export PYTHONPATH=/home/user/parent_of_utils_dir:$PYTHONPATH
</code></pre>
<p>You can add this line into your .bashrc to make it available all the time.</p>
<p>c) In your test.py, import the function:</p>
<pre><code>from utils import generator
</code></pre>
<p>or</p>
<pre><code>import utils.generator
</code></pre>
<p>A more Pythonic development approach is to use setuptools and write a setup.py script, which will solve the dependency problem. Then you can use</p>
<pre><code>python setup.py develop
</code></pre>
<p>to use in develop mode.</p>
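<p>A minimal <code>setup.py</code> sketch for that workflow (the package name is a placeholder):</p>
<pre><code>from setuptools import setup, find_packages

setup(
    name='support_tools',       # placeholder project name
    version='0.1',
    packages=find_packages(),   # picks up utils/ because it has an __init__.py
)
</code></pre>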
<p>Check more python package development guide at <a href="https://packaging.python.org/" rel="nofollow">https://packaging.python.org/</a></p>
<p>Hope this will help you.</p>
| 1
|
2016-09-08T19:09:31Z
|
[
"python"
] |
Is "python2" / "python3" safe on a script's shebang?
| 39,397,745
|
<p>Sometimes I see <code>#!/usr/bin/python2</code> and <code>#!/usr/bin/python3</code> as opposed to simply <code>#!/usr/bin/python</code>. I get the appeal of this approach, you get to explicitly say if you need Python 2 or 3 without doing some weird version checking.</p>
<p>Are these <code>python2</code> and <code>python3</code> standard though? Will they work everywhere? Or is it risky?</p>
<p>I just confirmed I have <code>python2</code> and <code>python3</code> but I am on Cygwin so I wouldn't think this means it's necessarily the same for a lot of others.</p>
| 1
|
2016-09-08T18:21:02Z
| 39,397,764
|
<p>As a single point of reference -- I don't have a <code>python2</code> executable on my system:</p>
<pre><code>$ python2
-bash: python2: command not found
</code></pre>
<p>So I would definitely not consider this one to be portable. Obviously I could still run your script by selecting an executable explicitly:</p>
<pre><code>python2.7 your_script.py
</code></pre>
<p>Or by symlinking <code>python2</code> to <code>python2.7</code>, but the point is that it won't work out of the box for me (and I imagine for a number of other users as well).</p>
| 1
|
2016-09-08T18:22:06Z
|
[
"python",
"scripting",
"portability",
"shebang"
] |
Python list manipulation based on indexing
| 39,397,853
|
<p>I have two lists:</p>
<p>The first list consists of all the titles of various publications whereas the second list consists of all the author names.</p>
<pre><code>list B = ['Moe Terry M 2005 ', 'March James G and Johan P Olsen 2006 ', 'Kitschelt Herbert 2000 ', 'Bates Robert H 1981 ' , .......]
list A = ['"Linkages between Citizens and Politicians in Democratic Polities,"', '"Winners Take All: The Politics of Partial Reform in Postcommunist \n\nTransitions,"', '"Inequality, Social Insurance, and \n\nRedistribution."', '"Majoritarian Electoral Systems and \nConsumer Power: Price-Level Evidence from the OECD Countries."']
</code></pre>
<p>I am running scholar.py as a bash command. The syntax goes like this</p>
<pre><code>scholar = "python scholar.py -c 1 --author " + str(name) + "--phrase " + str(title)
</code></pre>
<p>Now, what I am trying to do is get each title and author in order so that I can use them with scholar.
But I am not able to figure out how I can get the first author name with the first title.</p>
<p>I would have used indexing if the lists were small.</p>
| 0
|
2016-09-08T18:27:34Z
| 39,398,006
|
<p>Is this what you are looking for?</p>
<pre><code>list B = ['Moe Terry M 2005 ', 'March James G and Johan P Olsen 2006 ', 'Kitschelt Herbert 2000 ', 'Bates Robert H 1981 ' , .......]
list A = ['"Linkages between Citizens and Politicians in Democratic Polities,"', '"Winners Take All: The Politics of Partial Reform in Postcommunist \n\nTransitions,"', '"Inequality, Social Insurance, and \n\nRedistribution."', '"Majoritarian Electoral Systems and \nConsumer Power: Price-Level Evidence from the OECD Countries."']
for i,j in zip(B,A):
print i, j #python 2.x
print(i , j) #python3.x
</code></pre>
| 1
|
2016-09-08T18:36:49Z
|
[
"python",
"list"
] |
cannot downloads file using python socket programming
| 39,397,878
|
<p>I want to download a file from this URL (<a href="http://justlearn.16mb.com/a.jpg" rel="nofollow">http://justlearn.16mb.com/a.jpg</a>) using Python sockets only, and I don't know how to do it as I am a novice in Python.</p>
<p>Actually my main goal is to download one half of the file over a WiFi connection and the other half over an Ethernet connection.</p>
<p>Thank you in advance for helping.</p>
<pre><code>import os
import socket
tcpd = 'http://justlearn.16mb.com/a.jpg'
portd = 80
ipd = socket.gethostbyname('http://justlearn.16mb.com/a.jpg')
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((tcpd,portd))
BUFFER_SIZE = 1024
with open('a.jpg', 'wb') as f:
print ('file opened')
while True:
#print('receiving data...')
data = s.recv(1024)
#print('data=%s', (data))
if not data:
f.close()
break
# write data to a file
f.write(data)
print('Successfully get the file')
s.close()
print('connection closed')
</code></pre>
| 1
|
2016-09-08T18:28:49Z
| 39,399,007
|
<p>You might want to try something like this instead. I am unable to test it due to a proxy, but the example should help you in the right direction. Using sockets directly would make this unnecessarily difficult.</p>
<pre><code>#! /usr/bin/env python3
import http.client
def main():
connection = http.client.HTTPConnection('justlearn.16mb.com')
connection.request('GET', '/a.jpg')
response = connection.getresponse()
if response.status != 200:
raise RuntimeError(response.reason)
with open('a.jpg', 'wb') as file:
while not response.closed:
buffer = response.read(1 << 12)
if not buffer:
break
file.write(buffer)
connection.close()
if __name__ == '__main__':
main()
</code></pre>
<p>Here is another example that is shorter and uses the <code>urlopen</code> function from the <code>urllib.request</code> package instead. The code is simpler since the HTTP code is handled in the background instead.</p>
<pre><code>#! /usr/bin/env python3
from urllib.request import urlopen
def main():
with urlopen('http://justlearn.16mb.com/a.jpg') as source, \
open('a.jpg', 'wb') as destination:
while True:
buffer = source.read(1 << 12)
if not buffer:
break
destination.write(buffer)
if __name__ == '__main__':
main()
</code></pre>
| 0
|
2016-09-08T19:42:41Z
|
[
"python",
"sockets",
"wifi",
"ethernet",
"download-manager"
] |
Replace multi line code using python
| 39,397,883
|
<p>I am trying to replace some lines in an HTML file using Python.</p>
<pre><code>#! /usr/local/bin/python
import os,sys,string,filecmp,shutil,stat,pwd,datetime,time,copy,glob,re,getpass,commands
sys.path.insert(0,os.path.join(os.environ['ADM_TOOLS'],'llib'))
import tooldets,CSrcPrj,comnfuncs,COraConnect
patchHtmlName = 'patchmaintenance.html'
f = open (patchHtmlName, "rt")
g = open ('file3.txt', "rt")
h = open ('file4.txt', "rt")
contents = f.read()
contents1 = g.read()
contents2 = h.read()
#" ".join(contents1.split())
newJSCode = contents.replace(contents1, contents2)
fp2 = open(patchHtmlName, "w")
fp2.write(newJSCode)
fp2.close()
</code></pre>
<p>While the code for File3 is:</p>
<pre><code>}
function fnIsValidEmailId(str)
</code></pre>
<p>And the code for File4 is:</p>
<pre><code>document.getElementById("beforePage").style.display = "none";
document.getElementById("afterPage").style.display = "block";
}
function fnIsValidEmailId(str)
</code></pre>
<p>I want to replace the code of file3 with the code of file4.</p>
<p>If I replace the content of file3 with a single line instead of multiple lines, the code works fine and replaces it.</p>
<p>While executing the script, it does not give any error but does not show the desired output.</p>
<p>Please help</p>
| 1
|
2016-09-08T18:29:02Z
| 39,399,451
|
<p>Try this one</p>
<pre><code>import re
...
...
# flags must be passed as a keyword argument; the 4th positional argument of re.sub is count
newJSCode = re.sub(r'.*%s' % contents1, contents2, contents, flags=re.DOTALL)
</code></pre>
<p>This will replace contents1 with contents2 in contents. I'm not sure whether I understood which one is to be replaced, but anyway if you want the opposite just swap contents1 and contents2.</p>
| 0
|
2016-09-08T20:14:50Z
|
[
"python"
] |
Running a Regex loop in a Pandas Dataframe
| 39,397,897
|
<p>I currently have a date column that has some issues. I have attempted to fix the problem but cannot come to a conclusion.</p>
<p>Here is the data:</p>
<pre><code># Import data
df_views = pd.read_excel('PageViews.xlsx')
# Check data types
df_views.dtypes
Out[57]:
Date object
Customer ID int64
dtype: object
</code></pre>
<p>The date column is not in a 'datetime' data format as expected. Further inspection yields:</p>
<pre><code>df_views.ix[:5]
Date Customer ID
0 01/25/2016 104064596300
1 02/28/2015 102077474472
2 11/17/2016 106430081724
3 02/24/2016 107770391692
4 10/05/2016 106523680888
5 02/24/2016 107057691592
</code></pre>
<p>I quickly check which rows does not follow the proper format xx/xx/xxxx</p>
<pre><code>print (df_views[df_views["Date"].str.len() != 10])
Date Customer ID
189513 12/14/ 106285770688
189514 10/28/ 107520462840
189515 11/01/ 102969804360
189516 11/10/ 102106417100
189517 02/16/ 107810168068
189518 10/25/ 102096164504
189519 02/08/ 107391760644
189520 02/29/ 107353558928
189521 10/24/ 107209142140
189522 12/20/ 107875461336
189523 12/23/ 107736375428
189524 11/12/ 106561080372
189525 01/27/ 102676548120
189526 11/19/ 107733043896
189527 12/31/ 107774452412
189528 01/21/ 102610956040
189529 01/09/ 108052836888
189530 02/21/ 106380330112
189531 02/02/ 107844459772
189532 12/12/ 102006641640
189533 12/16/ 106604647688
189534 11/14/ 102383102504
</code></pre>
<p>I have attempted to create a for loop but cannot figure out how to approach my loop.</p>
<p>Important note: I know that the time period for all observations is between September 2015 through February 2016.</p>
<p>So if the month is 09/10/11/12 - then I can add "2015" to the date,
otherwise if the month is 01/02, I can add "2016".</p>
<pre><code>for row in df_views["Date"]:
if len(row) != 10:
if row.str.contains("^09|10|11|12\/"):
row.str.cat("2015")
elif row.str.contains("^01|02\/"):
row.str.cat("2016")
else:
continue
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-87-684e121dd62d> in <module>()
5 for row in df_views["Date"]:
6 if len(row) != 10:
----> 7 if row.str.contains("^09|10|11|12\/"):
8 row.str.cat("2015")
9 elif row.str.contains("^01|02\/"):
AttributeError: 'str' object has no attribute 'str'
</code></pre>
| 2
|
2016-09-08T18:29:58Z
| 39,398,702
|
<p>As <a href="http://stackoverflow.com/questions/39397897/running-a-regex-loop-in-a-pandas-dataframe/39398702#comment66122084_39397897">@BrenBam has already written in the comment</a> - try to avoid using loops. Pandas gives us tons of vectorized (read fast and efficient) methods:</p>
<pre><code>In [67]: df
Out[67]:
Date Customer ID
0 12/14/2001 106285770688
1 10/28/2000 107520462840
2 11/01/ 102969804360
3 11/10/ 102106417100
4 02/16/ 107810168068
5 10/25/ 102096164504
6 02/08/ 107391760644
7 02/29/ 107353558928
8 10/24/ 107209142140
9 12/20/ 107875461336
10 12/23/ 107736375428
11 11/12/ 106561080372
12 01/27/ 102676548120
13 11/19/ 107733043896
14 12/31/ 107774452412
15 01/21/ 102610956040
16 01/09/ 108052836888
17 02/21/ 106380330112
18 02/02/ 107844459772
19 12/12/ 102006641640
20 12/16/ 106604647688
21 11/14/ 102383102504
In [68]: df.ix[df.Date.str.match(r'^(?:09|10|11|12)\/\d{2}\/$', as_indexer=True), 'Date'] += '2015'
In [69]: df.ix[df.Date.str.match(r'^(?:01|02)\/\d{2}\/$', as_indexer=True), 'Date'] += '2016'
In [70]: df
Out[70]:
Date Customer ID
0 12/14/2001 106285770688
1 10/28/2000 107520462840
2 11/01/2015 102969804360
3 11/10/2015 102106417100
4 02/16/2016 107810168068
5 10/25/2015 102096164504
6 02/08/2016 107391760644
7 02/29/2016 107353558928
8 10/24/2015 107209142140
9 12/20/2015 107875461336
10 12/23/2015 107736375428
11 11/12/2015 106561080372
12 01/27/2016 102676548120
13 11/19/2015 107733043896
14 12/31/2015 107774452412
15 01/21/2016 102610956040
16 01/09/2016 108052836888
17 02/21/2016 106380330112
18 02/02/2016 107844459772
19 12/12/2015 102006641640
20 12/16/2015 106604647688
21 11/14/2015 102383102504
</code></pre>
| 1
|
2016-09-08T19:22:07Z
|
[
"python",
"regex",
"loops",
"pandas",
"numpy"
] |
Using third party Email system for Django Password Reset
| 39,398,013
|
<p>I have a Django app hosted on Google Compute Engine (which doesn't allow ports 25/465/587 for sending emails). So, I integrated a third party email system into the Django app. The third party email system works fine on Google Compute Engine too.</p>
<p>But when I use Django Reset Password, that email is still getting sent over by the Django Default Email System. Can this Django Default Email system for password Reset be changed ?</p>
<p>If yes, Can someone please explain how it can be changed ? </p>
<p>Thanks,</p>
| 1
|
2016-09-08T18:37:16Z
| 39,398,101
|
<p>There is something like <a href="https://docs.djangoproject.com/el/1.10/topics/email/#email-backends" rel="nofollow">Email backends</a></p>
<pre><code># settings.py
EMAIL_BACKEND = 'project.backends.mail.CustomEmailBackend'
# project/backends/mail.py
from django.core.mail.backends.base import BaseEmailBackend
class CustomEmailBackend(BaseEmailBackend):
def send_messages(self, messages):
for message in messages:
# do the stuff with each message
print(message.subject, message.body, message.to, message.cc)
</code></pre>
<p>Remember that the dotted path in the <code>EMAIL_BACKEND</code> variable in <code>settings.py</code> must match the location of your <code>CustomEmailBackend</code> class in your project folder tree.</p>
<p>Each <code>message</code> has the same <a href="https://docs.djangoproject.com/el/1.10/topics/email/#emailmessage-objects" rel="nofollow">properties</a>.
Of course <code>send_mail</code> from <code>django.core.mail</code> will work as usual, but it will use your <code>CustomEmailBackend</code> for sending emails.</p>
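<p>A quick usage sketch (the addresses are placeholders):</p>
<pre><code>from django.core.mail import send_mail

# Routed through CustomEmailBackend because of the EMAIL_BACKEND setting.
send_mail('Subject', 'Message body', 'from@example.com', ['to@example.com'])
</code></pre>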
| 1
|
2016-09-08T18:43:09Z
|
[
"python",
"django",
"email",
"passwords",
"google-compute-engine"
] |
Referring to outer scope from python class
| 39,398,019
|
<p>I have a (simplified) module, something like this:</p>
<pre><code>import tkinter as tk
__outerVar = {<dict stuff>}
class Editor(tk.Frame):
...
def _insideFunction(self):
for p in __outerVar.keys():
<do stuff>
</code></pre>
<p>I'm getting a <code>NameError: name '_Editor__outerVar' is not defined</code> on the use of <code>__outerVar</code> when I try to instantiate Editor. I tried putting "<code>global __outerVar</code>" at the top of <code>insideFunction</code>, even though I'm not writing to <code>__outerVar</code>, same error.</p>
<p>I'm sure I'm just misunderstanding some python scope rule here. Help?</p>
<p>py 3.5</p>
| 1
|
2016-09-08T18:37:46Z
| 39,398,068
|
<p>You're seeing name mangling in effect. From the <a href="https://docs.python.org/2/tutorial/classes.html#tut-private" rel="nofollow">documentation</a>:</p>
<blockquote>
<p>Any identifier of the form <code>__spam</code> (at least two leading underscores, at most one trailing underscore) is textually replaced with <code>_classname__spam</code>, where classname is the current class name with leading underscore(s) stripped. <strong>This mangling is done without regard to the syntactic position of the identifier, as long as it occurs within the definition of a class.</strong></p>
</blockquote>
<p>As far as I can think, the only way around this is to rename the <code>__outerVar</code> in the global scope to something that doesn't start with double underscores.</p>
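<p>For example, a minimal sketch of that rename:</p>
<pre><code>import tkinter as tk

_outer_var = {}  # a single leading underscore avoids name mangling

class Editor(tk.Frame):
    def _insideFunction(self):
        for p in _outer_var.keys():
            pass  # do stuff
</code></pre>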
| 3
|
2016-09-08T18:41:00Z
|
[
"python",
"class",
"python-3.x",
"module",
"scope"
] |
Referring to outer scope from python class
| 39,398,019
|
<p>I have a (simplified) module, something like this:</p>
<pre><code>import tkinter as tk
__outerVar = {<dict stuff>}
class Editor(tk.Frame):
...
def _insideFunction(self):
for p in __outerVar.keys():
<do stuff>
</code></pre>
<p>I'm getting a <code>NameError: name '_Editor__outerVar' is not defined</code> on the use of <code>__outerVar</code> when I try to instantiate Editor. I tried putting "<code>global __outerVar</code>" at the top of <code>insideFunction</code>, even though I'm not writing to <code>__outerVar</code>, same error.</p>
<p>I'm sure I'm just misunderstanding some python scope rule here. Help?</p>
<p>py 3.5</p>
| 1
|
2016-09-08T18:37:46Z
| 39,398,072
|
<p>Python replaces any names preceded by a double underscore <code>__</code> in order to simulate 'private attributes'. In essence <code>__name</code> becomes <code>_classname__name</code>. This, called name mangling, happens only within classes as documented <a href="https://docs.python.org/3/tutorial/classes.html#private-variables" rel="nofollow">in the docs</a>:</p>
<blockquote>
<p>This mangling is done without regard to the syntactic position of the identifier, as long as it occurs within the definition of a class.</p>
</blockquote>
<p>The solution is don't use <code>__name</code> names, using something like <code>_name</code> or just <code>name</code> suffices.</p>
<p>As an addendum, in <a href="https://www.python.org/dev/peps/pep-0008/#method-names-and-instance-variables" rel="nofollow"><code>PEP 8 -- Method Names and Instance Variables</code></a> it states:</p>
<blockquote>
<p>Python mangles these names with the class name: if class <code>Foo</code> has an attribute named <code>__a</code> , it cannot be accessed by <code>Foo.__a</code> . (An insistent user could still gain access by calling <code>Foo._Foo__a</code>.) <em>Generally, double leading underscores should be used only to avoid name conflicts with attributes in classes designed to be subclassed.</em></p>
</blockquote>
<p>So, unless you're designing for cases were subclass name clashing might be issue, don't use double leading underscores.</p>
| 3
|
2016-09-08T18:41:12Z
|
[
"python",
"class",
"python-3.x",
"module",
"scope"
] |
Django BooleanField as a dropdown
| 39,398,031
|
<p>Is there a way to make a Django BooleanField a drop down in a form?</p>
<p>Right now it renders as a radio button. Is it possible to have a dropdown with options: 'Yes', 'No' ?</p>
<p>Currently my form definition for this field is:</p>
<pre><code>attending = forms.BooleanField(required=True)
</code></pre>
| 1
|
2016-09-08T18:38:18Z
| 39,399,015
|
<p>I believe a solution that can solve your problem is something along the lines of this:</p>
<pre><code>TRUE_FALSE_CHOICES = (
    (True, "Yes"),
    (False, "No")
)

boolfield = forms.ChoiceField(choices=TRUE_FALSE_CHOICES, label="Some Label",
                              initial='', widget=forms.Select(), required=True)
</code></pre>
<p>Might not be exact but it should get you pointed in the right direction.</p>
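<p>For completeness, a hedged sketch of using that field inside a form (the <code>RSVPForm</code> name is just for illustration; note that <code>ChoiceField</code> returns the selected value as a string):</p>
<pre><code>from django import forms

TRUE_FALSE_CHOICES = (
    (True, "Yes"),
    (False, "No"),
)

class RSVPForm(forms.Form):
    attending = forms.ChoiceField(choices=TRUE_FALSE_CHOICES,
                                  widget=forms.Select(), required=True)
</code></pre>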
| 3
|
2016-09-08T19:43:11Z
|
[
"python",
"django"
] |
what is correct way of type hint a function that return only a specific set of values?
| 39,398,138
|
<p>I have a function that can only return <code>a</code>, <code>b</code> or <code>c</code>, all of which are of type <code>T</code>. I want to make this fact part of its signature because of the special meaning they carry in the context of the function. How do I do that?</p>
<p>currently I use this</p>
<pre><code>def fun(...) -> "a or b or c":
#briefly explain the meaning of a, b and c in its docstring
</code></pre>
<p>is that the correct one?</p>
<p>I know that I can do this</p>
<pre><code>def fun(...) -> T:
#briefly explain the meaning of a, b and c in its docstring
</code></pre>
<p>but as I said I want to express in the signature that the function only returns those specific values</p>
| 1
|
2016-09-08T18:45:31Z
| 39,398,193
|
<p>If all are of the same exact type just <em>add that as the return type</em>:</p>
<pre><code>def func(...) -> T: # or int or whatever else
</code></pre>
<blockquote>
<p>I want to express in the signature that the function only return those specific values</p>
</blockquote>
<p>Type hints don't specify a name or a value; they just specify a <em>type</em>. A type checker tries to <em>act</em> on the <em>type</em> that's provided.</p>
<p>If you're just doing this for documentation purposes, add <code>'a or b or c</code>' to it if you want; users will understand it but type checkers <em>won't</em> and they definitely won't act on it.</p>
| 0
|
2016-09-08T18:48:56Z
|
[
"python",
"python-3.x",
"type-hinting"
] |
what is correct way of type hint a function that return only a specific set of values?
| 39,398,138
|
<p>I have a function that can only return <code>a</code>, <code>b</code> or <code>c</code>, all of which are of type <code>T</code>. I want to make this fact part of its signature because of the special meaning they carry in the context of the function. How do I do that?</p>
<p>currently I use this</p>
<pre><code>def fun(...) -> "a or b or c":
#briefly explain the meaning of a, b and c in its docstring
</code></pre>
<p>is that the correct one?</p>
<p>I know that I can do this</p>
<pre><code>def fun(...) -> T:
#briefly explain the meaning of a, b and c in its docstring
</code></pre>
<p>but as I said I want to express in the signature that the function only returns those specific values</p>
| 1
|
2016-09-08T18:45:31Z
| 39,398,431
|
<p>You can't specify that your function returns only a subset of a type's values using type hinting alone. As the name implies, type hinting is all about <em>types</em> not values.</p>
<p>However, you can create a new <code>enum.Enum</code> subtype that only has the values you're going to return and use it in the function. Then you can type hint that you're returning the enum type.</p>
<pre><code>import enum
class cmp_results(enum.IntEnum):
less = -1
equal = 0
greater = 1
def my_cmp_function(x, y) -> cmp_results:
if x < y: return cmp_results.less
elif x == y: return cmp_results.equal
else: return cmp_results.greater
</code></pre>
<p>This may be overkill. Just hinting <code>int</code> as the return type (and documenting the specific values) is probably good enough.</p>
| 5
|
2016-09-08T19:05:17Z
|
[
"python",
"python-3.x",
"type-hinting"
] |
Expert system (used for database access) vs. ORM
| 39,398,181
|
<p>I have recently discovered <a href="http://pyke.sourceforge.net" rel="nofollow">PyKE</a>, and noticed that one of the given examples of a potential use (actually, the use for which it was originally built) was to compile SELECT statements to query a database, and map the result to a dictionary. The author emphasizes that this is not an ORM.</p>
<p>From this, I have two sub-questions:</p>
<ol>
<li>How is that usage of PyKE (or another expert system capable of executing or emitting code or returning structured data for use by a calling program) not effectively an ORM?</li>
<li>In what circumstances would it be preferable to use an expert system (such as PyKE) to query a database, as opposed to a purpose-built ORM? I assume there must be some, given that PyKE was made for the purpose.</li>
</ol>
| 0
|
2016-09-08T18:47:40Z
| 39,511,064
|
<h3>To your first question</h3>
<p>An ORM is a layer between your logic and data that maps one to the other. Relational DBs often don't store data the way your objects use that data, so ORMs aim to abstract away the mental gymnastics needed to write the SQL to transform the data from one representation to another. (imho, they tend not to do it well or efficiently)</p>
<p>For PyKE specifically, <a href="http://pyke.sourceforge.net/examples.html#sqlgen" rel="nofollow">one aspect</a> offers SELECT compilation, but not apparently CRUDing (CRUD: Create, Read, Update, Delete), which is the most important thing an ORM does.</p>
<h3>On the second question</h3>
<p>PyKE would be used when you might need a knowledge engine (natural language data search, for instance). Then PyKE may know how to extract the data from the database in the most efficient way to complete its own goals.</p>
<p>An ORM, on the other hand, would be given the data that represents some object in your program, such as a purchase order from a website, and it would insert the object when the PO is created, get it out of the DB when a session is restored, alter the data as the user adds and removes items to the PO, and remove the PO if the purchase is completed or aborted (CRUD). </p>
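<p>For instance, a hedged sketch of that CRUD cycle in a Django-style ORM (the <code>PurchaseOrder</code> model and its fields are hypothetical):</p>
<pre><code># Create
po = PurchaseOrder.objects.create(customer=customer, total=0)
# Read
po = PurchaseOrder.objects.get(pk=po.pk)
# Update
po.total = 99.95
po.save()
# Delete
po.delete()
</code></pre>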
<hr>
<p><strong>tldr</strong></p>
<p>PyKE is a specialized library that abstracts away some of the need to write SQL to use the library, but doesn't offer a complete set of DB interaction, because that's not what it was built for.</p>
<p>ORMs do offer this interaction with DBs, and attempt to make it easier to use data in a very dynamic way; though in my experience to use / not to use an ORM over hand crafted SQL sparks some pretty fierce debates.</p>
| 2
|
2016-09-15T12:29:22Z
|
[
"python",
"orm",
"language-agnostic",
"expert-system",
"pyke"
] |
What is the equivalent function in Biopython for BioPerl's Bio::DB::Fasta?
| 39,398,183
|
<p>I'm translating a Perl code to a Python code using BioPython.</p>
<p>I got something like:</p>
<pre><code>my $db = Bio::DB::Fasta->new($path,$options)
</code></pre>
<p>and I'm looking for a similar function in Biopython. Is there anything like this?</p>
| 0
|
2016-09-08T18:47:53Z
| 39,400,180
|
<p>You can find the IO for FASTA files at <a href="http://biopython.org/DIST/docs/api/Bio.SeqIO-module.html" rel="nofollow">http://biopython.org/DIST/docs/api/Bio.SeqIO-module.html</a></p>
<p>About the indexing, I think Biopython doesn't handle '.fai' files like Bio::DB:Fasta. You can have a dictionary (like a perl hash) using the <code>SeqIO.index()</code> method. This is a quick example as shown in the docs:</p>
<pre><code>from Bio import SeqIO
record_dict = SeqIO.index("fasta_file.fas", "fasta")
print(record_dict["ID of the fasta sequence"])
</code></pre>
<p><code>SeqIO.index</code> creates a read-only, dict-like object that allows random access to each sequence without loading the whole file into memory. Read the docs to see the limitations of that dict: <a href="http://biopython.org/DIST/docs/api/Bio.File._IndexedSeqFileDict-class.html" rel="nofollow">http://biopython.org/DIST/docs/api/Bio.File._IndexedSeqFileDict-class.html</a></p>
| 1
|
2016-09-08T21:04:19Z
|
[
"python",
"perl",
"biopython",
"bioperl"
] |
Create unique MultiIndex from Non-unique Index Python Pandas
| 39,398,251
|
<p>I have a pandas DataFrame with a non-unique index:</p>
<pre><code>index = [1,1,1,1,2,2,2,3]
df = pd.DataFrame(data = {'col1': [1,3,7,6,2,4,3,4]}, index=index)
df
Out[12]:
col1
1 1
1 3
1 7
1 6
2 2
2 4
2 3
3 4
</code></pre>
<p>I'd like to turn this into unique MultiIndex and preserve order, like this:</p>
<pre><code> col1
Ind2
1 0 1
1 3
2 7
3 6
2 0 2
1 4
2 3
3 0 4
</code></pre>
<p>I would imagine pandas would have a function for something like this, but I haven't found anything.</p>
| 2
|
2016-09-08T18:53:06Z
| 39,398,424
|
<p>You can do a <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow"><code>groupby.cumcount</code></a> on the index, and then append it as a new level to the index using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a>:</p>
<pre><code>df = df.set_index(df.groupby(level=0).cumcount(), append=True)
</code></pre>
<p>The resulting output:</p>
<pre><code> col1
1 0 1
1 3
2 7
3 6
2 0 2
1 4
2 3
3 0 4
</code></pre>
| 2
|
2016-09-08T19:04:25Z
|
[
"python",
"pandas"
] |
How to remove/hide total sum in tree view in odoo?
| 39,398,260
|
<p>I have an Odoo tree view in which some warehouse stock values are displayed in columns, and it calculates the total sum of these values at the bottom. I want to remove the total sum at the bottom of the tree view; how can I do that? You can see my tree view code below; I applied sum="false" and total="false" but it is not working. Does anybody have an idea how to remove the total sum in a tree view in Odoo? I am also attaching an image so you can easily understand my question. Thanks in advance...<a href="http://i.stack.imgur.com/M0Gza.png" rel="nofollow"><img src="http://i.stack.imgur.com/M0Gza.png" alt="enter image description here"></a></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><tree string="Warehouse Product" editable="bottom" create="false" edit="false" delete="false" sum="false">
<field name="warehouse_id"/>
<field name="qty" sum="Quantity"/>
<field name="incoming_qty" sum="Incoming"/>
<field name="outgoing_qty" sum="Total Confirmed"/>
<field name="reserved_event" sum="Events"/>
<field name="reserved_sale" sum="Total Reserved"/>
<field name="backorder_qty" sum="Backordes"/>
<field name="actual_qty" sum="Actual Qty"/>
<field name="warehouse_inventory" sum="Total Warehouse Qty"/>
</tree></code></pre>
</div>
</div>
</p>
<p>It's done. I just removed sum="" from every field and it removed the bottom total-sum line. Here is my updated code:</p>
<pre><code><tree string="Warehouse Product" editable="bottom" create="false" edit="false" delete="false">
<field name="warehouse_id"/>
<field name="qty"/>
<field name="incoming_qty"/>
<field name="outgoing_qty"/>
<field name="reserved_event"/>
<field name="reserved_sale"/>
<field name="backorder_qty"/>
<field name="actual_qty"/>
<field name="warehouse_inventory"/>
</tree>
</code></pre>
| 1
|
2016-09-08T18:53:23Z
| 39,398,430
|
<p>If you just want to go to Settings/User Technical -> Interface -> Views you can edit the view like this. Just remove the sum tag entirely from the rows you wish not to be totalled.</p>
<pre><code><tree string="Warehouse Product" editable="bottom" create="false" edit="false" delete="false">
<field name="warehouse_id"/>
<field name="qty" sum="Quantity"/>
<field name="incoming_qty"/>
<field name="outgoing_qty"/>
<field name="reserved_event"/>
<field name="reserved_sale"/>
<field name="backorder_qty"/>
<field name="actual_qty"/>
<field name="warehouse_inventory"/>
</code></pre>
<p></p>
| 3
|
2016-09-08T19:05:07Z
|
[
"python",
"xml",
"openerp",
"views",
"odoo-8"
] |
How to remove/hide total sum in tree view in odoo?
| 39,398,260
|
<p>I have an Odoo tree view in which some warehouse stock values are displayed in columns, and it calculates the total sum of these values at the bottom. I want to remove the total sum at the bottom of the tree view; how can I do that? You can see my tree view code below; I applied sum="false" and total="false" but it is not working. Does anybody have an idea how to remove the total sum in a tree view in Odoo? I am also attaching an image so you can easily understand my question. Thanks in advance...<a href="http://i.stack.imgur.com/M0Gza.png" rel="nofollow"><img src="http://i.stack.imgur.com/M0Gza.png" alt="enter image description here"></a></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><tree string="Warehouse Product" editable="bottom" create="false" edit="false" delete="false" sum="false">
<field name="warehouse_id"/>
<field name="qty" sum="Quantity"/>
<field name="incoming_qty" sum="Incoming"/>
<field name="outgoing_qty" sum="Total Confirmed"/>
<field name="reserved_event" sum="Events"/>
<field name="reserved_sale" sum="Total Reserved"/>
<field name="backorder_qty" sum="Backordes"/>
<field name="actual_qty" sum="Actual Qty"/>
<field name="warehouse_inventory" sum="Total Warehouse Qty"/>
</tree></code></pre>
</div>
</div>
</p>
<p>It's done. I just removed sum="" from every field and it removed the bottom total-sum line. Here is my updated code:</p>
<pre><code><tree string="Warehouse Product" editable="bottom" create="false" edit="false" delete="false">
<field name="warehouse_id"/>
<field name="qty"/>
<field name="incoming_qty"/>
<field name="outgoing_qty"/>
<field name="reserved_event"/>
<field name="reserved_sale"/>
<field name="backorder_qty"/>
<field name="actual_qty"/>
<field name="warehouse_inventory"/>
</tree>
</code></pre>
| 1
|
2016-09-08T18:53:23Z
| 39,398,447
|
<p>Just get rid of the sum attribute(s)</p>
<pre><code><tree string="Warehouse Product" editable="bottom" create="false" edit="false" delete="false">
<field name="warehouse_id"/>
<field name="qty" />
<field name="incoming_qty" />
<field name="outgoing_qty" />
<field name="reserved_event" />
<field name="reserved_sale" />
<field name="backorder_qty" />
<field name="actual_qty" />
<field name="warehouse_inventory" />
</tree>
</code></pre>
| 1
|
2016-09-08T19:05:47Z
|
[
"python",
"xml",
"openerp",
"views",
"odoo-8"
] |
How to remove/hide total sum in tree view in odoo?
| 39,398,260
|
<p>I have an Odoo tree view in which some warehouse stock values are displayed in columns, and it calculates the total sum of these values at the bottom. I want to remove the total sum at the bottom of the tree view; how can I do that? You can see my tree view code below; I applied sum="false" and total="false" but it is not working. Does anybody have an idea how to remove the total sum in a tree view in Odoo? I am also attaching an image so you can easily understand my question. Thanks in advance...<a href="http://i.stack.imgur.com/M0Gza.png" rel="nofollow"><img src="http://i.stack.imgur.com/M0Gza.png" alt="enter image description here"></a></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><tree string="Warehouse Product" editable="bottom" create="false" edit="false" delete="false" sum="false">
<field name="warehouse_id"/>
<field name="qty" sum="Quantity"/>
<field name="incoming_qty" sum="Incoming"/>
<field name="outgoing_qty" sum="Total Confirmed"/>
<field name="reserved_event" sum="Events"/>
<field name="reserved_sale" sum="Total Reserved"/>
<field name="backorder_qty" sum="Backordes"/>
<field name="actual_qty" sum="Actual Qty"/>
<field name="warehouse_inventory" sum="Total Warehouse Qty"/>
</tree></code></pre>
</div>
</div>
</p>
<p>It's done. I just removed sum="" from every field and it removed the bottom total-sum line. Here is my updated code:</p>
<pre><code><tree string="Warehouse Product" editable="bottom" create="false" edit="false" delete="false">
<field name="warehouse_id"/>
<field name="qty"/>
<field name="incoming_qty"/>
<field name="outgoing_qty"/>
<field name="reserved_event"/>
<field name="reserved_sale"/>
<field name="backorder_qty"/>
<field name="actual_qty"/>
<field name="warehouse_inventory"/>
</tree>
</code></pre>
| 1
|
2016-09-08T18:53:23Z
| 39,398,489
|
<p>It's done. I just removed sum="" from every field and it removed the bottom total-sum line. Here is my updated code:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><tree string="Warehouse Product" editable="bottom" create="false" edit="false" delete="false">
<field name="warehouse_id"/>
<field name="qty"/>
<field name="incoming_qty"/>
<field name="outgoing_qty"/>
<field name="reserved_event"/>
<field name="reserved_sale"/>
<field name="backorder_qty"/>
<field name="actual_qty"/>
<field name="warehouse_inventory"/>
</tree></code></pre>
</div>
</div>
</p>
| 0
|
2016-09-08T19:08:50Z
|
[
"python",
"xml",
"openerp",
"views",
"odoo-8"
] |
How to resolve memory issue of pandas while reading big csv files
| 39,398,283
|
<p>I have a 100GB csv file with millions of rows. I need to read, say, 10,000 rows at a time in pandas dataframe and write that to the SQL server in chunks. </p>
<p>I used chunksize as well as iterator as suggested on <a href="http://pandas-docs.github.io/pandas-docs-travis/io.html#iterating-through-files-chunk-by-chunk" rel="nofollow">http://pandas-docs.github.io/pandas-docs-travis/io.html#iterating-through-files-chunk-by-chunk</a>, and have gone through many similar questions, but I am still getting the out of memory error. </p>
<p>Can you suggest a code to read very big csv files in pandas dataframe iteratively?</p>
| 2
|
2016-09-08T18:54:36Z
| 39,399,220
|
<p>Demo:</p>
<pre><code>for chunk in pd.read_csv(filename, chunksize=10**5):
chunk.to_sql('table_name', conn, if_exists='append')
</code></pre>
<p>where <code>conn</code> is a SQLAlchemy engine (created by <code>sqlalchemy.create_engine(...)</code>)</p>
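<p>For completeness, a minimal sketch of the whole loop with such an engine (the connection string, file name and table name below are placeholders, not part of the original answer):</p>
<pre><code>import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string - adjust the driver, credentials and host
# to match your SQL Server setup
engine = create_engine('mssql+pyodbc://user:password@my_dsn')

for chunk in pd.read_csv('huge_file.csv', chunksize=10**5):
    # each chunk is an ordinary DataFrame holding at most 10**5 rows
    chunk.to_sql('table_name', engine, if_exists='append', index=False)
</code></pre>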
| 1
|
2016-09-08T19:57:44Z
|
[
"python",
"csv",
"pandas",
"dataframe",
"iterator"
] |
How to write python code that finishes a read from stdin even though the read buffer isn't full
| 39,398,297
|
<p>Python question:</p>
<pre><code>$ python -V
Python 2.4.3
</code></pre>
<p>Searched for the answer to this and maybe didn't know what search to use.</p>
<p>Basically the question is simple.
I have perl code like this and it works perfect.</p>
<pre><code>while ($count)
{
$count = sysread(STDIN,$data,2000);
if ($count)
{
#print "Read $data\n";
showHex($data);
$s->mcast_send($data,"$MulticastIP:$MulticastPort");
}
}
</code></pre>
<p>What this does, which is exactly what I want, is that sysread, even though given a max read of 2000, will return early if it has read at least one byte and there is a delay, in time, in the incoming data. </p>
<p><strong>Important: My incoming data is in raw-binary, not newline delimited text.</strong></p>
<p>I need this in python because the IO::Socket::Multicast in perl doesn't come standard in the perl libraries by default.</p>
<p>But I can't figure out how to do this in python. <strong>Python annoyingly waits until all 2000 bytes are read before returning.</strong></p>
<p>My python code: go.py (for this example I only read 10 bytes at a time)</p>
<pre><code>#!/bin/env python
import sys
while True:
buf = sys.stdin.read(10)
if len(buf) > 0:
print "Read %d" % (len(buf))
</code></pre>
<p>Proof of the issue: go.sh</p>
<pre><code>while (( 1 ))
do
echo Sending ab 1>&2
echo ab
sleep 1
done | ./go.py
Sending ab
Sending ab
Sending ab
Sending ab
Read 10
Sending ab
Sending ab
Sending ab
Read 10
</code></pre>
<p>I know from experience that a read() on any file descriptor returns early if it has read at least one byte and comes to a point where there is no more data immediately available to read, like from a pipe from the stdout output of another program.</p>
<pre><code>READ(2) Linux Programmer's Manual READ(2)
NAME
read - read from a file descriptor
SYNOPSIS
#include <unistd.h>
ssize_t read(int fd, void *buf, size_t count);
DESCRIPTION
read() attempts to read up to count bytes from file descriptor fd into the buffer starting at buf.
If count is zero, read() returns zero and has no other results. If count is greater than SSIZE_MAX, the result is unspecified.
RETURN VALUE
On success, the number of bytes read is returned (zero indicates end of file), and the file position is advanced by this number. It is not an error if this number is smaller than the number of bytes
requested; this may happen for example because fewer bytes are actually available right now (maybe because we were close to end-of-file, or because we are reading from a pipe, or from a terminal), or
because read() was interrupted by a signal. On error, -1 is returned, and errno is set appropriately. In this case it is left unspecified whether the file position (if any) changes.
</code></pre>
<p>So how do I accomplish this?</p>
| 1
|
2016-09-08T18:55:31Z
| 39,400,253
|
<p>Non-blocking mode can be enabled for stdin like shown below. I added some sleep there to prevent CPU hogging.</p>
<pre><code>#!/usr/bin/env python
import os
import sys
import time
import fcntl
# Set stdin to non-blocking mode
flags = fcntl.fcntl(sys.stdin.fileno(), fcntl.F_GETFL)
fcntl.fcntl(sys.stdin.fileno(), fcntl.F_SETFL, flags | os.O_NONBLOCK)
while True:
try:
buf = sys.stdin.read(10)
except IOError:
buf = []
if len(buf) > 0:
print "Read %d" % (len(buf))
time.sleep(0.01)
</code></pre>
<p>This will give output like</p>
<pre><code>Sending ab
Read 3
Sending ab
Read 3
Sending ab
Read 3
Sending ab
Read 3
Sending ab
Read 3
Sending ab
Read 3
Sending ab
Read 3
...
</code></pre>
| 0
|
2016-09-08T21:10:12Z
|
[
"python",
"stdin"
] |
How to write python code that finishes a read from stdin even though the read buffer isn't full
| 39,398,297
|
<p>Python question:</p>
<pre><code>$ python -V
Python 2.4.3
</code></pre>
<p>Searched for the answer to this and maybe didn't know what search to use.</p>
<p>Basically the question is simple.
I have perl code like this and it works perfect.</p>
<pre><code>while ($count)
{
$count = sysread(STDIN,$data,2000);
if ($count)
{
#print "Read $data\n";
showHex($data);
$s->mcast_send($data,"$MulticastIP:$MulticastPort");
}
}
</code></pre>
<p>What this does, which is exactly what I want, is that sysread, even though given a max read of 2000, will return early if it has read at least one byte and there is a delay, in time, in the incoming data. </p>
<p><strong>Important: My incoming data is in raw-binary, not newline delimited text.</strong></p>
<p>I need this in python because the IO::Socket::Multicast in perl doesn't come standard in the perl libraries by default.</p>
<p>But I can't figure out how to do this in python. <strong>Python annoyingly waits until all 2000 bytes are read before returning.</strong></p>
<p>My python code: go.py (for this example I only read 10 bytes at a time)</p>
<pre><code>#!/bin/env python
import sys
while True:
buf = sys.stdin.read(10)
if len(buf) > 0:
print "Read %d" % (len(buf))
</code></pre>
<p>Proof of the issue: go.sh</p>
<pre><code>while (( 1 ))
do
echo Sending ab 1>&2
echo ab
sleep 1
done | ./go.py
Sending ab
Sending ab
Sending ab
Sending ab
Read 10
Sending ab
Sending ab
Sending ab
Read 10
</code></pre>
<p>I know from experience that a read() on any file descriptor returns early if it has read at least one byte and comes to a point where there is no more data immediately available to read, like from a pipe from the stdout output of another program.</p>
<pre><code>READ(2) Linux Programmer's Manual READ(2)
NAME
read - read from a file descriptor
SYNOPSIS
#include <unistd.h>
ssize_t read(int fd, void *buf, size_t count);
DESCRIPTION
read() attempts to read up to count bytes from file descriptor fd into the buffer starting at buf.
If count is zero, read() returns zero and has no other results. If count is greater than SSIZE_MAX, the result is unspecified.
RETURN VALUE
On success, the number of bytes read is returned (zero indicates end of file), and the file position is advanced by this number. It is not an error if this number is smaller than the number of bytes
requested; this may happen for example because fewer bytes are actually available right now (maybe because we were close to end-of-file, or because we are reading from a pipe, or from a terminal), or
because read() was interrupted by a signal. On error, -1 is returned, and errno is set appropriately. In this case it is left unspecified whether the file position (if any) changes.
</code></pre>
<p>So how do I accomplish this?</p>
| 1
|
2016-09-08T18:55:31Z
| 39,459,742
|
<p>I think I figured out the best way, unless someone can figure out how to make read behave like C's read and Perl's sysread.
I make stdin non-blocking like above, but use a select to wait until data is available. Combining both, I get what I want: wait until data is available, then read the available data without blocking. Yea!!!!!!</p>
<pre><code>import os
import sys
import select
import fcntl
# Make stdin non blocking
flags = fcntl.fcntl(sys.stdin.fileno(), fcntl.F_GETFL)
fcntl.fcntl(sys.stdin.fileno(), fcntl.F_SETFL, flags | os.O_NONBLOCK)
while True:
rlist,a,a = select.select( [sys.stdin.fileno()], [], [] )
for f in rlist:
try:
buf = sys.stdin.read(1000)
except IOError:
print "IOError"
buf = []
if len(buf) > 0:
print "Read %d" % (len(buf))
else:
print "No Data Available"
</code></pre>
<p>Output: Note.. I never see 'IOError' or 'No Data Available'</p>
<pre><code>Sending ab
Read 3
Sending ab
Read 3
Sending ab
Read 3
Sending ab
Read 3
</code></pre>
| 1
|
2016-09-12T22:33:16Z
|
[
"python",
"stdin"
] |
How to write python code that finishes a read from stdin even though the read buffer isn't full
| 39,398,297
|
<p>Python question:</p>
<pre><code>$ python -V
Python 2.4.3
</code></pre>
<p>Searched for the answer to this and maybe didn't know what search to use.</p>
<p>Basically the question is simple.
I have perl code like this and it works perfect.</p>
<pre><code>while ($count)
{
$count = sysread(STDIN,$data,2000);
if ($count)
{
#print "Read $data\n";
showHex($data);
$s->mcast_send($data,"$MulticastIP:$MulticastPort");
}
}
</code></pre>
<p>What this does, which is exactly what I want, is that sysread, even though given a max read of 2000, will return early if it has read at least one byte and there is a delay, in time, in the incoming data. </p>
<p><strong>Important: My incoming data is in raw-binary, not newline delimited text.</strong></p>
<p>I need this in python because the IO::Socket::Multicast in perl doesn't come standard in the perl libraries by default.</p>
<p>But I can't figure out how to do this in python. <strong>Python annoyingly waits until all 2000 bytes are read before returning.</strong></p>
<p>My python code: go.py (for this example I only read 10 bytes at a time)</p>
<pre><code>#!/bin/env python
import sys
while True:
buf = sys.stdin.read(10)
if len(buf) > 0:
print "Read %d" % (len(buf))
</code></pre>
<p>Proof of the issue: go.sh</p>
<pre><code>while (( 1 ))
do
echo Sending ab 1>&2
echo ab
sleep 1
done | ./go.py
Sending ab
Sending ab
Sending ab
Sending ab
Read 10
Sending ab
Sending ab
Sending ab
Read 10
</code></pre>
<p>I know from experience that a read() on any file descriptor returns early if it has read at least one byte and comes to a point where there is no more data immediately available to read, like from a pipe from the stdout output of another program.</p>
<pre><code>READ(2) Linux Programmer's Manual READ(2)
NAME
read - read from a file descriptor
SYNOPSIS
#include <unistd.h>
ssize_t read(int fd, void *buf, size_t count);
DESCRIPTION
read() attempts to read up to count bytes from file descriptor fd into the buffer starting at buf.
If count is zero, read() returns zero and has no other results. If count is greater than SSIZE_MAX, the result is unspecified.
RETURN VALUE
On success, the number of bytes read is returned (zero indicates end of file), and the file position is advanced by this number. It is not an error if this number is smaller than the number of bytes
requested; this may happen for example because fewer bytes are actually available right now (maybe because we were close to end-of-file, or because we are reading from a pipe, or from a terminal), or
because read() was interrupted by a signal. On error, -1 is returned, and errno is set appropriately. In this case it is left unspecified whether the file position (if any) changes.
</code></pre>
<p>So how do I accomplish this?</p>
| 1
|
2016-09-08T18:55:31Z
| 39,461,317
|
<p>In the event you are able to use Python 3 instead:</p>
<pre><code>import sys

while True:
    buf = sys.stdin.buffer.raw.read(10)
    if buf:
        print("Read", len(buf))
</code></pre>
<p>Then:</p>
<pre><code>while (( 1 ))
do
echo Sending ab 1>&2
echo -n ab
sleep 1
done | ./go.py
</code></pre>
<p>gives:</p>
<pre><code>Sending ab
Read 2
Sending ab
Read 2
Sending ab
Read 2
...
</code></pre>
| 0
|
2016-09-13T02:17:47Z
|
[
"python",
"stdin"
] |
How does asyncio.sleep work with negative values?
| 39,398,312
|
<p>I decided to implement sleep sort (<a href="https://rosettacode.org/wiki/Sorting_algorithms/Sleep_sort" rel="nofollow">https://rosettacode.org/wiki/Sorting_algorithms/Sleep_sort</a>) using Python's <code>asyncio</code> when I made a strange discovery: it works with negative values (and returns immediately with 0)!</p>
<p>Here is the code (you can run it here <a href="https://repl.it/DYTZ" rel="nofollow">https://repl.it/DYTZ</a>):</p>
<pre><code>import asyncio
import random
async def sleepy(value):
return await asyncio.sleep(value, result=value)
async def main(input_values):
result = []
for sleeper in asyncio.as_completed(map(sleepy, input_values)):
result.append(await sleeper)
print(result)
if __name__ == '__main__':
loop = asyncio.get_event_loop()
input_values = list(range(-5, 6))
random.shuffle(input_values)
loop.run_until_complete(main(input_values))
</code></pre>
<p>The code takes 5 seconds to execute, as expected, but the result is always <code>[0, -5, -4, -3, -2, -1, 1, 2, 3, 4, 5]</code>. I can understand 0 returning immediately, but how are the negative values coming back in the right order?</p>
| 4
|
2016-09-08T18:56:37Z
| 39,399,052
|
<p>If you take a look at the asyncio source, <code>sleep</code> <a href="https://github.com/python/cpython/blob/ce83a8c892ff17dc5eaba2420854d82589b269cd/Lib/asyncio/tasks.py#L505-L507" rel="nofollow">special cases 0</a> and returns immediately.</p>
<pre><code>if delay == 0:
yield
return result
</code></pre>
<p>If you continue through the source, you'll see that any other value gets passed through to the event loop's <code>call_later</code> method. Looking at how <code>call_later</code> is implemented for the default loop (<code>BaseEventLoop</code>), you'll see that <code>call_later</code> <a href="https://github.com/python/cpython/blob/ce83a8c892ff17dc5eaba2420854d82589b269cd/Lib/asyncio/base_events.py#L463" rel="nofollow">passes a time to <code>call_at</code></a>.</p>
<pre><code>self.call_at(self.time() + delay, callback, *args)
</code></pre>
<p>The reason the values are returned in order is that the absolute times created from negative delays occur before those created from positive delays.</p>
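<p>For intuition, here is a toy sketch of that scheduling arithmetic (simplified, not the actual asyncio code): each delay is converted to an absolute "when", and callbacks run in "when" order, so negative delays sort to the front.</p>
<pre><code>import heapq

now = 100.0                        # stand-in for loop.time()
delays = [3, -5, 1, -2]

# call_later effectively schedules each callback at now + delay
scheduled = [(now + d, d) for d in delays]
heapq.heapify(scheduled)           # the loop keeps scheduled handles in a heap

while scheduled:
    when, delay = heapq.heappop(scheduled)
    print(delay)                   # prints -5, -2, 1, 3 (a zero delay never reaches the scheduler at all)
</code></pre>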
| 4
|
2016-09-08T19:45:52Z
|
[
"python",
"sorting",
"sleep",
"python-asyncio"
] |
How does asyncio.sleep work with negative values?
| 39,398,312
|
<p>I decided to implement sleep sort (<a href="https://rosettacode.org/wiki/Sorting_algorithms/Sleep_sort" rel="nofollow">https://rosettacode.org/wiki/Sorting_algorithms/Sleep_sort</a>) using Python's <code>asyncio</code> when I made a strange discovery: it works with negative values (and returns immediately with 0)!</p>
<p>Here is the code (you can run it here <a href="https://repl.it/DYTZ" rel="nofollow">https://repl.it/DYTZ</a>):</p>
<pre><code>import asyncio
import random
async def sleepy(value):
return await asyncio.sleep(value, result=value)
async def main(input_values):
result = []
for sleeper in asyncio.as_completed(map(sleepy, input_values)):
result.append(await sleeper)
print(result)
if __name__ == '__main__':
loop = asyncio.get_event_loop()
input_values = list(range(-5, 6))
random.shuffle(input_values)
loop.run_until_complete(main(input_values))
</code></pre>
<p>The code takes 5 seconds to execute, as expected, but the result is always <code>[0, -5, -4, -3, -2, -1, 1, 2, 3, 4, 5]</code>. I can understand 0 returning immediately, but how are the negative values coming back in the right order?</p>
| 4
|
2016-09-08T18:56:37Z
| 39,399,607
|
<p>Well, looking at the <a href="https://hg.python.org/cpython/file/3.5/Lib/asyncio/tasks.py#l503" rel="nofollow">source</a>:</p>
<ul>
<li><code>delay == 0</code> is special-cased to return immediately, it doesn't even try to sleep.</li>
<li>Non-zero delay calls <code>events.get_event_loop()</code>. Since there are no calls to <code>events.set_event_loop_policy(policy)</code> in <code>asyncio.tasks</code>, it would seem to fall back on the default unless it's already been set somewhere else, and <a href="https://hg.python.org/cpython/file/3.5/Lib/asyncio/events.py#l610" rel="nofollow">the default is <code>asyncio.DefaultEventLoopPolicy</code></a>.</li>
<li>This is not defined in <code>events.py</code>, because it's different on <a href="https://hg.python.org/cpython/file/3.5/Lib/asyncio/windows_events.py" rel="nofollow">Windows</a> from on <a href="https://hg.python.org/cpython/file/3.5/Lib/asyncio/unix_events.py" rel="nofollow">UNIX</a>.</li>
<li>Either way, <code>sleep</code> calls <code>loop.create_future()</code>. That's defined a few inheritances back, over in <a href="https://hg.python.org/cpython/file/3.5/Lib/asyncio/base_events.py#l221" rel="nofollow"><code>base_events.BaseEventLoop</code></a>. It's just a simple call to the <code>Future()</code> constructor, no significant logic.</li>
<li><p>From the instance of <code>Future</code> it delegates back to the loop, as follows:</p>
<pre><code>future._loop.call_later(delay,
futures._set_result_unless_cancelled,
future, result)
</code></pre></li>
<li>That one is also in <code>BaseEventLoop</code>, and still doesn't directly handle the <code>delay</code> number: it calls <code>self.call_at</code>, adding the current time to the delay.</li>
<li><code>call_at</code> schedules and returns an <code>events.TimerHandle</code>, and the callback is to tell the <code>Future</code> it's done. The return value is only relevant if the task is to be cancelled, which it is automatically at the end for cleanup. The scheduling is the important bit.</li>
<li><code>_scheduled</code> is sorted via <code>heapq</code> - everything goes on there in sorted order, and timers sort by their <code>_when</code>. This is key.</li>
<li>Every time it checks, it strips out all cancelled scheduled things, then runs all remaining scheduled callbacks, in order, until it hits one that's not ready.</li>
</ul>
<p><strong>TL;DR:</strong></p>
<p>Sleeping with <code>asyncio</code> for a negative duration schedules tasks to be "ready" in the past. This means that they go to the top of the list of scheduled tasks, and are run as soon as the event loop checks. Effectively, 0 comes first because it doesn't even schedule, but everything else registers to the scheduler as "running late" and is handled immediately in order of how late it is.</p>
| 2
|
2016-09-08T20:25:02Z
|
[
"python",
"sorting",
"sleep",
"python-asyncio"
] |
Using Pycharm virtualenv with preexisting files
| 39,398,318
|
<p>I was sent a bunch of Python files that have various custom dependencies inside nested folders. I used to run the main file from Terminal by first navigating to the main folder, then running <code>python main.py</code>. This worked until I needed to update some modules and ran into permissions problems.</p>
<p>So I downloaded Pycharm and I'm trying to use a virtualenv. I'm stuck though: do I create a new Pycharm project?</p>
<p>Under the project interpreter, I made a new virtualenv with no modules, but when I do <code>pip list</code> in the command window that's below, it lists all my modules.</p>
<p>How can I "import" my existing Python files, put them in a clean virtualenv, and install the modules I need?</p>
| 2
|
2016-09-08T18:56:47Z
| 39,409,962
|
<p>In PyCharm, do File -> Open and point at the directory. It will turn that directory into a "project" (meaning, it will create a .idea subdirectory). Depending on how you named your virtualenv, it will likely detect the virtualenv and assign it as the project's interpreter.</p>
| 1
|
2016-09-09T10:54:19Z
|
[
"python",
"pycharm",
"virtualenv"
] |
How Would I Go About Making My Python Scoring System Work?
| 39,398,378
|
<p>I've been learning through an online course and I was trying to come up with ideas for things I could create to "test" myself as it were so I came up with a rock paper scissors game. It was working well so I decided to try and add a way of keeping track of your score vs the computer. Didn't go so well.</p>
<p>Here's what I have:</p>
<pre><code>from random import randint
ai_score = 0
user_score = 0
def newgame():
print('New Game')
try:
while(1):
ai_guess = str(randint(1,3))
print('\n1) Rock \n2) Paper \n3) Scissors')
user_guess = input("Select An Option: ")
if(user_guess == '1'):
print('\nYou Selected Rock')
elif(user_guess == '2'):
print('\nYou Selected Paper')
elif(user_guess == '3'):
print('\nYou Selected Scissors')
else:
print('%s is not an option' % user_guess)
if(user_guess == ai_guess):
print('Draw - Please Try Again')
elif (user_guess == '1' and ai_guess == '2'):
print("AI Selected Paper")
print("Paper Beats Rock")
print("AI Wins!")
ai_score += 1
break
elif (user_guess == '1' and ai_guess == '3'):
print("AI Selected Scissors")
print("Rock Beats Scissors")
print("You Win!")
user_score += 1
break
elif (user_guess == '2' and ai_guess == '1'):
print("AI Selected Rock")
print("Paper Beats Rock")
print("You Win!")
user_score += 1
break
elif (user_guess == '2' and ai_guess == '3'):
print("AI Selected Scissors")
print("Scissors Beats Paper")
print("AI Wins!")
ai_score += 1
break
elif (user_guess == '3' and ai_guess == '1'):
print("AI Selected Rock")
print("Rock Beats Scissors")
print("AI Wins!")
ai_score += 1
break
elif (user_guess == '3' and ai_guess == '2'):
print("AI Selected Paper")
print("Scissors Beats Paper")
print("You Win!")
user_score += 1
break
else:
pass
break
except KeyboardInterrupt:
print("\nKeyboard Interrupt - Exiting...")
exit()
#1 = Rock, 2 = Paper, 3 = Scissors
def main():
while(1):
print("\n1) New Game \n2) View Score \n3) Exit")
try:
option = input("Select An Option: ")
if option == '1':
newgame()
if option == '2':
print("\nScores")
print("Your Score: " + str(user_score))
print("AI Score: " + str(ai_score))
elif option == '3':
print('\nExiting...')
break
else:
print('%s is not an option' % option)
except KeyboardInterrupt:
print("\nKeyboard Interrupt - Exiting...")
exit()
main()
</code></pre>
<p>I read somewhere that global variables can work but are generally frowned upon. Not sure why but then I can't say they're =0 so couldn't get that to work. Putting the ai_score and user_score in the newgame() doesn't work because it sets it to 0 every time you re run. Any help would be much appreciated.</p>
<p>As a quick extra note, the second</p>
<pre><code>else:
print('%s is not an option' % option)
</code></pre>
<p>in main() always seems to execute and always says "1 is not an option" and I have no idea why it does that. I would assume something to do with while loops but I need those to keep it running so an explanation of why and how to fix would be great. At the end of the day, I'm just here to learn more.</p>
| 1
|
2016-09-08T19:01:36Z
| 39,398,484
|
<p>At least one problem is this: your main has if...if...elif...else. The second if probably needs to be an elif. Tip: When you have a flow-of-control problem, put print statements inside each control branch, printing out the control variable and everything else that might possibly be relevant. This tells you which branch is being taken -- in this case, which branches, plural.</p>
<p>You don't say exactly what the problem was with keeping score, but I imagine it was an exception along the lines of variable referenced before assignment. If so, you should put "global ai_score" somewhere at the top of the function. What's going on is that Python can, but doesn't like to, recognize variables outside a function that are being used inside the function. You have to push a little. Consider:</p>
<pre><code>>>> bleem = 0
>>> def incrbleem():
... bleem += 1
...
>>> incrbleem()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in incrbleem
UnboundLocalError: local variable 'bleem' referenced before assignment
>>> def incrbleem():
... global bleem
... bleem += 1
...
>>> bleem
0
>>> incrbleem()
>>> bleem
1
</code></pre>
<p>By the way, your code isn't bad at all, for a newbie. I've seen much, much worse! For what it's worth, I don't think global variables are bad for a small, throw-away program like this. Once you have two programmers, or two threads, or two months between work sessions on the program, globals can definitely cause problems.</p>
| 0
|
2016-09-08T19:08:33Z
|
[
"python"
] |
How Would I Go About Making My Python Scoring System Work?
| 39,398,378
|
<p>I've been learning through an online course and I was trying to come up with ideas for things I could create to "test" myself as it were so I came up with a rock paper scissors game. It was working well so I decided to try and add a way of keeping track of your score vs the computer. Didn't go so well.</p>
<p>Here's what I have:</p>
<pre><code>from random import randint
ai_score = 0
user_score = 0
def newgame():
print('New Game')
try:
while(1):
ai_guess = str(randint(1,3))
print('\n1) Rock \n2) Paper \n3) Scissors')
user_guess = input("Select An Option: ")
if(user_guess == '1'):
print('\nYou Selected Rock')
elif(user_guess == '2'):
print('\nYou Selected Paper')
elif(user_guess == '3'):
print('\nYou Selected Scissors')
else:
print('%s is not an option' % user_guess)
if(user_guess == ai_guess):
print('Draw - Please Try Again')
elif (user_guess == '1' and ai_guess == '2'):
print("AI Selected Paper")
print("Paper Beats Rock")
print("AI Wins!")
ai_score += 1
break
elif (user_guess == '1' and ai_guess == '3'):
print("AI Selected Scissors")
print("Rock Beats Scissors")
print("You Win!")
user_score += 1
break
elif (user_guess == '2' and ai_guess == '1'):
print("AI Selected Rock")
print("Paper Beats Rock")
print("You Win!")
user_score += 1
break
elif (user_guess == '2' and ai_guess == '3'):
print("AI Selected Scissors")
print("Scissors Beats Paper")
print("AI Wins!")
ai_score += 1
break
elif (user_guess == '3' and ai_guess == '1'):
print("AI Selected Rock")
print("Rock Beats Scissors")
print("AI Wins!")
ai_score += 1
break
elif (user_guess == '3' and ai_guess == '2'):
print("AI Selected Paper")
print("Scissors Beats Paper")
print("You Win!")
user_score += 1
break
else:
pass
break
except KeyboardInterrupt:
print("\nKeyboard Interrupt - Exiting...")
exit()
#1 = Rock, 2 = Paper, 3 = Scissors
def main():
while(1):
print("\n1) New Game \n2) View Score \n3) Exit")
try:
option = input("Select An Option: ")
if option == '1':
newgame()
if option == '2':
print("\nScores")
print("Your Score: " + str(user_score))
print("AI Score: " + str(ai_score))
elif option == '3':
print('\nExiting...')
break
else:
print('%s is not an option' % option)
except KeyboardInterrupt:
print("\nKeyboard Interrupt - Exiting...")
exit()
main()
</code></pre>
<p>I read somewhere that global variables can work but are generally frowned upon. Not sure why but then I can't say they're =0 so couldn't get that to work. Putting the ai_score and user_score in the newgame() doesn't work because it sets it to 0 every time you re run. Any help would be much appreciated.</p>
<p>As a quick extra note, the second</p>
<pre><code>else:
print('%s is not an option' % option)
</code></pre>
<p>in main() always seems to execute and always says "1 is not an option" and I have no idea why it does that. I would assume something to do with while loops but I need those to keep it running so an explanation of why and how to fix would be great. At the end of the day, I'm just here to learn more.</p>
| 1
|
2016-09-08T19:01:36Z
| 39,398,597
|
<pre><code>from random import randint
class newgame():
ai_score = 0
user_score = 0
def __init__(self):
self.ai_score = 0
self.user_score = 0
def playgame(self):
print('New Game')
try:
while(1):
ai_guess = str(randint(1,3))
print('\n1) Rock \n2) Paper \n3) Scissors')
user_guess = input("Select An Option: ")
if(user_guess == '1'):
print('\nYou Selected Rock')
elif(user_guess == '2'):
print('\nYou Selected Paper')
elif(user_guess == '3'):
print('\nYou Selected Scissors')
else:
print('%s is not an option' % user_guess)
if(user_guess == ai_guess):
print('Draw - Please Try Again')
elif (user_guess == '1' and ai_guess == '2'):
print("AI Selected Paper")
print("Paper Beats Rock")
print("AI Wins!")
self.ai_score += 1
break
elif (user_guess == '1' and ai_guess == '3'):
print("AI Selected Scissors")
print("Rock Beats Scissors")
print("You Win!")
self.user_score += 1
break
elif (user_guess == '2' and ai_guess == '1'):
print("AI Selected Rock")
print("Paper Beats Rock")
print("You Win!")
self.user_score += 1
break
elif (user_guess == '2' and ai_guess == '3'):
print("AI Selected Scissors")
print("Scissors Beats Paper")
print("AI Wins!")
self.ai_score += 1
break
elif (user_guess == '3' and ai_guess == '1'):
print("AI Selected Rock")
print("Rock Beats Scissors")
print("AI Wins!")
self.ai_score += 1
break
elif (user_guess == '3' and ai_guess == '2'):
print("AI Selected Paper")
print("Scissors Beats Paper")
print("You Win!")
self.user_score += 1
break
else:
pass
break
except KeyboardInterrupt:
print("\nKeyboard Interrupt - Exiting...")
exit()
#1 = Rock, 2 = Paper, 3 = Scissors
def main():
game_object = newgame()
while(1):
print("\n1) New Game \n2) View Score \n3) Exit")
try:
option = input("Select An Option: ")
if option == '1':
game_object.playgame()
elif option == '2':
print("\nScores")
print("Your Score: " + str(game_object.user_score))
print("AI Score: " + str(game_object.ai_score))
elif option == '3':
print('\nExiting...')
break
else:
print('%s is not an option' % option)
except KeyboardInterrupt:
print("\nKeyboard Interrupt - Exiting...")
exit()
main()
</code></pre>
<p>Classes are wonderful. <code>__init__</code> is the constructor for this class: it creates the instance of the class and sets the variables to what you want. <code>game_object = newgame()</code> creates the class instance and stores it in game_object. To get an instance variable of game_object, we use <code>game_object.ai_score</code>. Since you made a class instance, its variables are still in scope on the object you made, even outside the function that changed them. Generally, if I need to use a variable outside of a function and am tempted to use a global, I make a class instead. There are some cases where you wouldn't want this, but personally I haven't come across one. Also, you might want to look into what the comments were saying about using a dictionary for the options. Any other questions? </p>
<p>Edit: </p>
<p>To answer your new question about <code>print('%s is not an option' % option)</code> always running: it happens because in your code you had <code>if option == '1':</code> and then <code>if option == '2':</code>. You want the option-2 check to be an <code>elif</code>; I fixed it in my code. If statements work in blocks: since you started a new <code>if</code>, the <code>else</code> never looked at the first <code>if</code> to decide whether the option was valid. The first check was out of its scope, in a sense. Your code was basically asking two separate questions: is option equal to 1? and, separately, is it equal to 2, 3, or anything else? </p>
| 2
|
2016-09-08T19:15:23Z
|
[
"python"
] |
How Would I Go About Making My Python Scoring System Work?
| 39,398,378
|
<p>I've been learning through an online course and I was trying to come up with ideas for things I could create to "test" myself as it were so I came up with a rock paper scissors game. It was working well so I decided to try and add a way of keeping track of your score vs the computer. Didn't go so well.</p>
<p>Here's what I have:</p>
<pre><code>from random import randint
ai_score = 0
user_score = 0
def newgame():
print('New Game')
try:
while(1):
ai_guess = str(randint(1,3))
print('\n1) Rock \n2) Paper \n3) Scissors')
user_guess = input("Select An Option: ")
if(user_guess == '1'):
print('\nYou Selected Rock')
elif(user_guess == '2'):
print('\nYou Selected Paper')
elif(user_guess == '3'):
print('\nYou Selected Scissors')
else:
print('%s is not an option' % user_guess)
if(user_guess == ai_guess):
print('Draw - Please Try Again')
elif (user_guess == '1' and ai_guess == '2'):
print("AI Selected Paper")
print("Paper Beats Rock")
print("AI Wins!")
ai_score += 1
break
elif (user_guess == '1' and ai_guess == '3'):
print("AI Selected Scissors")
print("Rock Beats Scissors")
print("You Win!")
user_score += 1
break
elif (user_guess == '2' and ai_guess == '1'):
print("AI Selected Rock")
print("Paper Beats Rock")
print("You Win!")
user_score += 1
break
elif (user_guess == '2' and ai_guess == '3'):
print("AI Selected Scissors")
print("Scissors Beats Paper")
print("AI Wins!")
ai_score += 1
break
elif (user_guess == '3' and ai_guess == '1'):
print("AI Selected Rock")
print("Rock Beats Scissors")
print("AI Wins!")
ai_score += 1
break
elif (user_guess == '3' and ai_guess == '2'):
print("AI Selected Paper")
print("Scissors Beats Paper")
print("You Win!")
user_score += 1
break
else:
pass
break
except KeyboardInterrupt:
print("\nKeyboard Interrupt - Exiting...")
exit()
#1 = Rock, 2 = Paper, 3 = Scissors
def main():
while(1):
print("\n1) New Game \n2) View Score \n3) Exit")
try:
option = input("Select An Option: ")
if option == '1':
newgame()
if option == '2':
print("\nScores")
print("Your Score: " + str(user_score))
print("AI Score: " + str(ai_score))
elif option == '3':
print('\nExiting...')
break
else:
print('%s is not an option' % option)
except KeyboardInterrupt:
print("\nKeyboard Interrupt - Exiting...")
exit()
main()
</code></pre>
<p>I read somewhere that global variables can work but are generally frowned upon. Not sure why but then I can't say they're =0 so couldn't get that to work. Putting the ai_score and user_score in the newgame() doesn't work because it sets it to 0 every time you re run. Any help would be much appreciated.</p>
<p>As a quick extra note, the second</p>
<pre><code>else:
print('%s is not an option' % option)
</code></pre>
<p>in main() always seems to execute and always says "1 is not an option" and I have no idea why it does that. I would assume something to do with while loops but I need those to keep it running so an explanation of why and how to fix would be great. At the end of the day, I'm just here to learn more.</p>
| 1
|
2016-09-08T19:01:36Z
| 39,398,654
|
<p>It seems that the variable <code>option</code> is not '1', right? Well, that's because in Python 2 the function <code>input</code> does not return a character string, but an evaluated value such as an <code>integer</code>. You can see this by adding a little trace in this program.</p>
<pre><code>print (option, type (option))
</code></pre>
<p>after a line where the variable option has been set.</p>
<p>This will tell you what type the variable option is; in this case it is an integer. So first, you need to replace the comparisons against strings (like <code>if option == '1':</code>) with comparisons against integers, that is: <code>if option == 1:</code>. </p>
<p>As for the second question: variables declared or assigned inside a function exist for the scope of that function only. If you need to assign, inside a function, to a variable that has outer scope, it should be declared inside the function as <code>global</code> (even if globals are "frowned upon", and there are good reasons for this). At the beginning of <code>def newgame():</code> you need to declare your global variables again: <code>global ai_score, user_score</code>. You can also use classes to get familiar with object oriented programming and write nicer code. There is another error left in this program, but I'm sure you'll find it.</p>
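<p>For illustration, a minimal sketch of that <code>global</code> approach applied to the game (only the relevant lines, not the full program):</p>
<pre><code>ai_score = 0
user_score = 0

def newgame():
    # tell Python these names refer to the module-level counters,
    # so the += below updates them instead of creating new locals
    global ai_score, user_score
    # ... game logic ...
    ai_score += 1      # e.g. the AI won this round
</code></pre>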
| 1
|
2016-09-08T19:18:54Z
|
[
"python"
] |
Find group of strings that are anagrams
| 39,398,444
|
<p>This question refers to <a href="http://www.lintcode.com/en/problem/anagrams/" rel="nofollow">this problem on lintcode</a>. I have a working solution, but it takes too long for the huge testcase. I am wondering how can it be improved? Maybe I can decrease the number of comparisons I make in the outer loop.</p>
<pre><code>class Solution:
# @param strs: A list of strings
# @return: A list of strings
def anagrams(self, strs):
# write your code here
ret=set()
for i in range(0,len(strs)):
for j in range(i+1,len(strs)):
if i in ret and j in ret:
continue
if Solution.isanagram(strs[i],strs[j]):
ret.add(i)
ret.add(j)
return [strs[i] for i in list(ret)]
@staticmethod
def isanagram(s, t):
if len(s)!=len(t):
return False
chars={}
for i in s:
if i in chars:
chars[i]+=1
else:
chars[i]=1
for i in t:
if i not in chars:
return False
else:
chars[i]-=1
if chars[i]<0:
return False
for i in chars:
if chars[i]!=0:
return False
return True
</code></pre>
<p><strong>Update:</strong> Just to add, not looking for built-in pythonic solutions such as using <code>Counter</code> which are already optimized. Have added Mike's suggestions, but still exceeding time-limit.</p>
| 4
|
2016-09-08T19:05:43Z
| 39,398,607
|
<p>Why not this? </p>
<pre><code>str1 = "cafe"
str2 = "face"
def isanagram(s1,s2):
return all(sorted(list(str1)) == sorted(list(str2)))
if isanagram(str1, str2):
print "Woo"
</code></pre>
| 0
|
2016-09-08T19:15:46Z
|
[
"python",
"string",
"anagram"
] |
Find group of strings that are anagrams
| 39,398,444
|
<p>This question refers to <a href="http://www.lintcode.com/en/problem/anagrams/" rel="nofollow">this problem on lintcode</a>. I have a working solution, but it takes too long for the huge testcase. I am wondering how can it be improved? Maybe I can decrease the number of comparisons I make in the outer loop.</p>
<pre><code>class Solution:
# @param strs: A list of strings
# @return: A list of strings
def anagrams(self, strs):
# write your code here
ret=set()
for i in range(0,len(strs)):
for j in range(i+1,len(strs)):
if i in ret and j in ret:
continue
if Solution.isanagram(strs[i],strs[j]):
ret.add(i)
ret.add(j)
return [strs[i] for i in list(ret)]
@staticmethod
def isanagram(s, t):
if len(s)!=len(t):
return False
chars={}
for i in s:
if i in chars:
chars[i]+=1
else:
chars[i]=1
for i in t:
if i not in chars:
return False
else:
chars[i]-=1
if chars[i]<0:
return False
for i in chars:
if chars[i]!=0:
return False
return True
</code></pre>
<p><strong>Update:</strong> Just to add, not looking for built-in pythonic solutions such as using <code>Counter</code> which are already optimized. Have added Mike's suggestions, but still exceeding time-limit.</p>
| 4
|
2016-09-08T19:05:43Z
| 39,398,637
|
<p>Skip strings you already placed in the set. Don't test them again.</p>
<pre><code># @param strs: A list of strings
# @return: A list of strings
def anagrams(self, strs):
# write your code here
ret=set()
for i in range(0,len(strs)):
for j in range(i+1,len(strs)):
# If both anagrams exist in set, there is no need to compare them.
if i in ret and j in ret:
continue
if Solution.isanagram(strs[i],strs[j]):
ret.add(i)
ret.add(j)
return [strs[i] for i in list(ret)]
</code></pre>
<p>You can also do a length comparison in your anagram test before iterating through the letters. Whenever the strings aren't the same length, they can't be anagrams anyway. Also, when a counter in <code>chars</code> reaches -1 when comparing values in t, just return false. Don't iterate through <code>chars</code> again.</p>
<pre><code>@staticmethod
def isanagram(s, t):
# Test strings are the same length
if len(s) != len(t):
return False
chars={}
for i in s:
if i in chars:
chars[i]+=1
else:
chars[i]=1
for i in t:
if i not in chars:
return False
else:
chars[i]-=1
# If this is below 0, return false
if chars[i] < 0:
return False
for i in chars:
if chars[i]!=0:
return False
return True
</code></pre>
| 3
|
2016-09-08T19:17:46Z
|
[
"python",
"string",
"anagram"
] |
Find group of strings that are anagrams
| 39,398,444
|
<p>This question refers to <a href="http://www.lintcode.com/en/problem/anagrams/" rel="nofollow">this problem on lintcode</a>. I have a working solution, but it takes too long for the huge testcase. I am wondering how can it be improved? Maybe I can decrease the number of comparisons I make in the outer loop.</p>
<pre><code>class Solution:
# @param strs: A list of strings
# @return: A list of strings
def anagrams(self, strs):
# write your code here
ret=set()
for i in range(0,len(strs)):
for j in range(i+1,len(strs)):
if i in ret and j in ret:
continue
if Solution.isanagram(strs[i],strs[j]):
ret.add(i)
ret.add(j)
return [strs[i] for i in list(ret)]
@staticmethod
def isanagram(s, t):
if len(s)!=len(t):
return False
chars={}
for i in s:
if i in chars:
chars[i]+=1
else:
chars[i]=1
for i in t:
if i not in chars:
return False
else:
chars[i]-=1
if chars[i]<0:
return False
for i in chars:
if chars[i]!=0:
return False
return True
</code></pre>
<p><strong>Update:</strong> Just to add, not looking for built-in pythonic solutions such as using <code>Counter</code> which are already optimized. Have added Mike's suggestions, but still exceeding time-limit.</p>
| 4
|
2016-09-08T19:05:43Z
| 39,398,767
|
<p>As an addition to @Mike's great answer, here is a nice Pythonic way to do it:</p>
<pre><code>import collections
class Solution:
# @param strs: A list of strings
# @return: A list of strings
def anagrams(self, strs):
patterns = Solution.find_anagram_words(strs)
return [word for word in strs if ''.join(sorted(word)) in patterns]
@staticmethod
def find_anagram_words(strs):
anagrams = collections.Counter(''.join(sorted(word)) for word in strs)
return {word for word, times in anagrams.items() if times > 1}
</code></pre>
| 1
|
2016-09-08T19:27:36Z
|
[
"python",
"string",
"anagram"
] |
Find group of strings that are anagrams
| 39,398,444
|
<p>This question refers to <a href="http://www.lintcode.com/en/problem/anagrams/" rel="nofollow">this problem on lintcode</a>. I have a working solution, but it takes too long for the huge testcase. I am wondering how can it be improved? Maybe I can decrease the number of comparisons I make in the outer loop.</p>
<pre><code>class Solution:
# @param strs: A list of strings
# @return: A list of strings
def anagrams(self, strs):
# write your code here
ret=set()
for i in range(0,len(strs)):
for j in range(i+1,len(strs)):
if i in ret and j in ret:
continue
if Solution.isanagram(strs[i],strs[j]):
ret.add(i)
ret.add(j)
return [strs[i] for i in list(ret)]
@staticmethod
def isanagram(s, t):
if len(s)!=len(t):
return False
chars={}
for i in s:
if i in chars:
chars[i]+=1
else:
chars[i]=1
for i in t:
if i not in chars:
return False
else:
chars[i]-=1
if chars[i]<0:
return False
for i in chars:
if chars[i]!=0:
return False
return True
</code></pre>
<p><strong>Update:</strong> Just to add, not looking for built-in pythonic solutions such as using <code>Counter</code> which are already optimized. Have added Mike's suggestions, but still exceeding time-limit.</p>
| 4
|
2016-09-08T19:05:43Z
| 39,398,876
|
<p>Instead of comparing all pairs of strings, you can just create a dictionary (or <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow"><code>collections.defaultdict</code></a>) mapping each of the letter-counts to the words having those counts. For getting the letter-counts, you can use <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow"><code>collections.Counter</code></a>. Afterwards, you just have to get the values from that dict. If you want all words that are anagrams of any other words, just merge the lists that have more than one entry.</p>
<pre><code>strings = ["cat", "act", "rat", "hut", "tar", "tact"]
anagrams = defaultdict(list)
for s in strings:
anagrams[frozenset(Counter(s).items())].append(s)
print([v for v in anagrams.values()])
# [['hut'], ['rat', 'tar'], ['cat', 'act'], ['tact']]
print([x for v in anagrams.values() if len(v) > 1 for x in v])
# ['cat', 'act', 'rat', 'tar']
</code></pre>
<p>Of course, if you prefer not to use builtin functionality, you can, with just a few more lines, use a regular <code>dict</code> instead of <code>defaultdict</code> and write your own <code>Counter</code>, similar to what you have in your <code>isanagram</code> method, just without the comparison part.</p>
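<p>For example, a small sketch of that hand-rolled variant using only a plain <code>dict</code> (the helper name here is made up for illustration):</p>
<pre><code>strings = ["cat", "act", "rat", "hut", "tar", "tact"]

def letter_counts(word):
    counts = {}
    for ch in word:
        counts[ch] = counts.get(ch, 0) + 1
    # a dict is not hashable, so freeze the counts into a sorted tuple
    # that can be used as a dictionary key
    return tuple(sorted(counts.items()))

anagrams = {}
for s in strings:
    anagrams.setdefault(letter_counts(s), []).append(s)

print([w for group in anagrams.values() if len(group) > 1 for w in group])
# ['cat', 'act', 'rat', 'tar']
</code></pre>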
| 2
|
2016-09-08T19:34:32Z
|
[
"python",
"string",
"anagram"
] |
Find group of strings that are anagrams
| 39,398,444
|
<p>This question refers to <a href="http://www.lintcode.com/en/problem/anagrams/" rel="nofollow">this problem on lintcode</a>. I have a working solution, but it takes too long for the huge testcase. I am wondering how can it be improved? Maybe I can decrease the number of comparisons I make in the outer loop.</p>
<pre><code>class Solution:
# @param strs: A list of strings
# @return: A list of strings
def anagrams(self, strs):
# write your code here
ret=set()
for i in range(0,len(strs)):
for j in range(i+1,len(strs)):
if i in ret and j in ret:
continue
if Solution.isanagram(strs[i],strs[j]):
ret.add(i)
ret.add(j)
return [strs[i] for i in list(ret)]
@staticmethod
def isanagram(s, t):
if len(s)!=len(t):
return False
chars={}
for i in s:
if i in chars:
chars[i]+=1
else:
chars[i]=1
for i in t:
if i not in chars:
return False
else:
chars[i]-=1
if chars[i]<0:
return False
for i in chars:
if chars[i]!=0:
return False
return True
</code></pre>
<p><strong>Update:</strong> Just to add, not looking for built-in pythonic solutions such as using <code>Counter</code> which are already optimized. Have added Mike's suggestions, but still exceeding time-limit.</p>
| 4
|
2016-09-08T19:05:43Z
| 39,399,047
|
<p>Your solution is slow because you're not taking advantage of python's data structures. </p>
<p>Here's a solution that collects results in a dict:</p>
<pre><code>class Solution:
def anagrams(self, strs):
d = {}
for word in strs:
key = tuple(sorted(word))
try:
d[key].append(word)
except KeyError:
d[key] = [word]
return [w for ws in d.values() for w in ws if len(ws) > 1]
</code></pre>
| 1
|
2016-09-08T19:45:28Z
|
[
"python",
"string",
"anagram"
] |
Grouping of items based on criteria
| 39,398,539
|
<p>I have a list of items:</p>
<pre><code>ShelvesToPack = [{'ShelfLength': 2278.0, 'ShelfWidth': 356.0, 'ShelfArea': 759152.0, 'ItemNames': 1},
{'ShelfLength': 1220.0, 'ShelfWidth': 610.0, 'ShelfArea': 372100.0, 'ItemNames': 2},
{'ShelfLength': 2310.0, 'ShelfWidth': 762.0, 'ShelfArea': 1760220.0, 'ItemNames': 3},
{'ShelfLength': 610.0, 'ShelfWidth': 610.0, 'ShelfArea': 1450435.0, 'ItemNames': 4}]
</code></pre>
<p>I need the program that tells how many minimum number of groups one can have and how the items are grouped.</p>
<p>I would like to form groups of these items such that sum of shelflength of items <= max length or sum of shelfwidth of items <= max width and sum of ShelfArea of items <= Max Area. In this case, if we look at the logic we can have all the items packed in minimum 2 groups - item 1 and 3 will form one group and item 2 & 4 will form other group. I would like to have the answer in the format:</p>
<pre><code>[[{'ShelfLength': 2278.0, 'ShelfWidth': 356.0, 'ShelfArea': 759152.0, 'ItemNames': 1} ,
{'ShelfLength': 2310.0, 'ShelfWidth': 762.0, 'ShelfArea': 1760220.0, 'ItemNames': 3}],
[{'ShelfLength': 1220.0, 'ShelfWidth': 610.0, 'ShelfArea': 372100.0, 'ItemNames': 2},
, {'ShelfLength': 610.0, 'ShelfWidth': 610.0, 'ShelfArea': 1450435.0, 'ItemNames': 4}]]
</code></pre>
<p>I have written a code but it does not give the result I wanted.</p>
<pre><code>ShelvesToPack_sorted = sorted(ShelvesToPack, key = itemgetter('ShelfWidth'), reverse = True)
AreaOfObject = 2972897.28
current_width = 0
current_length = 0
current_area = 0
ply =[]
plywoods=[]
for item in ShelvesToPack_sorted:
if (current_width + item['ShelfWidth'] <= 1219.2 or current_length + item['ShelfLength'] <= 2438.5) and current_area + item['ShelfArea'] <= AreaOfObject:
ply.append(item)
current_width += item['ShelfWidth']
current_length += item['ShelfLength']
current_area += item['ShelfArea']
else:
plywoods.append(ply)
if (item['ShelfWidth'] <= 1219.2 or item['ShelfLength'] <= 2438.5) and item['ShelfArea'] <= AreaOfObject:
ply = [item]
current_width = item['ShelfWidth']
current_length = item['ShelfLength']
current_area = item['ShelfArea']
else:
ply = []
current_width = 0
current_length = 0
current_area = 0
if ply:
plywoods.append(ply)
print(plywoods)
</code></pre>
<p>I have got the following output which is not quite right and I am unable to do the correct grouping.</p>
<pre><code>[[{'ItemNames': 3, 'ShelfWidth': 762.0, 'ShelfLength': 310.0, 'ShelfArea': 1760220.0}],
[{'ItemNames': 2, 'ShelfWidth': 610.0, 'ShelfLength': 1220.0, 'ShelfArea': 372100.0},
{'ItemNames': 4, 'ShelfWidth': 610.0, 'ShelfLength': 610.0, 'ShelfArea': 1450435.0}],
[{'ItemNames': 1, 'ShelfWidth': 356.0, 'ShelfLength': 2278.0, 'ShelfArea': 759152.0}]]
</code></pre>
<p>Can anyone please suggest?</p>
| 0
|
2016-09-08T19:11:43Z
| 39,398,866
|
<p>Here is a simplified version of your code:</p>
<pre><code>data = [{'ShelfLength': 2278.0, 'ShelfWidth': 356.0, 'ShelfArea': 759152.0, 'ItemNames': 1},
{'ShelfLength': 1220.0, 'ShelfWidth': 610.0, 'ShelfArea': 372100.0, 'ItemNames': 2},
{'ShelfLength': 2310.0, 'ShelfWidth': 762.0, 'ShelfArea': 1760220.0, 'ItemNames': 3},
{'ShelfLength': 610.0, 'ShelfWidth': 610.0, 'ShelfArea': 1450435.0, 'ItemNames': 4}]
SL = 'ShelfLength'
SW = 'ShelfWidth'
SA = 'ShelfArea'
IN = 'ItemNames'
max_width = 1219.2
max_len = 2438.5
max_area = 2972897.28
grouped_data = [[], []]
for record in data:
if (record[SL] <= max_len or record[SW] <= max_width) and record[SA] <= max_area:
grouped_data[0].append(record)
else:
grouped_data[1].append(record)
print(grouped_data)
</code></pre>
<p>This gives the same result you got, which is correct given that all elements satisfy the condition you mentioned.</p>
| 0
|
2016-09-08T19:33:51Z
|
[
"python",
"python-3.x"
] |
Grouping of items based on criteria
| 39,398,539
|
<p>I have a list of items:</p>
<pre><code>ShelvesToPack = [{'ShelfLength': 2278.0, 'ShelfWidth': 356.0, 'ShelfArea': 759152.0, 'ItemNames': 1},
{'ShelfLength': 1220.0, 'ShelfWidth': 610.0, 'ShelfArea': 372100.0, 'ItemNames': 2},
{'ShelfLength': 2310.0, 'ShelfWidth': 762.0, 'ShelfArea': 1760220.0, 'ItemNames': 3},
{'ShelfLength': 610.0, 'ShelfWidth': 610.0, 'ShelfArea': 1450435.0, 'ItemNames': 4}]
</code></pre>
<p>I need the program that tells how many minimum number of groups one can have and how the items are grouped.</p>
<p>I would like to form groups of these items such that sum of shelflength of items <= max length or sum of shelfwidth of items <= max width and sum of ShelfArea of items <= Max Area. In this case, if we look at the logic we can have all the items packed in minimum 2 groups - item 1 and 3 will form one group and item 2 & 4 will form other group. I would like to have the answer in the format:</p>
<pre><code>[[{'ShelfLength': 2278.0, 'ShelfWidth': 356.0, 'ShelfArea': 759152.0, 'ItemNames': 1} ,
{'ShelfLength': 2310.0, 'ShelfWidth': 762.0, 'ShelfArea': 1760220.0, 'ItemNames': 3}],
[{'ShelfLength': 1220.0, 'ShelfWidth': 610.0, 'ShelfArea': 372100.0, 'ItemNames': 2},
, {'ShelfLength': 610.0, 'ShelfWidth': 610.0, 'ShelfArea': 1450435.0, 'ItemNames': 4}]]
</code></pre>
<p>I have written a code but it does not give the result I wanted.</p>
<pre><code>ShelvesToPack_sorted = sorted(ShelvesToPack, key = itemgetter('ShelfWidth'), reverse = True)
AreaOfObject = 2972897.28
current_width = 0
current_length = 0
current_area = 0
ply =[]
plywoods=[]
for item in ShelvesToPack_sorted:
if (current_width + item['ShelfWidth'] <= 1219.2 or current_length + item['ShelfLength'] <= 2438.5) and current_area + item['ShelfArea'] <= AreaOfObject:
ply.append(item)
current_width += item['ShelfWidth']
current_length += item['ShelfLength']
current_area += item['ShelfArea']
else:
plywoods.append(ply)
if (item['ShelfWidth'] <= 1219.2 or item['ShelfLength'] <= 2438.5) and item['ShelfArea'] <= AreaOfObject:
ply = [item]
current_width = item['ShelfWidth']
current_length = item['ShelfLength']
current_area = item['ShelfArea']
else:
ply = []
current_width = 0
current_length = 0
current_area = 0
if ply:
plywoods.append(ply)
print(plywoods)
</code></pre>
<p>I have got the following output which is not quite right and I am unable to do the correct grouping.</p>
<pre><code>[[{'ItemNames': 3, 'ShelfWidth': 762.0, 'ShelfLength': 310.0, 'ShelfArea': 1760220.0}],
[{'ItemNames': 2, 'ShelfWidth': 610.0, 'ShelfLength': 1220.0, 'ShelfArea': 372100.0},
{'ItemNames': 4, 'ShelfWidth': 610.0, 'ShelfLength': 610.0, 'ShelfArea': 1450435.0}],
[{'ItemNames': 1, 'ShelfWidth': 356.0, 'ShelfLength': 2278.0, 'ShelfArea': 759152.0}]]
</code></pre>
<p>Can anyone please suggest?</p>
| 0
|
2016-09-08T19:11:43Z
| 39,401,609
|
<p>Here's something that appears to work correctly. Since the order of shelves in a combination doesn't matter, it simply uses a brute-force approach that checks every possible combination of the shelves. Because there may be a very large number of them to process, it's important to write code which is fairly efficient. There are probably better algorithms that would accomplish this faster, since this is essentially a searching or path-finding problem. </p>
<p>With that in mind and to make accessing the fields in each shelf dictionary easier, it first converts them all into an equivalent list of <code>namedtuple</code> instances which are used by the code from that point on. (If you really need them in dictionary form, it would be simple to keep a copy of it around or recreate it when necessary.)</p>
<p>Once the conversion is done, it then checks all combinations of shelf items in the range from 1 to all of them. It stores the combinations it finds in a list named <code>groups</code>. <code>group[N]</code> will contain a sublist of the combinations of <code>N</code> items that met the criteria. (And the length of that sublist is, of course, the number of combinations of that number of shelves that were found.)</p>
<p>So, for example, in the output far below, it shows that there were 4 groups of 2 items that meet the criteria. (It doesn't print each combination of them, however.)</p>
<pre><code>from collections import namedtuple
from itertools import combinations
from operator import itemgetter
shelves_to_pack = [
{'ShelfLength': 2278.0, 'ShelfWidth': 356.0, 'ShelfArea': 759152.0, 'ItemNames': 1},
{'ShelfLength': 1220.0, 'ShelfWidth': 610.0, 'ShelfArea': 372100.0, 'ItemNames': 2},
{'ShelfLength': 2310.0, 'ShelfWidth': 762.0, 'ShelfArea': 1760220.0, 'ItemNames': 3},
{'ShelfLength': 610.0, 'ShelfWidth': 610.0, 'ShelfArea': 1450435.0, 'ItemNames': 4}]
# convert dict list into a namedtuple list to simplify field access
Shelf = namedtuple('Shelf',
'length width area item_names')
shelves = [Shelf(length=shelf['ShelfLength'], width=shelf['ShelfWidth'],
area=shelf['ShelfArea'], item_names=shelf['ItemNames'])
for shelf in shelves_to_pack]
MAX_WIDTH, MAX_LENGTH = 1219.2, 2438.5
MAX_AREA = 2972897.28
def meets_criteria(shelf_combo):
""" Determine if shelf combination meets criteria. """
return (sum(shelf.area for shelf in shelf_combo) <= MAX_AREA
and (sum(shelf.length for shelf in shelf_combo) <= MAX_LENGTH
or sum(shelf.width for shelf in shelf_combo) <= MAX_WIDTH))
groups = [[]] # first group representing zero shelf items is always empty
for num_shelves in range(1, len(shelves)+1):
groups.append([combo for combo in combinations(shelves, num_shelves)
if meets_criteria(combo)])
for i, group in enumerate(groups):
print('groups of {} items: size {}'.format(i, len(group)))
</code></pre>
<p>This is the output generated from the code and input data above:</p>
<pre class="lang-none prettyprint-override"><code>groups of 0 items: size 0
groups of 1 items: size 4
groups of 2 items: size 4
groups of 3 items: size 0
groups of 4 items: size 0
</code></pre>
| 0
|
2016-09-08T23:28:01Z
|
[
"python",
"python-3.x"
] |
Python: Get javascript file from href tag of html
| 39,398,592
|
<p>Consider a website similar to this one:</p>
<p><a href="http://a810-bisweb.nyc.gov/bisweb/COsByLocationServlet?requestid=1&allbin=3055311" rel="nofollow">http://a810-bisweb.nyc.gov/bisweb/COsByLocationServlet?requestid=1&allbin=3055311</a></p>
<p>As one can see, the website contains links to pdf files referenced by an href tag in the page source, e.g.:</p>
<pre><code><a href="javascript:$('form_cofo_pdf_view_B000114563.PDF').submit();">B000114563.PDF</a>
</code></pre>
<p>I would like to open the underlying file using python, effectively scraping the results.</p>
<pre><code>req = urllib2.Request("link.com")
page = urllib2.urlopen(req)
soup = BeautifulSoup(page)
links = []
for link in soup.findAll('a'):
links.append(link.get("href"))
</code></pre>
<p>Normally I would just connect the base url with the href url to get the documents, but here, they are referenced with javascript. Hence I am not entirely sure how to access the files.</p>
<p>I would prefer to use urrlib2 and BeautifulSoup and not switch to Selenium to click on links. Does anyone have an idea to accomplish that? It would be greatly appreciated.</p>
| 1
|
2016-09-08T19:14:59Z
| 39,399,333
|
<p>I downloaded a few files and compared each direct link with its filename; all the elements required to build the link are contained in the filename.</p>
<p>Filename:</p>
<pre><code>form_cofo_pdf_view_B000114563.PDF
</code></pre>
<p>Direct link:</p>
<pre><code>http://a810-bisweb.nyc.gov/bisweb/CofoDocumentContentServlet
?passjobnumber=null
&cofomatadata1=cofo
&cofomatadata2=B
&cofomatadata3=000
&cofomatadata4=114000
&cofomatadata5=B000114563.PDF
</code></pre>
<p>So you can create the direct link once you extract the filename from the string <code>javascript:$('form_cofo_pdf_view_B000114563.PDF').submit();</code>, as sketched below.</p>
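<p>A rough sketch of how that direct link could be built from the filename (the URL layout is inferred from the single example above, so treat the field slicing as an assumption):</p>
<pre><code>def direct_pdf_url(js_href):
    # js_href looks like: javascript:$('form_cofo_pdf_view_B000114563.PDF').submit();
    filename = js_href.split("'")[1].replace('form_cofo_pdf_view_', '')   # B000114563.PDF
    borough = filename[0]           # 'B'   -> cofomatadata2
    part3 = filename[1:4]           # '000' -> cofomatadata3
    part4 = filename[4:7] + '000'   # '114' -> '114000' -> cofomatadata4
    return ('http://a810-bisweb.nyc.gov/bisweb/CofoDocumentContentServlet'
            '?passjobnumber=null'
            '&cofomatadata1=cofo'
            '&cofomatadata2=' + borough +
            '&cofomatadata3=' + part3 +
            '&cofomatadata4=' + part4 +
            '&cofomatadata5=' + filename)

print(direct_pdf_url("javascript:$('form_cofo_pdf_view_B000114563.PDF').submit();"))
</code></pre>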
<p>Working code: <a href="http://pastebin.com/kt72GSyYa" rel="nofollow">http://pastebin.com/kt72GSyYa</a></p>
| 0
|
2016-09-08T20:05:08Z
|
[
"javascript",
"python",
"html",
"web",
"web-scraping"
] |
In pycharm can I run every file for django?
| 39,398,625
|
<p>I'm new to Django. My localhost site is running fine. Since I am using pycharm it is easy to run any file. I decided to run each file in my django project, and came across several errors, such as this one in my views.py:</p>
<pre><code>django.core.exceptions.ImproperlyConfigured: Requested setting DEFAULT_INDEX_TABLESPACE, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
</code></pre>
<p>Even though the site is running, what seems like properly, I'm seeing this message. What is causing this, and is it typical?</p>
| 1
|
2016-09-08T19:17:06Z
| 39,399,060
|
<p>You cannot run each file in your Django project individually.</p>
<p>Even though they are files with a <code>.py</code> extension, they depend on the Django framework to get the project running.</p>
<p>The reason you are seeing that error is that the file uses attributes from the <code>settings.py</code> file, which in turn requires Django to set the application up (starting the <code>WSGI</code> server, getting all the dependencies and the <code>installed apps</code> ready) before you actually use anything. </p>
<p>Understand that Django is a framework and it relies on many underlying components to actually work. Even though you can technically run any file in any manner, you cannot start the application itself. </p>
<p>There are other ways to test the application, such as using the <code>django shell</code> via <code>python manage.py shell</code>; that is a better way of testing things individually than running each file standalone. </p>
| 0
|
2016-09-08T19:46:16Z
|
[
"python",
"django",
"pycharm"
] |
In pycharm can I run every file for django?
| 39,398,625
|
<p>I'm new to Django. My localhost site is running fine. Since I am using pycharm it is easy to run any file. I decided to run each file in my django project, and came across several errors, such as this one in my views.py:</p>
<pre><code>django.core.exceptions.ImproperlyConfigured: Requested setting DEFAULT_INDEX_TABLESPACE, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
</code></pre>
<p>Even though the site is running, what seems like properly, I'm seeing this message. What is causing this, and is it typical?</p>
| 1
|
2016-09-08T19:17:06Z
| 39,604,322
|
<p><strong>You can run</strong> any individual <strong>Python file</strong> in a Django project, but keep in mind that the Django settings must be supplied. Running individual files this way is not good practice, but you may use it for debugging purposes (<em>for example, to test a parser that you wrote with database access</em>).</p>
<p>You have to configure the Django settings before you can do anything with Django on a single file.</p>
<pre><code>from django.conf import settings
settings.configure()
def do_something_with_a_model():
# does something with a model
print "I am done!!"
</code></pre>
<p>Note that relative imports may break when running on single files.</p>
<p>(<em>for example, imports like <code>from .another_module import AnotherClass</code> may break when working with a single file.</em>)</p>
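<p>An alternative sketch is to point the file at your real settings module instead of calling <code>settings.configure()</code> (the project, app and model names below are placeholders):</p>
<pre><code>import os
import django

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')  # hypothetical project name
django.setup()  # required on Django >= 1.7 before importing or using models

from myapp.models import SomeModel  # hypothetical app and model
print(SomeModel.objects.count())
</code></pre>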
| 0
|
2016-09-20T21:54:42Z
|
[
"python",
"django",
"pycharm"
] |
Pandas groupby datatime index, possible bug
| 39,398,821
|
<p>I have a Pandas DataFrame with a column that is a tz-aware TimeStamp and I tried to groupby(level=0).first(). I get an incorrect result. Am I missing something or is it a pandas bug?</p>
<pre><code>x = pd.DataFrame(index = [1,1,2,2,2], data = pd.date_range("7:00", "9:00", freq="30min", tz = 'US/Eastern'))
In [58]: x
Out[58]:
0
1 2016-09-08 07:00:00-04:00
1 2016-09-08 07:30:00-04:00
2 2016-09-08 08:00:00-04:00
2 2016-09-08 08:30:00-04:00
2 2016-09-08 09:00:00-04:00
In [59]: x.groupby(level=0).first()
Out[59]:
0
1 2016-09-08 11:00:00-04:00
2 2016-09-08 12:00:00-04:00
</code></pre>
| 3
|
2016-09-08T19:31:01Z
| 39,408,751
|
<p>I don't believe that it is a bug. If you go through the <a href="http://pytz.sourceforge.net/" rel="nofollow"><code>pytz</code></a> docs, it is clearly indicated that for timezone US/Eastern, there is no way to specify before / after the end-of-daylight-saving-time transition. </p>
<p>In such cases, sticking with UTC seems to be the best option.</p>
<p>Excerpt from the <a href="http://pandas.pydata.org/pandas-docs/stable/io.html#caveats" rel="nofollow"><code>docs</code></a>:</p>
<blockquote>
<pre><code> Be aware that timezones (e.g., pytz.timezone('US/Eastern')) are not
necessarily equal across timezone versions. So if data is localized to
a specific timezone in the HDFStore using one version of a timezone
library and that data is updated with another version, the data will
be converted to UTC since these timezones are not considered equal.
Either use the same version of timezone library or use tz_convert with
the updated timezone definition.
</code></pre>
</blockquote>
<p>The conversion can be done as follows:</p>
<p><strong>A:</strong> using <code>tz_localize</code> method to localize naive/time-aware datetime to UTC</p>
<pre><code>data = pd.date_range("7:00", "9:00", freq="30min").tz_localize('UTC')
</code></pre>
<p><strong>B:</strong> using <code>tz_convert</code> method to convert pandas objects to convert
tz aware data to another time zone.</p>
<pre><code>df = pd.DataFrame(index=[1,1,2,2,2], data=data.tz_convert('US/Eastern'))
df.groupby(level=0).first()
</code></pre>
<p>which results in:</p>
<pre><code> 0
1 2016-09-09 07:00:00-04:00
2 2016-09-09 08:00:00-04:00
#0 datetime64[ns, US/Eastern]
#dtype: object
</code></pre>
| 1
|
2016-09-09T09:51:54Z
|
[
"python",
"pandas",
"timestamp"
] |
Pandas groupby datatime index, possible bug
| 39,398,821
|
<p>I have a Pandas DataFrame with a column that is a tz-aware TimeStamp and I tried to groupby(level=0).first(). I get an incorrect result. Am I missing something or is it a pandas bug?</p>
<pre><code>x = pd.DataFrame(index = [1,1,2,2,2], data = pd.date_range("7:00", "9:00", freq="30min", tz = 'US/Eastern'))
In [58]: x
Out[58]:
0
1 2016-09-08 07:00:00-04:00
1 2016-09-08 07:30:00-04:00
2 2016-09-08 08:00:00-04:00
2 2016-09-08 08:30:00-04:00
2 2016-09-08 09:00:00-04:00
In [59]: x.groupby(level=0).first()
Out[59]:
0
1 2016-09-08 11:00:00-04:00
2 2016-09-08 12:00:00-04:00
</code></pre>
| 3
|
2016-09-08T19:31:01Z
| 39,411,972
|
<p>This is actually a pandas bug reported here:</p>
<p><a href="https://github.com/pydata/pandas/issues/10668" rel="nofollow">https://github.com/pydata/pandas/issues/10668</a></p>
| 0
|
2016-09-09T12:47:21Z
|
[
"python",
"pandas",
"timestamp"
] |
Permute list of lists with mixed elements (np.random.permutation() fails with ValueError)
| 39,398,877
|
<p>I'm trying to permute a list composed of sublists with mixed-type elements:</p>
<pre><code>import numpy as np
a0 = ['122', 877.503017, 955.471176, [21.701201, 1.315585]]
a1 = ['176', 1134.076908, 1125.504758, [19.436181, 0.9987899]]
a2 = ['177', 1038.686843, 1018.987868, [19.539959, 1.183997]]
a3 = ['178', 878.999081, 1022.050447, [19.6448771, 1.1867719]]
a = [a0, a1, a2, a3]
b = np.random.permutation(a)
</code></pre>
<p>This will fail with:</p>
<pre><code>ValueError: cannot set an array element with a sequence
</code></pre>
<p>Is there a built in function that will allow me to generate such permutation?</p>
<p>I need to generate a single random permutation, I'm not trying to obtain all the possible permutations.</p>
<hr>
<p>I checked the three answers given with:</p>
<pre><code>import time
import random
# np.random.permutation()
start = time.time()
for _ in np.arange(100000):
b = np.random.permutation([np.array(i, dtype='object') for i in a])
print(time.time() - start)
# np.random.shuffle()
start = time.time()
for _ in np.arange(100000):
b = a[:]
np.random.shuffle(b)
print(time.time() - start)
# random.shuffle()
start = time.time()
for _ in np.arange(100000):
random.shuffle(a)
print(time.time() - start)
</code></pre>
<p>The results are:</p>
<pre><code>1.47580695152
0.11471414566
0.26300907135
</code></pre>
<p>so the <code>np.random.shuffle()</code> solution is about 10x faster than <code>np.random.permutation()</code> and 2x faster than <code>random.shuffle()</code>.</p>
| 2
|
2016-09-08T19:34:35Z
| 39,398,973
|
<p>You need to convert your lists to numpy arrays with dtype <code>object</code>, so that <code>random.permutation()</code> can interpret them as numpy objects rather than plain sequences:</p>
<pre><code>>>> a = [np.array(i, dtype='object') for i in a]
>>>
>>> np.random.permutation(a)
array([['122', 877.503017, 955.471176, [21.701201, 1.315585]],
['177', 1038.686843, 1018.987868, [19.539959, 1.183997]],
['178', 878.999081, 1022.050447, [19.6448771, 1.1867719]],
['176', 1134.076908, 1125.504758, [19.436181, 0.9987899]]], dtype=object)
</code></pre>
<p>You can also create a single array from your lists using <code>numpy.array()</code> instead of a list comprehension:</p>
<pre><code>>>> a = np.array((a0, a1, a2, a3), dtype='object')
>>> a
array([['122', 877.503017, 955.471176, [21.701201, 1.315585]],
['176', 1134.076908, 1125.504758, [19.436181, 0.9987899]],
['177', 1038.686843, 1018.987868, [19.539959, 1.183997]],
['178', 878.999081, 1022.050447, [19.6448771, 1.1867719]]], dtype=object)
>>> np.random.permutation(a)
array([['122', 877.503017, 955.471176, [21.701201, 1.315585]],
['177', 1038.686843, 1018.987868, [19.539959, 1.183997]],
['176', 1134.076908, 1125.504758, [19.436181, 0.9987899]],
['178', 878.999081, 1022.050447, [19.6448771, 1.1867719]]], dtype=object)
>>> np.random.permutation(a)
array([['177', 1038.686843, 1018.987868, [19.539959, 1.183997]],
['176', 1134.076908, 1125.504758, [19.436181, 0.9987899]],
['178', 878.999081, 1022.050447, [19.6448771, 1.1867719]],
['122', 877.503017, 955.471176, [21.701201, 1.315585]]], dtype=object)
</code></pre>
| 1
|
2016-09-08T19:40:22Z
|
[
"python",
"numpy",
"permutation"
] |
Permute list of lists with mixed elements (np.random.permutation() fails with ValueError)
| 39,398,877
|
<p>I'm trying to permute a list composed of sublists with mixed-type elements:</p>
<pre><code>import numpy as np
a0 = ['122', 877.503017, 955.471176, [21.701201, 1.315585]]
a1 = ['176', 1134.076908, 1125.504758, [19.436181, 0.9987899]]
a2 = ['177', 1038.686843, 1018.987868, [19.539959, 1.183997]]
a3 = ['178', 878.999081, 1022.050447, [19.6448771, 1.1867719]]
a = [a0, a1, a2, a3]
b = np.random.permutation(a)
</code></pre>
<p>This will fail with:</p>
<pre><code>ValueError: cannot set an array element with a sequence
</code></pre>
<p>Is there a built in function that will allow me to generate such permutation?</p>
<p>I need to generate a single random permutation, I'm not trying to obtain all the possible permutations.</p>
<hr>
<p>I checked the three answers given with:</p>
<pre><code>import time
import random
# np.random.permutation()
start = time.time()
for _ in np.arange(100000):
b = np.random.permutation([np.array(i, dtype='object') for i in a])
print(time.time() - start)
# np.random.shuffle()
start = time.time()
for _ in np.arange(100000):
b = a[:]
np.random.shuffle(b)
print(time.time() - start)
# random.shuffle()
start = time.time()
for _ in np.arange(100000):
random.shuffle(a)
print(time.time() - start)
</code></pre>
<p>The results are:</p>
<pre><code>1.47580695152
0.11471414566
0.26300907135
</code></pre>
<p>so the <code>np.random.shuffle()</code> solution is about 10x faster than <code>np.random.permutation()</code> and 2x faster than <code>random.shuffle()</code>.</p>
| 2
|
2016-09-08T19:34:35Z
| 39,399,013
|
<p>What about using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.shuffle.html" rel="nofollow">np.random.shuffle</a>?</p>
<pre><code># if you want the result in another list, otherwise just apply shuffle to a
b = a[:]
# shuffle the elements
np.random.shuffle(b)
# see the result of the shuffling
print(b)
</code></pre>
<p>See <a href="http://stackoverflow.com/a/15474335/4063051">this answer</a> for the difference between <code>shuffle</code> and <code>permutation</code></p>
| 2
|
2016-09-08T19:43:02Z
|
[
"python",
"numpy",
"permutation"
] |
Permute list of lists with mixed elements (np.random.permutation() fails with ValueError)
| 39,398,877
|
<p>I'm trying to permute a list composed of sublists with mixed-type elements:</p>
<pre><code>import numpy as np
a0 = ['122', 877.503017, 955.471176, [21.701201, 1.315585]]
a1 = ['176', 1134.076908, 1125.504758, [19.436181, 0.9987899]]
a2 = ['177', 1038.686843, 1018.987868, [19.539959, 1.183997]]
a3 = ['178', 878.999081, 1022.050447, [19.6448771, 1.1867719]]
a = [a0, a1, a2, a3]
b = np.random.permutation(a)
</code></pre>
<p>This will fail with:</p>
<pre><code>ValueError: cannot set an array element with a sequence
</code></pre>
<p>Is there a built in function that will allow me to generate such permutation?</p>
<p>I need to generate a single random permutation, I'm not trying to obtain all the possible permutations.</p>
<hr>
<p>I checked the three answers given with:</p>
<pre><code>import time
import random
# np.random.permutation()
start = time.time()
for _ in np.arange(100000):
b = np.random.permutation([np.array(i, dtype='object') for i in a])
print(time.time() - start)
# np.random.shuffle()
start = time.time()
for _ in np.arange(100000):
b = a[:]
np.random.shuffle(b)
print(time.time() - start)
# random.shuffle()
start = time.time()
for _ in np.arange(100000):
random.shuffle(a)
print(time.time() - start)
</code></pre>
<p>The results are:</p>
<pre><code>1.47580695152
0.11471414566
0.26300907135
</code></pre>
<p>so the <code>np.random.shuffle()</code> solution is about 10x faster than <code>np.random.permutation()</code> and 2x faster than <code>random.shuffle()</code>.</p>
| 2
|
2016-09-08T19:34:35Z
| 39,399,107
|
<p>random.shuffle() changes the list in place.</p>
<p>Python API methods that alter a structure in-place generally return None.</p>
<p>Please try <code>random.sample(a,len(a))</code></p>
<p>The code would look like:</p>
<pre><code>import random

b = random.sample(a, len(a))  # returns a new shuffled list; a itself is not modified
</code></pre>
</code></pre>
| 0
|
2016-09-08T19:50:12Z
|
[
"python",
"numpy",
"permutation"
] |
Permute list of lists with mixed elements (np.random.permutation() fails with ValueError)
| 39,398,877
|
<p>I'm trying to permute a list composed of sublists with mixed-type elements:</p>
<pre><code>import numpy as np
a0 = ['122', 877.503017, 955.471176, [21.701201, 1.315585]]
a1 = ['176', 1134.076908, 1125.504758, [19.436181, 0.9987899]]
a2 = ['177', 1038.686843, 1018.987868, [19.539959, 1.183997]]
a3 = ['178', 878.999081, 1022.050447, [19.6448771, 1.1867719]]
a = [a0, a1, a2, a3]
b = np.random.permutation(a)
</code></pre>
<p>This will fail with:</p>
<pre><code>ValueError: cannot set an array element with a sequence
</code></pre>
<p>Is there a built in function that will allow me to generate such permutation?</p>
<p>I need to generate a single random permutation, I'm not trying to obtain all the possible permutations.</p>
<hr>
<p>I checked the three answers given with:</p>
<pre><code>import time
import random
# np.random.permutation()
start = time.time()
for _ in np.arange(100000):
b = np.random.permutation([np.array(i, dtype='object') for i in a])
print(time.time() - start)
# np.random.shuffle()
start = time.time()
for _ in np.arange(100000):
b = a[:]
np.random.shuffle(b)
print(time.time() - start)
# random.shuffle()
start = time.time()
for _ in np.arange(100000):
random.shuffle(a)
print(time.time() - start)
</code></pre>
<p>The results are:</p>
<pre><code>1.47580695152
0.11471414566
0.26300907135
</code></pre>
<p>so the <code>np.random.shuffle()</code> solution is about 10x faster than <code>np.random.permutation()</code> and 2x faster than <code>random.shuffle()</code>.</p>
| 2
|
2016-09-08T19:34:35Z
| 39,399,202
|
<p>If you just want to create a random permutation of <code>a = [a0, a1, a2, a3]</code>, might I suggest permuting the indices instead?</p>
<pre><code>>>> random_indices = np.random.permutation(np.arange(len(a)))
>>> a_perm = [a[i] for i in random_indices]
... # Or just use the indices as you see fit...
</code></pre>
<p>If you're using numpy <em>just</em> for this, skip numpy altogether instead and just use <a href="https://docs.python.org/3.5/library/random.html#random.shuffle" rel="nofollow"><code>random.shuffle</code></a> to effect the same:</p>
<pre><code>>>> import random
>>> random.shuffle(a)
</code></pre>
| 2
|
2016-09-08T19:56:21Z
|
[
"python",
"numpy",
"permutation"
] |
Difficulty accessing multi-dimensional array from JSON data
| 39,398,913
|
<p>Here is the JSON data in question:</p>
<pre><code>{
"result_index": 0,
"results": [
{
"alternatives": [
{
"confidence": 0.994,
"transcript": "thunderstorms could produce large hail isolated tornadoes and heavy rain "
}
],
"final": true
}
    ]
}
</code></pre>
<p>Here is how I am attempting to access it.</p>
<pre><code>parsed = json.loads(data)
print(parsed['results']['alternatives']['transcript'])
</code></pre>
<p>This results in the following error:</p>
<pre><code>TypeError: list indices must be integers or slices, not str
</code></pre>
<p>It seems as though results is just an array with a single entry that is a string, and I am a bit confused how to access the individual elements within it.</p>
| 0
|
2016-09-08T19:36:19Z
| 39,398,950
|
<p>Your <code>results</code> and <code>alternatives</code> are not objects but arrays of objects.</p>
<pre><code>print(parsed['results'][0]['alternatives'][0]['transcript'])
</code></pre>
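<p>If there may be several results or alternatives, a short sketch that walks all of them instead of hardcoding index 0:</p>
<pre><code>for result in parsed['results']:
    for alternative in result['alternatives']:
        print(alternative['confidence'], alternative['transcript'])
</code></pre>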
| 1
|
2016-09-08T19:38:38Z
|
[
"python",
"json"
] |
Error setting dtypes of an array
| 39,398,933
|
<p>I was attempting to make a 1x5 numpy array with the following code</p>
<pre><code>testArray = np.array([19010913, "Hershey", "Bar", "Birthday", 12.34])
</code></pre>
<p>but encountered the unwanted result that</p>
<pre><code>testArray.dtype
dtype("<U8")
</code></pre>
<p>I want each column to be a specific data type, so I attempted to input this</p>
<pre><code>testArray = np.array([19010913, "Hershey", "Bar", "Birthday", 12.34],
dtype=[('f0','<i8'),('f1','<U64'),('f2','<U64'),('f3','<U64'),('f4','<f10')] )
</code></pre>
<p>but got the error</p>
<pre><code>/usr/local/lib/python3.4/dist-packages/ipykernel/__main__.py:1:
DeprecationWarning: Specified size is invalid for this data type.
Size will be ignored in NumPy 1.7 but may throw an exception in
future versions. if __name__ == '__main__':
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-11-d2c44d88c8a5> in <module>()
----> 1 testArray = np.array([19840913, "Hershey", "Bar",
"Birthday", 64.25], dtype=[('f0','<i8'),('f1','<U64'),('f2','<U64'), ('f3','<U64'),('f4','<f10')] )
TypeError: 'int' does not support the buffer interface
</code></pre>
| 2
|
2016-09-08T19:37:36Z
| 39,399,470
|
<p>First off, I am not sure that <code>f10</code> is a recognized dtype size. </p>
<p>Note that structured arrays need to be defined as "list of tuples". Try the following:</p>
<pre><code>testArray = np.array([(19010913, "Hershey", "Bar", "Birthday", 12.34)],
dtype=[('f0','<i8'),('f1','<U64'),('f2','<U64'),('f3','<U64'),('f4','<f8')])
</code></pre>
<p>See also <a href="http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html" rel="nofollow">this</a> and <a href="http://docs.scipy.org/doc/numpy-1.10.1/user/basics.rec.html" rel="nofollow">this</a> for different ways of defining <code>np.dtype</code>s and structured arrays.</p>
<p>Edit:</p>
<p>For multiple rows in the same structure, define each row of your array as a separate tuple in the list.</p>
<pre><code>dt = np.dtype([('f0','<i8'),('f1','<U64'),('f2','<U64'),('f3','<Uâ64'),('f4','<f8')])
testArray = np.array([(19010913, "Hershey", "Bar", "Birthday", 12.34), (123, "a", "b", "c", 56.78)], dtype=dt)
</code></pre>
| 1
|
2016-09-08T20:15:31Z
|
[
"python",
"arrays",
"numpy"
] |
How to preprocess and load a "big data" tsv file into a python dataframe?
| 39,398,986
|
<p>I am currently trying to import the following large tab-delimited file into a dataframe-like structure within Python---naturally I am using <code>pandas</code> dataframe, though I am open to other options. </p>
<p>This file is several GB in size, and is not a standard <code>tsv</code> file---it is broken, i.e. the rows have a different number of columns. One row may have 25 columns, another has 21. </p>
<p>Here is an example of the data:</p>
<pre><code>Col_01: 14 .... Col_20: 25 Col_21: 23432 Col_22: 639142
Col_01: 8 .... Col_20: 25 Col_22: 25134 Col_23: 243344
Col_01: 17 .... Col_21: 75 Col_23: 79876 Col_25: 634534 Col_22: 5 Col_24: 73453
Col_01: 19 .... Col_20: 25 Col_21: 32425 Col_23: 989423
Col_01: 12 .... Col_20: 25 Col_21: 23424 Col_22: 342421 Col_23: 7 Col_24: 13424 Col_25: 67
Col_01: 3 .... Col_20: 95 Col_21: 32121 Col_25: 111231
</code></pre>
<p>As you can see, some of these columns are not in the correct order...</p>
<p>Now, I think the correct way to import this file into a dataframe is to preprocess the data such that you can output a dataframe with <code>NaN</code> values, e.g. </p>
<pre><code>Col_01 .... Col_20 Col_21 Col22 Col23 Col24 Col25
8 .... 25 NaN 25134 243344 NaN NaN
17 .... NaN 75 2 79876 73453 634534
19 .... 25 32425 NaN 989423 NaN NaN
12 .... 25 23424 342421 7 13424 67
3 .... 95 32121 NaN NaN NaN 111231
</code></pre>
<p>To make this even more complicated, this is a very large file, several GB in size. </p>
<p>Normally, I try to process the data in chunks, e.g. </p>
<pre><code>import pandas as pd
for chunk in pd.read_table(FILE_PATH, header=None, sep='\t', chunksize=10**6):
# place chunks into a dataframe or HDF
</code></pre>
<p>However, I see no way to "preprocess" the data first in chunks, and then use chunks to read the data into <code>pandas.read_table()</code>. How would you do this? What sort of preprocessing tools are available---perhaps <code>sed</code>? <code>awk</code>? </p>
<p>This is a challenging problem, due to the size of the data and the formatting that must be done before loading into a dataframe. Any help appreciated. </p>
| 2
|
2016-09-08T19:41:31Z
| 39,399,727
|
<pre><code>$ cat > pandas.awk
BEGIN {
PROCINFO["sorted_in"]="@ind_str_asc" # traversal order for for(i in a)
}
NR==1 { # the header cols is in the beginning of data file
# FORGET THIS: header cols from another file replace NR==1 with NR==FNR and see * below
split($0,a," ") # mkheader a[1]=first_col ...
for(i in a) { # replace with a[first_col]="" ...
a[a[i]]
printf "%6s%s", a[i], OFS # output the header
delete a[i] # remove a[1], a[2], ...
}
# next # FORGET THIS * next here if cols from another file UNTESTED
}
{
gsub(/: /,"=") # replace key-value separator ": " with "="
    split($0,b,FS)                        # split record on FS (whitespace/tab)
for(i in b) {
split(b[i],c,"=") # split key=value to c[1]=key, c[2]=value
b[c[1]]=c[2] # b[key]=value
}
for(i in a) # go thru headers in a[] and printf from b[]
printf "%6s%s", (i in b?b[i]:"NaN"), OFS; print ""
}
</code></pre>
<p>Data sample (<code>pandas.txt</code>):</p>
<pre><code>Col_01 Col_20 Col_21 Col_22 Col_23 Col_25
Col_01: 14 Col_20: 25 Col_21: 23432 Col_22: 639142
Col_01: 8 Col_20: 25 Col_22: 25134 Col_23: 243344
Col_01: 17 Col_21: 75 Col_23: 79876 Col_25: 634534 Col_22: 5 Col_24: 73453
Col_01: 19 Col_20: 25 Col_21: 32425 Col_23: 989423
Col_01: 12 Col_20: 25 Col_21: 23424 Col_22: 342421 Col_23: 7 Col_24: 13424 Col_25: 67
Col_01: 3 Col_20: 95 Col_21: 32121 Col_25: 111231
$ awk -f pandas.awk pandas.txt
Col_01 Col_20 Col_21 Col_22 Col_23 Col_25
14 25 23432 639142 NaN NaN
8 25 NaN 25134 243344 NaN
17 NaN 75 5 79876 634534
19 25 32425 NaN 989423 NaN
12 25 23424 342421 7 67
3 95 32121 NaN NaN 111231
</code></pre>
<p>All needed cols should be in the data file header. It's probably not a big job to collect the headers while processing, just keep the data in arrays and print in the end, maybe in version 3.</p>
<p>If you read the headers from a different file (<code>cols.txt</code>) than the data file (<code>pandas.txt</code>), execute the script (<code>pandas.awk</code>):</p>
<pre><code>$ awk -F pandas.awk cols.txt pandas.txt
</code></pre>
| 4
|
2016-09-08T20:33:00Z
|
[
"python",
"pandas",
"awk",
"sed",
"dataframe"
] |
How to preprocess and load a "big data" tsv file into a python dataframe?
| 39,398,986
|
<p>I am currently trying to import the following large tab-delimited file into a dataframe-like structure within Python---naturally I am using <code>pandas</code> dataframe, though I am open to other options. </p>
<p>This file is several GB in size, and is not a standard <code>tsv</code> file---it is broken, i.e. the rows have a different number of columns. One row may have 25 columns, another has 21. </p>
<p>Here is an example of the data:</p>
<pre><code>Col_01: 14 .... Col_20: 25 Col_21: 23432 Col_22: 639142
Col_01: 8 .... Col_20: 25 Col_22: 25134 Col_23: 243344
Col_01: 17 .... Col_21: 75 Col_23: 79876 Col_25: 634534 Col_22: 5 Col_24: 73453
Col_01: 19 .... Col_20: 25 Col_21: 32425 Col_23: 989423
Col_01: 12 .... Col_20: 25 Col_21: 23424 Col_22: 342421 Col_23: 7 Col_24: 13424 Col_25: 67
Col_01: 3 .... Col_20: 95 Col_21: 32121 Col_25: 111231
</code></pre>
<p>As you can see, some of these columns are not in the correct order...</p>
<p>Now, I think the correct way to import this file into a dataframe is to preprocess the data such that you can output a dataframe with <code>NaN</code> values, e.g. </p>
<pre><code>Col_01 .... Col_20 Col_21 Col22 Col23 Col24 Col25
8 .... 25 NaN 25134 243344 NaN NaN
17 .... NaN 75 2 79876 73453 634534
19 .... 25 32425 NaN 989423 NaN NaN
12 .... 25 23424 342421 7 13424 67
3 .... 95 32121 NaN NaN NaN 111231
</code></pre>
<p>To make this even more complicated, this is a very large file, several GB in size. </p>
<p>Normally, I try to process the data in chunks, e.g. </p>
<pre><code>import pandas as pd
for chunk in pd.read_table(FILE_PATH, header=None, sep='\t', chunksize=10**6):
# place chunks into a dataframe or HDF
</code></pre>
<p>However, I see no way to "preprocess" the data first in chunks, and then use chunks to read the data into <code>pandas.read_table()</code>. How would you do this? What sort of preprocessing tools are available---perhaps <code>sed</code>? <code>awk</code>? </p>
<p>This is a challenging problem, due to the size of the data and the formatting that must be done before loading into a dataframe. Any help appreciated. </p>
| 2
|
2016-09-08T19:41:31Z
| 39,516,975
|
<p>Another version which takes a separate column file as parameter or uses the first record. Run either way:</p>
<pre><code>awk -f pandas2.awk pandas.txt # first record as header
awk -f pandas2.awk cols.txt pandas.txt # first record from cols.txt
awk -v cols="cols.txt" -f pandas2.awk pandas.txt # read cols from cols.txt
</code></pre>
<p>Or even:</p>
<pre><code>awk -v cols="pandas.txt" -f pandas2.awk pandas.txt # separates keys from pandas.txt for header
</code></pre>
<p>Code:</p>
<pre><code>$ cat > pandas2.awk
BEGIN {
PROCINFO["sorted_in"]="@ind_str_asc" # traversal order for for(i in a)
if(cols) { # if -v cols="column_file.txt" or even "pandas.txt"
while ((getline line< cols)>0) { # read it in line by line
gsub(/: [^ ]+/,"",line) # remove values from "key: value"
split(line,a) # split to temp array
for(i in a) # collect keys to column array
col[a[i]]
}
for(i in col) # output columns
printf "%6s%s", i, OFS
print ""
}
}
NR==1 && cols=="" { # if the header cols are in the beginning of data file
# if not, -v cols="column_file.txt"
split($0,a," +") # split header record by spaces
for(i in a) {
col[a[i]] # set them to array col
printf "%6s%s", a[i], OFS # output the header
}
print ""
}
NR==1 {
next
}
{
gsub(/: /,"=") # replace key-value separator ": " with "="
split($0,b,FS) # split record from separator FS
for(i in b) {
split(b[i],c,"=") # split key=value to c[1]=key, c[2]=value
b[c[1]]=c[2] # b[key]=value
}
for(i in col) # go thru headers in col[] and printf from b[]
printf "%6s%s", (i in b?b[i]:"NaN"), OFS; print ""
}
</code></pre>
| 3
|
2016-09-15T17:24:51Z
|
[
"python",
"pandas",
"awk",
"sed",
"dataframe"
] |
How to preprocess and load a "big data" tsv file into a python dataframe?
| 39,398,986
|
<p>I am currently trying to import the following large tab-delimited file into a dataframe-like structure within Python---naturally I am using <code>pandas</code> dataframe, though I am open to other options. </p>
<p>This file is several GB in size, and is not a standard <code>tsv</code> file---it is broken, i.e. the rows have a different number of columns. One row may have 25 columns, another has 21. </p>
<p>Here is an example of the data:</p>
<pre><code>Col_01: 14 .... Col_20: 25 Col_21: 23432 Col_22: 639142
Col_01: 8 .... Col_20: 25 Col_22: 25134 Col_23: 243344
Col_01: 17 .... Col_21: 75 Col_23: 79876 Col_25: 634534 Col_22: 5 Col_24: 73453
Col_01: 19 .... Col_20: 25 Col_21: 32425 Col_23: 989423
Col_01: 12 .... Col_20: 25 Col_21: 23424 Col_22: 342421 Col_23: 7 Col_24: 13424 Col_25: 67
Col_01: 3 .... Col_20: 95 Col_21: 32121 Col_25: 111231
</code></pre>
<p>As you can see, some of these columns are not in the correct order...</p>
<p>Now, I think the correct way to import this file into a dataframe is to preprocess the data such that you can output a dataframe with <code>NaN</code> values, e.g. </p>
<pre><code>Col_01 .... Col_20 Col_21 Col22 Col23 Col24 Col25
8 .... 25 NaN 25134 243344 NaN NaN
17 .... NaN 75 2 79876 73453 634534
19 .... 25 32425 NaN 989423 NaN NaN
12 .... 25 23424 342421 7 13424 67
3 .... 95 32121 NaN NaN NaN 111231
</code></pre>
<p>To make this even more complicated, this is a very large file, several GB in size. </p>
<p>Normally, I try to process the data in chunks, e.g. </p>
<pre><code>import pandas as pd
for chunk in pd.read_table(FILE_PATH, header=None, sep='\t', chunksize=10**6):
# place chunks into a dataframe or HDF
</code></pre>
<p>However, I see no way to "preprocess" the data first in chunks, and then use chunks to read the data into <code>pandas.read_table()</code>. How would you do this? What sort of preprocessing tools are available---perhaps <code>sed</code>? <code>awk</code>? </p>
<p>This is a challenging problem, due to the size of the data and the formatting that must be done before loading into a dataframe. Any help appreciated. </p>
| 2
|
2016-09-08T19:41:31Z
| 39,523,179
|
<p>You can do this more cleanly, entirely in Pandas.</p>
<p>Suppose you have two independent data frames with only one overlapping column:</p>
<pre><code>>>> df1
A B
0 1 2
>>> df2
B C
1 3 4
</code></pre>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#concatenating-objects" rel="nofollow">.concat</a> to concatenate them together:</p>
<pre><code>>>> pd.concat([df1, df2])
A B C
0 1 2 NaN
1 NaN 3 4
</code></pre>
<p>You can see <code>NaN</code> is created for row values that do not exist. </p>
<p>This can easily be applied to your example data without preprocessing at all:</p>
<pre><code>import pandas as pd
df=pd.DataFrame()
with open(fn) as f_in:
for i, line in enumerate(f_in):
line_data=pd.DataFrame({k.strip():v.strip()
for k,_,v in (e.partition(':')
for e in line.split('\t'))}, index=[i])
df=pd.concat([df, line_data])
>>> df
Col_01 Col_20 Col_21 Col_22 Col_23 Col_24 Col_25
0 14 25 23432 639142 NaN NaN NaN
1 8 25 NaN 25134 243344 NaN NaN
2 17 NaN 75 5 79876 73453 634534
3 19 25 32425 NaN 989423 NaN NaN
4 12 25 23424 342421 7 13424 67
5 3 95 32121 NaN NaN NaN 111231
</code></pre>
<hr>
<p>Alternatively, if your main issue is establishing the desired order of the columns across multiple chunks, just read all the column names first (not tested):</p>
<pre><code># based on the alpha numeric sort of the example of:
# [ALPHA]_[NUM]
headers=set()
with open(fn) as f:
for line in f:
for record in line.split('\t'):
head,_,datum=record.partition(":")
headers.add(head)
# sort as you wish:
cols=sorted(headers, key=lambda e: int(e.partition('_')[2]))
</code></pre>
<p>Pandas will use the order of the list for the column order if given in the initial creation of the DataFrame.</p>
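<p>If the frame is already built as in the first snippet, a one-line sketch to enforce that order afterwards:</p>
<pre><code>df = df.reindex(columns=cols)  # columns absent from a row stay NaN
</code></pre>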
| 1
|
2016-09-16T03:04:31Z
|
[
"python",
"pandas",
"awk",
"sed",
"dataframe"
] |
My loss with fit_generator is 0.0000e+00 (using Keras)
| 39,399,029
|
<p>I am trying to use Keras on a "large" dataset for my GPU. To do so, I make use of fit_generator; the problem is that my loss is 0.0000e+00 every time.</p>
<p>My print class and generator function:</p>
<pre><code>class printbatch(callbacks.Callback):
def on_batch_end(self, batch, logs={}):
if batch%10 == 0:
print "Batch " + str(batch) + " ends"
def on_epoch_begin(self, epoch, logs={}):
print(logs)
def on_epoch_end(self, epoch, logs={}):
print(logs)
def simpleGenerator():
X_train = f.get('X_train')
y_train = f.get('y_train')
total_examples = len(X_train)
examples_at_a_time = 6
range_examples = int(total_examples/examples_at_a_time)
while 1:
for i in range(range_examples): # samples
yield X_train[i*examples_at_a_time:(i+1)*examples_at_a_time], y_train[i*examples_at_a_time:(i+1)*examples_at_a_time]
</code></pre>
<p>This is how I use them:</p>
<pre><code>f = h5py.File(cache_file, 'r')
pb = printbatch()
sg = simpleGenerator()
class_weighting = [0.2595, 0.1826, 4.5640, 0.1417, 0.5051, 0.3826, 9.6446, 1.8418, 6.6823, 6.2478, 3.0, 7.3614]
history = autoencoder.fit_generator(sg, samples_per_epoch=366, nb_epoch=10, verbose=2, show_accuracy=True, callbacks=[pb], validation_data=None, class_weight=class_weighting)
</code></pre>
<p>This is (a part of) my output:</p>
<pre><code>{}
Epoch 1/100
Batch 0 ends
Batch 10 ends
Batch 20 ends
Batch 30 ends
Batch 40 ends
Batch 50 ends
Batch 60 ends
{'loss': 0.0}
120s - loss: 0.0000e+00
[...]
{}
Epoch 9/10
Batch 0 ends
Batch 10 ends
Batch 20 ends
Batch 30 ends
Batch 40 ends
Batch 50 ends
Batch 60 ends
{'loss': 0.0}
124s - loss: 0.0000e+00
{}
Epoch 10/10
Batch 0 ends
Batch 10 ends
Batch 20 ends
Batch 30 ends
Batch 40 ends
Batch 50 ends
Batch 60 ends
{'loss': 0.0}
127s - loss: 0.0000e+00
Training completed in 1263.76883411 seconds
</code></pre>
<p>X_train and y_train shapes are:</p>
<pre><code>X_train.shape
Out[5]: (366, 3, 360, 480)
y_train.shape
Out[6]: (366, 172800, 12)
</code></pre>
<p>So my question is, how could I solve the 'loss: 0.0000e+00' issue?</p>
<p>Thank you for your time.</p>
<p>Edit: the model, the original comes from pradyu1993.github.io/2016/03/08/segnet-post.html by Pradyumna.</p>
<pre><code>class UnPooling2D(Layer):
"""A 2D Repeat layer"""
def __init__(self, poolsize=(2, 2)):
super(UnPooling2D, self).__init__()
self.poolsize = poolsize
@property
def output_shape(self):
input_shape = self.input_shape
return (input_shape[0], input_shape[1],
self.poolsize[0] * input_shape[2],
self.poolsize[1] * input_shape[3])
def get_output(self, train):
X = self.get_input(train)
s1 = self.poolsize[0]
s2 = self.poolsize[1]
output = X.repeat(s1, axis=2).repeat(s2, axis=3)
return output
def get_config(self):
return {"name":self.__class__.__name__,
"poolsize":self.poolsize}
def create_encoding_layers():
kernel = 3
filter_size = 64
pad = 1
pool_size = 2
return [
ZeroPadding2D(padding=(pad,pad)),
Convolution2D(filter_size, kernel, kernel, border_mode='valid'),
BatchNormalization(),
Activation('relu'),
MaxPooling2D(pool_size=(pool_size, pool_size)),
ZeroPadding2D(padding=(pad,pad)),
Convolution2D(128, kernel, kernel, border_mode='valid'),
BatchNormalization(),
Activation('relu'),
MaxPooling2D(pool_size=(pool_size, pool_size)),
ZeroPadding2D(padding=(pad,pad)),
Convolution2D(256, kernel, kernel, border_mode='valid'),
BatchNormalization(),
Activation('relu'),
MaxPooling2D(pool_size=(pool_size, pool_size)),
ZeroPadding2D(padding=(pad,pad)),
Convolution2D(512, kernel, kernel, border_mode='valid'),
BatchNormalization(),
Activation('relu'),
]
def create_decoding_layers():
kernel = 3
filter_size = 64
pad = 1
pool_size = 2
return[
ZeroPadding2D(padding=(pad,pad)),
Convolution2D(512, kernel, kernel, border_mode='valid'),
BatchNormalization(),
UpSampling2D(size=(pool_size,pool_size)),
ZeroPadding2D(padding=(pad,pad)),
Convolution2D(256, kernel, kernel, border_mode='valid'),
BatchNormalization(),
UpSampling2D(size=(pool_size,pool_size)),
ZeroPadding2D(padding=(pad,pad)),
Convolution2D(128, kernel, kernel, border_mode='valid'),
BatchNormalization(),
UpSampling2D(size=(pool_size,pool_size)),
ZeroPadding2D(padding=(pad,pad)),
Convolution2D(filter_size, kernel, kernel, border_mode='valid'),
BatchNormalization(),
]
</code></pre>
<p>And:</p>
<pre><code>autoencoder = models.Sequential()
autoencoder.add(Layer(input_shape=(3, img_rows, img_cols)))
autoencoder.encoding_layers = create_encoding_layers()
autoencoder.decoding_layers = create_decoding_layers()
for l in autoencoder.encoding_layers:
autoencoder.add(l)
for l in autoencoder.decoding_layers:
autoencoder.add(l)
autoencoder.add(Convolution2D(12, 1, 1, border_mode='valid',))
autoencoder.add(Reshape((12,img_rows*img_cols), input_shape=(12,img_rows,img_cols)))
autoencoder.add(Permute((2, 1)))
autoencoder.add(Activation('softmax'))
autoencoder.compile(loss="categorical_crossentropy", optimizer='adadelta')
</code></pre>
| 0
|
2016-09-08T19:44:15Z
| 39,437,121
|
<p>I solved this issue. The problem was that in '.theanorc' I had float16: this is not enough, so I changed it to float64 and now it works. </p>
<p>This is my '.theanorc' at the moment:</p>
<pre><code>[global]
device = gpu
floatX = float64
optimizer_including=cudnn
[lib]
cnmem=0.90
[blas]
ldflags = -L/usr/local/lib -lopenblas
[nvcc]
fastmath = True
[cuda]
root = /usr/local/cuda/
</code></pre>
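<p>A quick sanity check (just a sketch) to confirm which precision Theano actually picked up:</p>
<pre><code>import theano
print(theano.config.floatX)  # should now report 'float64'
</code></pre>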
| 0
|
2016-09-11T14:31:29Z
|
[
"python",
"deep-learning",
"keras",
"autoencoder"
] |
Python3 script not showing same result as MySQL engine for same query
| 39,399,033
|
<p>My python3 script is not generating the same result as MySQL. My query returns those rows whose values have changed over the week.</p>
<p>Python script:</p>
<pre><code>query = "SELECT cw.opportunityid, cw.probability, pw.probability, cw.stage, pw.stage, cw.amount, pw.amount, " \
"cw.closedate, pw.closedate " \
"FROM opty_data cw " \
"LEFT JOIN opty_data pw ON cw.opportunityid = pw.opportunityid " \
"AND pw.Week = \"{0}\" " \
"WHERE cw.Week = \"{1}\" " \
"AND IF(pw.opportunityid IS NULL, TRUE, ((cw.probability <> pw.probability) OR (cw.stage <> pw.stage) " \
"OR (cw.Amount<>pw.Amount) OR (cw.CloseDate <> pw.CloseDate)))".format(Prev_Week,Curr_Week)
cursor.execute(query)
results = dictfetchall(cursor)
print(results)
</code></pre>
<p>Output: </p>
<pre><code>[{
'opportunityid' : '1',
'probability' : '50',
'amount' : Decimal('30.35'),
'closedate' : datetime.date(2016, 8, 22),
'stage' : 'Proposal'
}, {
'opportunityid' : '2',
'probability' : '50',
'amount' : Decimal('115.00'),
'closedate' : datetime.date(2016, 6, 30),
'stage' : 'Proposal'
}, {
'opportunityid' : '3',
'probability' : '50',
'amount' : Decimal('200.00'),
'closedate' : datetime.date(2016, 8, 29),
'stage' : 'Proposal'
}
]
</code></pre>
<p>Query:</p>
<pre><code>SELECT cw.opportunityid, cw.probability, pw.probability, cw.stage, pw.stage, cw.amount, pw.amount, cw.closedate, pw.closedate FROM opty_data cw
LEFT JOIN opty_data pw
ON cw.opportunityid = pw.opportunityid AND pw.Week = "2016-35" WHERE cw.Week = "2016-36"
AND IF(pw.opportunityid IS NULL, TRUE, ((cw.probability <> pw.probability) OR (cw.stage <> pw.stage) OR (cw.Amount<>pw.Amount) OR (cw.CloseDate <> pw.CloseDate)))
</code></pre>
<p>Expected Output shown correctly by MySQL:</p>
<pre><code>+-----------------+-------------+-------------+----------------+----------------
+------------+-------------+------------+------------+
| opportunityid | probability | probability | stage | stage
| amount | amount | closedate | closedate |
+-----------------+-------------+-------------+----------------+----------------
+------------+-------------+------------+------------+
| 1 | 50 | 50 | Proposal | Proposal
| 20.35 | 30.35 | 2016-08-22 | 2016-08-22 |
| 2 | 50 | 50 | Proposal | Proposal
| 113.00 | 115.00 | 2016-09-06 | 2016-06-30 |
| 3 | 0 | 50 | Drop | Proposal
| 200.00 | 200.00 | 2016-08-29 | 2016-08-29 |
</code></pre>
| 0
|
2016-09-08T19:44:27Z
| 39,399,465
|
<p>A Python dictionary consists of unique key-value pairs, so a key can only appear once in a dictionary. Since your raw SQL query returns two columns with the same name in a single row, the second occurrence of the column name overwrites the first in the dictionary. However, you can easily fix this by specifying aliases for the columns using the AS keyword. Check out the following example:</p>
<pre><code>SELECT
p1.name AS p1name,
p2.name AS p2name
FROM
p1, p2
WHERE
p1.id != p2.id
</code></pre>
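<p>Applied to the query in the question, the fix is just to alias the overlapping columns (a sketch; the alias names are illustrative):</p>
<pre><code>query = ("SELECT cw.opportunityid, "
         "cw.probability AS cw_probability, pw.probability AS pw_probability, "
         "cw.stage AS cw_stage, pw.stage AS pw_stage, "
         "cw.amount AS cw_amount, pw.amount AS pw_amount, "
         "cw.closedate AS cw_closedate, pw.closedate AS pw_closedate "
         "FROM opty_data cw "
         # ... keep the LEFT JOIN / WHERE clauses from the question unchanged ...
         )
</code></pre>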
| 1
|
2016-09-08T20:15:29Z
|
[
"python",
"mysql",
"sql",
"python-3.x"
] |
Django bulk_create a list of lists
| 39,399,049
|
<p>As the title indicates, is there a way to bulk_create list of lists. Like right now I bulk_create it like - </p>
<pre><code>for i in range(len(x)):
arr1 = []
for m in range(len(y)):
arr1.append(DataModel(foundation=foundation, date=dates[m], price=price[m]))
DataModel.objects.bulk_create(arr1)
</code></pre>
<p>Now this will bulk create till the length of x.</p>
<p>Can it be done like so -</p>
<pre><code>arr = []
for i in range(len(x)):
arr1 = []
for m in range(len(y)):
arr1.append(DataModel(foundation=foundation, date=dates[m], price=price[m]))
arr.append(arr1)
DataModel.objects.bulk_create(arr)
</code></pre>
<p>If not, what else can be done to store data faster?</p>
| 0
|
2016-09-08T19:45:32Z
| 39,399,177
|
<p>Append your objects directly to <code>arr</code> (one flat list), not to <code>arr1</code> (see the sketch after the snippet below).<br>
Or you can flatten the nested list before calling <code>bulk_create</code>:</p>
<pre><code>import itertools
arr = list(itertools.chain(*arr))
</code></pre>
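<p>For the first suggestion, a sketch of the loop building one flat list directly:</p>
<pre><code>arr = []
for i in range(len(x)):
    for m in range(len(y)):
        arr.append(DataModel(foundation=foundation, date=dates[m], price=price[m]))
DataModel.objects.bulk_create(arr)
</code></pre>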
| 0
|
2016-09-08T19:54:59Z
|
[
"python",
"django"
] |
Django bulk_create a list of lists
| 39,399,049
|
<p>As the title indicates, is there a way to bulk_create list of lists. Like right now I bulk_create it like - </p>
<pre><code>for i in range(len(x)):
arr1 = []
for m in range(len(y)):
arr1.append(DataModel(foundation=foundation, date=dates[m], price=price[m]))
DataModel.objects.bulk_create(arr1)
</code></pre>
<p>Now this will bulk create till the length of x.</p>
<p>Can it be done like so -</p>
<pre><code>arr = []
for i in range(len(x)):
arr1 = []
for m in range(len(y)):
arr1.append(DataModel(foundation=foundation, date=dates[m], price=price[m]))
arr.append(arr1)
DataModel.objects.bulk_create(arr)
</code></pre>
<p>If not, what else can be done to store data faster?</p>
| 0
|
2016-09-08T19:45:32Z
| 39,399,222
|
<p>Try this....</p>
<pre><code>arr = []
for i in range(len(x)):
arr1 = []
for m in range(len(y)):
arr1.append(DataModel(foundation=foundation, date=dates[m], price=price[m]))
#instead of appending the list, add list together to make one
arr = arr + arr1
DataModel.objects.bulk_create(arr)
</code></pre>
<p>this will produce a single list of items process-able by <code>bulk_create()</code> method</p>
<p>reference: <a href="http://stackoverflow.com/questions/2022031/python-append-vs-operator-on-lists-why-do-these-give-different-results">Python append() vs. + operator on lists, why do these give different results?</a></p>
| 0
|
2016-09-08T19:57:49Z
|
[
"python",
"django"
] |
strip left and right in php
| 39,399,069
|
<p>I am converting the following python into php</p>
<p>the aim is to remove scores from a string like "Liverpool 1 v 0 Everton"</p>
<pre><code>home, away = event_data.get("desc").split(' v ')
# remove scores from event desc
if home.rsplit(' ', 1)[1].isdigit() and away.split(' ', 1)[0].isdigit():
event_name = home.rsplit(' ', 1)[0] + " v " + away.split(' ', 1)[1]
</code></pre>
<p>in php so far</p>
<pre><code>$nameArray = explode(' v ', $value['name']);
$home = $nameArray[0];
$away = $nameArray[1];
$event_name = $home . ' v ' . $away;
</code></pre>
<p>I'm struggling with stripping the scores, any tips?</p>
| 2
|
2016-09-08T19:46:53Z
| 39,399,183
|
<p>Using <code>preg_replace</code> in PHP you can remove the digits around <code>" v "</code>:</p>
<pre><code>$str = "Liverpool 1 v 0 Everton";
$event_name = preg_replace('/\h+\d+\h+v\h+\d+\h+/', ' v ', $str);
echo $event_name . "\n";
//=> Liverpool v Everton
</code></pre>
| 2
|
2016-09-08T19:55:19Z
|
[
"php",
"python",
"regex"
] |
Python+Selenium, can't click the 'button' wrapped by span
| 39,399,266
|
<p>I am new to selenium here. I am trying to use selenium to click a 'more' button to expand the review section everytime after refreshing the page. </p>
<p>The website is TripAdvisor. The logic of <code>more</code> button is, as long as you click on the first <code>more</code> button, it will automatically expand all the review sections for you. In other words, you just need to click on the first 'more' button. </p>
<p>All buttons have a similar class name. An example is like <code>taLnk.hvrIE6.tr415411081.moreLink.ulBlueLinks</code>. Only the numbers part changes everytime. </p>
<p>The full element look like this:</p>
<pre><code><span class="taLnk hvrIE6 tr413756996 moreLink ulBlueLinks" onclick=" var options = {
flow: 'CORE_COMBINED',
pid: 39415,
onSuccess: function() { ta.util.cookie.setPIDCookie(2247); ta.call('ta.servlet.Reviews.expandReviews', {type: 'dummy'}, ta.id('review_413756996'), 'review_413756996', '1', 2247);; window.location.hash = 'review_413756996'; }
};
ta.call('ta.registration.RegOverlay.show', {type: 'dummy'}, ta.id('review_413756996'), options);
return false;
">
More&nbsp; </span>
</code></pre>
<p>I have tried several ways to get the button click. But since it is an onclick event wrapped by span, I can't successfully get it clicked. </p>
<p>My last version looks like this:</p>
<pre><code>driver = webdriver.Firefox()
driver.get(newurl)
page_source = driver.page_source
soup = BeautifulSoup(page_source)
moreID = soup.find("span", class_=re.compile(r'.*\bmoreLink\b.*'))['class']
moreID = '.'.join(moreID[0:(len(moreID)+1)])
moreButton = 'span.' + moreID
button = driver.find_element_by_css_selector(moreButton)
button.click()
time.sleep(10)
</code></pre>
<p>However, I keep getting the error message like this: </p>
<blockquote>
<p>WebDriverException: Message: Element is not clickable at point (318.5,
7.100006103515625). Other element would receive the click....</p>
</blockquote>
<p>Can you advise me on how to fix the problem? Any help will be appreciated!</p>
| 1
|
2016-09-08T20:00:31Z
| 39,399,403
|
<p>Try using an <a href="http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.common.action_chains" rel="nofollow"><code>ActionChains</code></a>:</p>
<pre><code>from selenium.webdriver.common.action_chains import ActionChains
# Your existing code here
# Minus the `button.click()` line
ActionChains(driver).move_to_element(button).click().perform()
</code></pre>
<p>I have used this technique when I need to click on a <code><div></code> or a <code><span></code> element, rather than an actual button or link.</p>
| 0
|
2016-09-08T20:11:07Z
|
[
"python",
"selenium",
"onclick",
"css-selectors",
"webdriver"
] |
Python+Selenium, can't click the 'button' wrapped by span
| 39,399,266
|
<p>I am new to selenium here. I am trying to use selenium to click a 'more' button to expand the review section everytime after refreshing the page. </p>
<p>The website is TripAdvisor. The logic of <code>more</code> button is, as long as you click on the first <code>more</code> button, it will automatically expand all the review sections for you. In other words, you just need to click on the first 'more' button. </p>
<p>All buttons have a similar class name. An example is like <code>taLnk.hvrIE6.tr415411081.moreLink.ulBlueLinks</code>. Only the numbers part changes everytime. </p>
<p>The full element look like this:</p>
<pre><code><span class="taLnk hvrIE6 tr413756996 moreLink ulBlueLinks" onclick=" var options = {
flow: 'CORE_COMBINED',
pid: 39415,
onSuccess: function() { ta.util.cookie.setPIDCookie(2247); ta.call('ta.servlet.Reviews.expandReviews', {type: 'dummy'}, ta.id('review_413756996'), 'review_413756996', '1', 2247);; window.location.hash = 'review_413756996'; }
};
ta.call('ta.registration.RegOverlay.show', {type: 'dummy'}, ta.id('review_413756996'), options);
return false;
">
More&nbsp; </span>
</code></pre>
<p>I have tried several ways to get the button click. But since it is an onclick event wrapped by span, I can't successfully get it clicked. </p>
<p>My last version looks like this:</p>
<pre><code>driver = webdriver.Firefox()
driver.get(newurl)
page_source = driver.page_source
soup = BeautifulSoup(page_source)
moreID = soup.find("span", class_=re.compile(r'.*\bmoreLink\b.*'))['class']
moreID = '.'.join(moreID[0:(len(moreID)+1)])
moreButton = 'span.' + moreID
button = driver.find_element_by_css_selector(moreButton)
button.click()
time.sleep(10)
</code></pre>
<p>However, I keep getting the error message like this: </p>
<blockquote>
<p>WebDriverException: Message: Element is not clickable at point (318.5,
7.100006103515625). Other element would receive the click....</p>
</blockquote>
<p>Can you advise me on how to fix the problem? Any help will be appreciated!</p>
| 1
|
2016-09-08T20:00:31Z
| 39,399,507
|
<blockquote>
<p>WebDriverException: Message: Element is not clickable at point (318.5, 7.100006103515625). Other element would receive the click....</p>
</blockquote>
<p>This error occurs when the element is not in the viewport, or when Selenium cannot click it because some other element overlays it. In this case you should try one of the following solutions:-</p>
<ul>
<li><p>You can try using <code>ActionChains</code> to move to that element before clicking, as below:-</p>
<pre><code>from selenium.webdriver.common.action_chains import ActionChains
button = driver.find_element_by_css_selector(moreButton)
ActionChains(driver).move_to_element(button).click().perform()
</code></pre></li>
<li><p>You can try using <code>execute_script()</code> to scroll that element into view before clicking:-</p>
<pre><code>driver.execute_script("arguments[0].scrollIntoView(true)", button)
button.click()
</code></pre></li>
<li><p>You can try using <code>JavaScript::click()</code> with <code>execute_script()</code>, but this <code>JavaScript::click()</code> defeats the purpose of the test: first, because it doesn't generate all the events of a real click (focus, blur, mousedown, mouseup...), and second, because it doesn't guarantee that a real user can interact with the element. Still, to work around these issues you can consider it as an alternative solution.</p>
<pre><code>driver.execute_script("arguments[0].click()", button)
</code></pre></li>
</ul>
<p><strong>Note</strong>:- Before using these options, make sure you're interacting with the correct element via the correct locator; otherwise a plain <code>WebElement.click()</code> works well after waiting until the element is visible and clickable using <code>WebDriverWait</code>.</p>
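<p>A minimal sketch of such an explicit wait (reusing the <code>moreButton</code> CSS selector built in the question):</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait up to 10 seconds for the element to become visible and clickable
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, moreButton)))
button.click()
</code></pre>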
| 1
|
2016-09-08T20:18:25Z
|
[
"python",
"selenium",
"onclick",
"css-selectors",
"webdriver"
] |
Converting stdout stream to html (add <br> on linebreaks)
| 39,399,281
|
<p>I'm trying to take some console output text and render it through django/js in a modal on my site. When printing the console output the line breaks work fine, but when rendered on the site it shows them all as one line. I tried replacing all the \n with <code><br></code> but it didn't seem to have any effect. The <code><br></code> are shown in line as plain text. Any thoughts on a better way to do this/why this isn't working in the first place?</p>
<pre><code>import sys
from io import StringIO
# Save the old stdout
old_stdout = sys.stdout
# Save the stdout to variable
sys.stdout = mystdout = StringIO()
... # Do some processing that generates console text
# Reset the to the old stdout
sys.stdout = old_stdout
# Get the stdout
processing_std_out = mystdout.getvalue()
# Replace all the linebreaks with <br>
# This is the important part
processing_std_out = processing_std_out.replace("\n","<br>")
# return HttpResponse(json.dumps({'console_output':processing_std_out}), content_type="application/json")
</code></pre>
<p>The js is this:</p>
<pre><code>input_modal.find('.modal-body').text('Analysis complete'+response.console_output)
</code></pre>
| 0
|
2016-09-08T20:01:48Z
| 39,401,918
|
<p>Simple mistake, I should have been passing the HTML, not the text. Also, adding pre tags to the text is much simpler than replacing all \n</p>
<pre><code>input_modal.find('.modal-body').html('Analysis complete'+response.console_output)
</code></pre>
| 0
|
2016-09-09T00:09:03Z
|
[
"javascript",
"python",
"html",
"django"
] |
Why is it considered bad practice to hardcode the name of a class inside that class's methods?
| 39,399,372
|
<p>In python, why is it a bad thing to do something like this:</p>
<pre><code>class Circle:
pi = 3.14159 # class variable
def __init__(self, r = 1):
self.radius = r
def area(self):
return Circle.pi * squared(self.radius)
def squared(base): return pow(base, 2)
</code></pre>
<p>The area method could be defined as follows:</p>
<pre><code>def area(self): return self.__class__.pi * squared(self.radius)
</code></pre>
<p>which is, unless I'm very much mistaken, considered a better way to reference a class variable. The question is why? Intuitively, I don't like it but I don't seem to completely understand this.</p>
| 4
|
2016-09-08T20:08:46Z
| 39,399,506
|
<p>Because if you subclass the class, the hardcoded name will keep referring to the parent class rather than to the actual class of the instance. In your case it really doesn't make a difference, but in many cases it does:</p>
<pre><code>class Rectangle(object):
name = "Rectangle"
def print_name(self):
print(self.__class__.name) # or print(type(self).name)
class Square(Rectangle):
name = "Square"
</code></pre>
<p>If you instantiate <code>Square</code> and then call its <code>print_name</code> method, it'll print "Square". If you'd use <code>Rectangle.name</code> instead of <code>self.__class__.name</code> (or <code>type(self).name</code>), it'd print "Rectangle".</p>
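<p>A quick demonstration (a sketch) of the difference:</p>
<pre><code>Square().print_name()   # prints "Square" with self.__class__.name (or type(self).name)
                        # it would print "Rectangle" if the method hardcoded Rectangle.name
</code></pre>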
| 4
|
2016-09-08T20:18:24Z
|
[
"python",
"oop"
] |
Why is it considered bad practice to hardcode the name of a class inside that class's methods?
| 39,399,372
|
<p>In python, why is it a bad thing to do something like this:</p>
<pre><code>class Circle:
pi = 3.14159 # class variable
def __init__(self, r = 1):
self.radius = r
def area(self):
return Circle.pi * squared(self.radius)
def squared(base): return pow(base, 2)
</code></pre>
<p>The area method could be defined as follows:</p>
<pre><code>def area(self): return self.__class__.pi * squared(self.radius)
</code></pre>
<p>which is, unless I'm very much mistaken, considered a better way to reference a class variable. The question is why? Intuitively, I don't like it but I don't seem to completely understand this.</p>
| 4
|
2016-09-08T20:08:46Z
| 39,399,604
|
<p>I can name two reasons here:</p>
<p>Inheritance</p>
<pre><code>class WeirdCircle(Circle):
pi = 4
c = WeirdCircle()
print(c.area())
# returning 4 with self.__class__.pi
# and 3.14159 with Circle.pi
</code></pre>
<p>When you want to rename the class, there is only one spot to modify. </p>
| 2
|
2016-09-08T20:24:51Z
|
[
"python",
"oop"
] |
Why is it considered bad practice to hardcode the name of a class inside that class's methods?
| 39,399,372
|
<p>In python, why is it a bad thing to do something like this:</p>
<pre><code>class Circle:
pi = 3.14159 # class variable
def __init__(self, r = 1):
self.radius = r
def area(self):
return Circle.pi * squared(self.radius)
def squared(base): return pow(base, 2)
</code></pre>
<p>The area method could be defined as follows:</p>
<pre><code>def area(self): return self.__class__.pi * squared(self.radius)
</code></pre>
<p>which is, unless I'm very much mistaken, considered a better way to reference a class variable. The question is why? Intuitively, I don't like it but I don't seem to completely understand this.</p>
| 4
|
2016-09-08T20:08:46Z
| 39,399,877
|
<blockquote>
<p>Why is it considered bad practice to hardcode the name of a class inside that class's methods?</p>
</blockquote>
<p><strong>It's not.</strong> I don't know why you think it is.</p>
<p>There are plenty of good reasons to hardcode the name of a class inside its methods. For example, using <code>super</code> on Python 2:</p>
<pre><code>super(ClassName, self).whatever()
</code></pre>
<p>People often try to replace this with <code>super(self.__class__, self).whatever()</code>, and they are <strong>dead wrong</strong> to do so. The first argument <strong>must</strong> be the actual class the <code>super</code> call occurs in, not <code>self.__class__</code>, or the lookup will find the wrong method.</p>
<p>Another reason to hardcode the class name is to avoid overrides. For example, say you've implemented one method using another, as follows:</p>
<pre><code>class Foo(object):
    def big_complicated_calculation(self):
        return # some horrible mess
    def slightly_different_calculation(self):
        return self.big_complicated_calculation() + 2
</code></pre>
<p>If you want <code>slightly_different_calculation</code> to be independent of overrides of <code>big_complicated_calculation</code>, you can explicitly refer to <code>Foo.big_complicated_calculation</code>:</p>
<pre><code>def slightly_different_calculation(self):
    return Foo.big_complicated_calculation(self) + 2
</code></pre>
<p>Even when you <em>do</em> want to pick up overrides, it's usually better to change <code>ClassName.whatever</code> to <code>self.whatever</code> instead of <code>self.__class__.whatever</code>.</p>
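<p>A short sketch contrasting the two behaviours (the method names <code>pinned</code> and <code>dynamic</code> and the return values are made up for illustration):</p>
<pre><code>class Foo(object):
    def big_complicated_calculation(self):
        return 10

    def pinned(self):
        return Foo.big_complicated_calculation(self) + 2   # ignores overrides

    def dynamic(self):
        return self.big_complicated_calculation() + 2      # picks up overrides

class Bar(Foo):
    def big_complicated_calculation(self):
        return 100

b = Bar()
print(b.pinned())   # 12
print(b.dynamic())  # 102
</code></pre>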
| 2
|
2016-09-08T20:41:29Z
|
[
"python",
"oop"
] |
Why is it considered bad practice to hardcode the name of a class inside that class's methods?
| 39,399,372
|
<p>In python, why is it a bad thing to do something like this:</p>
<pre><code>class Circle:
    pi = 3.14159  # class variable
    def __init__(self, r = 1):
        self.radius = r
    def area(self):
        return Circle.pi * squared(self.radius)

def squared(base): return pow(base, 2)
</code></pre>
<p>The area method could be defined as follows:</p>
<pre><code>def area(self): return self.__class__.pi * squared(self.radius)
</code></pre>
<p>which is, unless I'm very much mistaken, considered a better way to reference a class variable. The question is why? Intuitively, I don't like it but I don't seem to completely understand this.</p>
| 4
|
2016-09-08T20:08:46Z
| 39,399,961
|
<p>The Zen of Python says to keep your code as simple as possible so it stays readable. Why get into using the class name or super at all? If you just use self, it will refer to the relevant class and print that class's variable. See the code below.</p>
<pre><code>class Rectangle(object):
    name = "Rectangle"
    def print_name(self):
        print(self.name)

class Square(Rectangle):
    name = 'square'

sq = Square()
sq.print_name()  # prints 'square'
</code></pre>
| 0
|
2016-09-08T20:48:01Z
|
[
"python",
"oop"
] |
Fabric runs bash script that ask for sudo password - How to send this password
| 39,399,393
|
<p>I want to use fabric to deploy an application on remote machines. For this, I use fabric to retrieve a bash script from a VCS (Bitbucket or GitHub) and execute it. However, the first step of my script is to add the current user to the sudoers file, so I am prompted for a password.</p>
<p>Is it possible to send this password in the fabfile or within the fab command or.... ?</p>
<p>A portion of code:</p>
<p><strong>bash</strong></p>
<pre><code>sudo tee /etc/sudoers.d/$USER <<END
END
file=/usr/share/MyCompagny/mybashscript.sh
sudo touch $file
sudo echo 'blablabla' >> $file
sudo /bin/rm /etc/sudoers.d/$USER
sudo -k
</code></pre>
<p><strong>fabfile</strong></p>
<pre><code>def deploy():
    env.hosts = ['192.168.100.160']
    source_folder = '/home/username/src'
    branch = 'dev'
    puts('Pulling changes from branch <{}>'.format(branch))
    projects = ['data', 'report']
    for project in projects:
        current_path = os.path.join(source_folder, 'package.{}'.format(project))
        with cd(current_path):
            puts('Current path: {}'.format(current_path))
            # Discard all pending changes
            run('git checkout -- .')
            # Checkout the right branch
            run('git checkout {}'.format(branch))
            # Pull changes
            run('git pull origin_ssh {}'.format(branch))
    puts('Install with bash script')
    with cd(source_folder):
        run('./mybashscript.sh')
</code></pre>
| 0
|
2016-09-08T20:10:22Z
| 39,437,636
|
<p>Use fabric's "<strong>sudo</strong>" function instead of the "<strong>run</strong>" function. The script won't prompt for a password since it will be running with sudo privileges.</p>
<pre><code>def deploy():
    env.hosts = ['192.168.100.160']
    source_folder = '/home/username/src'
    branch = 'dev'
    puts('Pulling changes from branch <{}>'.format(branch))
    projects = ['data', 'report']
    for project in projects:
        current_path = os.path.join(source_folder, 'package.{}'.format(project))
        with cd(current_path):
            puts('Current path: {}'.format(current_path))
            # Discard all pending changes
            sudo('git checkout -- .')
            # Checkout the right branch
            sudo('git checkout {}'.format(branch))
            # Pull changes
            sudo('git pull origin_ssh {}'.format(branch))
    puts('Install with bash script')
    with cd(source_folder):
        sudo('./mybashscript.sh')
</code></pre>
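<p>If the remote user does still need to give sudo a password, one way to supply it from the fabfile is Fabric's <code>env.password</code>. A rough sketch for Fabric 1.x; the host, user and password below are placeholders:</p>
<pre><code>from fabric.api import env, sudo

env.hosts = ['192.168.100.160']
env.user = 'username'      # placeholder
env.password = 's3cr3t'    # used for the SSH login and for answering sudo prompts

def install():
    sudo('./mybashscript.sh')
</code></pre>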
<p>Hope it helps !!</p>
| 1
|
2016-09-11T15:23:47Z
|
[
"python",
"bash",
"fabric"
] |
Prime Sieve/pairs in a range
| 39,399,396
|
<p>I am trying to write a prime sieve generator that I convert to a list for printing, and then print the primes in a given range. I'm pretty sure my number of pairs is correct, but for some reason I am getting some extra values in my list of primes that aren't prime (I caught this right away because the last value in my output was 3599, which is not prime).
I'm not really sure if I have some kind of logical error, so any help would be awesome.</p>
<pre><code>def sieve(n):
    a = [True] * (n)
    a[0] = a[1] = False
    for (i, isPrime) in enumerate(a):
        if isPrime:
            yield i
            for n in range(i*i, n, i):
                a[n] = False

def pairs(li):
    pair = 0
    for i, x in enumerate(li):
        if i < len(li)-1:
            if li[i] + 2 == li[i+1]:
                pair += 1
    return pair

p_3600 = list(sieve(3600))
ans = [vals for vals in p_3600 if vals > 1600]
print ans
print "pairs:", pairs(ans)
</code></pre>
| 0
|
2016-09-08T20:10:37Z
| 39,399,682
|
<p>Your sieve function is incorrect: the inner loop reuses <code>n</code> as its loop variable, which clobbers the outer <code>n</code> (the sieve limit). After the first pass, <code>n</code> is left at the last multiple visited, so later passes stop too early and composites such as 3599 slip through. Crossing off multiples should also start at the next relevant multiple of the prime, which is <code>prime*prime</code>.</p>
<p>Hence, give the inner loop its own variable (<code>j</code> below) and start it at <code>i*i</code>, not <code>i</code> (starting at <code>i*2</code> would also work but is redundant, since those smaller multiples have already been crossed off by earlier primes):</p>
<pre><code>def sieve(n):
    a = [True] * n
    a[0] = a[1] = False
    for (i, isPrime) in enumerate(a):
        if isPrime:
            yield i
            for j in range(i*i, n, i):
                a[j] = False
</code></pre>
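<p>As a quick sanity check of the fixed generator (expected output written out by hand):</p>
<pre><code>print(list(sieve(30)))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
</code></pre>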
<p>To test your list, I suggest adding a primality check so you can be sure:</p>
<pre><code># make sure it works: time-costly but reassuring
import math
for i in ans:
    si = int(math.sqrt(i)) + 1
    for j in range(2, si):
        if i % j == 0:
            raise Exception("%d is not prime (%d)" % (i, j))
</code></pre>
| 0
|
2016-09-08T20:29:56Z
|
[
"python",
"primes",
"sieve-of-eratosthenes",
"sieve"
] |