Q:
Help me with my backprop implementation in Python
EDIT2:
New training set...
Inputs:
[
[0.0, 0.0],
[0.0, 1.0],
[0.0, 2.0],
[0.0, 3.0],
[0.0, 4.0],
[1.0, 0.0],
[1.0, 1.0],
[1.0, 2.0],
[1.0, 3.0],
[1.0, 4.0],
[2.0, 0.0],
[2.0, 1.0],
[2.0, 2.0],
[2.0, 3.0],
[2.0, 4.0],
[3.0, 0.0],
[3.0, 1.0],
[3.0, 2.0],
[3.0, 3.0],
[3.0, 4.0],
[4.0, 0.0],
[4.0, 1.0],
[4.0, 2.0],
[4.0, 3.0],
[4.0, 4.0]
]
Outputs:
[
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[0.0],
[1.0],
[1.0],
[0.0],
[0.0],
[0.0],
[1.0],
[1.0]
]
EDIT1:
I have updated the question with my latest code. I fixed a few minor issues but I am still getting the same output for all input combinations after the network has learned.
Here is the backprop algorithm explained: Backprop algorithm
Yes, this is homework, just to make that clear right at the beginning.
I am supposed to implement a simple backpropagation algorithm on a simple neural network.
I have chosen Python as the language for this task, and I have chosen a neural network like this:
3 layers: 1 input, 1 hidden, 1 output layer:
O   O
          O
O   O
There is an integer on both input neurons and 1 or 0 on the output neuron.
Here is my entire implementation (a bit long). Below it I will pick out shorter relevant snippets where I think an error could be located:
import os
import math
import Image
import random
from random import sample

#------------------------------ class definitions

class Weight:
    def __init__(self, fromNeuron, toNeuron):
        self.value = random.uniform(-0.5, 0.5)
        self.fromNeuron = fromNeuron
        self.toNeuron = toNeuron
        fromNeuron.outputWeights.append(self)
        toNeuron.inputWeights.append(self)
        self.delta = 0.0  # delta value, this will accumulate and after each training cycle will be used to adjust the weight value

    def calculateDelta(self, network):
        self.delta += self.fromNeuron.value * self.toNeuron.error

class Neuron:
    def __init__(self):
        self.value = 0.0       # the output
        self.idealValue = 0.0  # the ideal output
        self.error = 0.0       # error between output and ideal output
        self.inputWeights = []
        self.outputWeights = []

    def activate(self, network):
        x = 0.0
        for weight in self.inputWeights:
            x += weight.value * weight.fromNeuron.value
        # sigmoid function
        if x < -320:
            self.value = 0
        elif x > 320:
            self.value = 1
        else:
            self.value = 1 / (1 + math.exp(-x))

class Layer:
    def __init__(self, neurons):
        self.neurons = neurons

    def activate(self, network):
        for neuron in self.neurons:
            neuron.activate(network)

class Network:
    def __init__(self, layers, learningRate):
        self.layers = layers
        self.learningRate = learningRate  # the rate at which the network learns
        self.weights = []
        for hiddenNeuron in self.layers[1].neurons:
            for inputNeuron in self.layers[0].neurons:
                self.weights.append(Weight(inputNeuron, hiddenNeuron))
            for outputNeuron in self.layers[2].neurons:
                self.weights.append(Weight(hiddenNeuron, outputNeuron))

    def setInputs(self, inputs):
        self.layers[0].neurons[0].value = float(inputs[0])
        self.layers[0].neurons[1].value = float(inputs[1])

    def setExpectedOutputs(self, expectedOutputs):
        self.layers[2].neurons[0].idealValue = expectedOutputs[0]

    def calculateOutputs(self, expectedOutputs):
        self.setExpectedOutputs(expectedOutputs)
        self.layers[1].activate(self)  # activation function for hidden layer
        self.layers[2].activate(self)  # activation function for output layer

    def calculateOutputErrors(self):
        for neuron in self.layers[2].neurons:
            neuron.error = (neuron.idealValue - neuron.value) * neuron.value * (1 - neuron.value)

    def calculateHiddenErrors(self):
        for neuron in self.layers[1].neurons:
            error = 0.0
            for weight in neuron.outputWeights:
                error += weight.toNeuron.error * weight.value
            neuron.error = error * neuron.value * (1 - neuron.value)

    def calculateDeltas(self):
        for weight in self.weights:
            weight.calculateDelta(self)

    def train(self, inputs, expectedOutputs):
        self.setInputs(inputs)
        self.calculateOutputs(expectedOutputs)
        self.calculateOutputErrors()
        self.calculateHiddenErrors()
        self.calculateDeltas()

    def learn(self):
        for weight in self.weights:
            weight.value += self.learningRate * weight.delta

    def calculateSingleOutput(self, inputs):
        self.setInputs(inputs)
        self.layers[1].activate(self)
        self.layers[2].activate(self)
        #return round(self.layers[2].neurons[0].value, 0)
        return self.layers[2].neurons[0].value

#------------------------------ initialize objects etc

inputLayer = Layer([Neuron() for n in range(2)])
hiddenLayer = Layer([Neuron() for n in range(100)])
outputLayer = Layer([Neuron() for n in range(1)])
learningRate = 0.5
network = Network([inputLayer, hiddenLayer, outputLayer], learningRate)

# just for debugging, the real training set is much larger
trainingInputs = [
    [0.0, 0.0],
    [1.0, 0.0],
    [2.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
    [2.0, 1.0],
    [0.0, 2.0],
    [1.0, 2.0],
    [2.0, 2.0]
]
trainingOutputs = [
    [0.0],
    [1.0],
    [1.0],
    [0.0],
    [1.0],
    [0.0],
    [0.0],
    [0.0],
    [1.0]
]

#------------------------------ let's train
for i in range(500):
    for j in range(len(trainingOutputs)):
        network.train(trainingInputs[j], trainingOutputs[j])
    network.learn()

#------------------------------ let's check
for pattern in trainingInputs:
    print network.calculateSingleOutput(pattern)
Now, the problem is that after learning the network seems to be returning a float number very close to 0.0 for all input combinations, even those that should be close to 1.0.
I train the network in 100 cycles, in each cycle I do:
For every set of inputs in the training set:
Set network inputs
Calculate outputs by using a sigmoid function
Calculate errors in the output layer
Calculate errors in the hidden layer
Calculate weights' deltas
Then I adjust the weights based on the learning rate and the accumulated deltas.
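The accumulate-then-update scheme described above can be sketched numerically like this (the values are made up for illustration, not taken from the asker's network):

```python
# Hypothetical numbers: one weight sees two training patterns in a cycle.
learning_rate = 0.5
weight = 0.3
delta = 0.0
delta += 0.8 * 0.1    # pattern 1: fromNeuron.value * toNeuron.error
delta += 0.5 * -0.2   # pattern 2
weight += learning_rate * delta  # adjusted once, after the whole cycle
```

The point is that the delta accumulates across every pattern in the cycle, and the weight moves only once per cycle (batch update), not after each pattern.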
Here is my activation function for neurons:
def activationFunction(self, network):
    """
    Calculate an activation function of a neuron which is a sum of all input weights * neurons where those weights start
    """
    x = 0.0
    for weight in self.inputWeights:
        x += weight.value * weight.getFromNeuron(network).value
    # sigmoid function
    self.value = 1 / (1 + math.exp(-x))

This is how I calculate the deltas:

def calculateDelta(self, network):
    self.delta += self.getFromNeuron(network).value * self.getToNeuron(network).error

This is the general flow of my algorithm:

for i in range(numberOfIterations):
    for k, expectedOutput in trainingSet.iteritems():
        coordinates = k.split(",")
        network.setInputs((float(coordinates[0]), float(coordinates[1])))
        network.calculateOutputs([float(expectedOutput)])
        network.calculateOutputErrors()
        network.calculateHiddenErrors()
        network.calculateDeltas()
    oldWeights = network.weights
    network.adjustWeights()
    network.resetDeltas()
    print "Iteration ", i
    j = 0
    for weight in network.weights:
        print "Weight W", weight.i, weight.j, ": ", oldWeights[j].value, " ............ Adjusted value : ", weight.value
        j += j
The last two lines of the output are:
0.552785449458 # this should be close to 1
0.552785449458 # this should be close to 0
It actually returns the same output number for all input combinations.
Am I missing something?
A:
Looks like what you get is nearly the initial state of the Neuron (nearly self.idealValue). Maybe you should not initialize this Neuron before having actual data to provide?
EDIT: OK, I looked a bit deeper into the code and simplified it a bit (I will post the simplified version below). Basically your code has two minor errors (things you probably just overlooked), but they lead to a network that definitely won't work.
You forgot to set the value of expectedOutput in the output layer during the learning phase. Without that the network definitely can't learn anything and will always be stuck with the initial idealValue. (That is the behavior I spotted at first reading.) This one could even have been spotted from your description of the training steps (and probably would have been if you hadn't posted the code; this is one of the rare cases I know of where posting the code actually hid the error instead of making it obvious). You fixed this one after your EDIT1.
When activating the network in calculateSingleOutput, you forgot to activate the hidden layer.
Obviously, either of these two problems will lead to a dysfunctional network.
Once corrected, it works (well, it does in my simplified version of your code).
The errors were not easy to spot because the initial code was much too complicated. You should think twice before introducing new classes or new methods. Not creating enough methods or classes will make code hard to read and to maintain, but creating too many may make it even harder to read and maintain. You have to find the right balance. My personal method to find this balance is to follow code smells and refactoring techniques wherever they lead me. Sometimes adding methods or creating classes, sometimes removing them. It's certainly not perfect but that's what works for me.
Below is my version of the code after some refactoring. I spent about an hour changing your code while always keeping it functionally equivalent. I took that as a good refactoring exercise, as the initial code was really horrible to read. After refactoring it took just 5 minutes to spot the problems.
import os
import math

"""
A simple backprop neural network. It has 3 layers:
    Input layer: 2 neurons
    Hidden layer: 2 neurons
    Output layer: 1 neuron
"""

class Weight:
    """
    Class representing a weight between two neurons
    """
    def __init__(self, value, from_neuron, to_neuron):
        self.value = value
        self.from_neuron = from_neuron
        from_neuron.outputWeights.append(self)
        self.to_neuron = to_neuron
        to_neuron.inputWeights.append(self)
        # delta value, this will accumulate and after each training cycle
        # will be used to adjust the weight value
        self.delta = 0.0

class Neuron:
    """
    Class representing a neuron.
    """
    def __init__(self):
        self.value = 0.0        # the output
        self.idealValue = 0.0   # the ideal output
        self.error = 0.0        # error between output and ideal output
        self.inputWeights = []  # weights that end in the neuron
        self.outputWeights = [] # weights that start in the neuron

    def activate(self):
        """
        Calculate an activation function of a neuron which is
        a sum of all input weights * neurons where those weights start
        """
        x = 0.0
        for weight in self.inputWeights:
            x += weight.value * weight.from_neuron.value
        # sigmoid function
        self.value = 1 / (1 + math.exp(-x))

class Network:
    """
    Class representing a whole neural network. Contains layers.
    """
    def __init__(self, layers, learningRate, weights):
        self.layers = layers
        self.learningRate = learningRate  # the rate at which the network learns
        self.weights = weights

    def training(self, entries, expectedOutput):
        for i in range(len(entries)):
            self.layers[0][i].value = entries[i]
        for i in range(len(expectedOutput)):
            self.layers[2][i].idealValue = expectedOutput[i]
        for layer in self.layers[1:]:
            for n in layer:
                n.activate()
        for n in self.layers[2]:
            error = (n.idealValue - n.value) * n.value * (1 - n.value)
            n.error = error
        for n in self.layers[1]:
            error = 0.0
            for w in n.outputWeights:
                error += w.to_neuron.error * w.value
            n.error = error
        for w in self.weights:
            w.delta += w.from_neuron.value * w.to_neuron.error

    def updateWeights(self):
        for w in self.weights:
            w.value += self.learningRate * w.delta

    def calculateSingleOutput(self, entries):
        """
        Calculate a single output for input values.
        This will be used to debug the already learned network after training.
        """
        for i in range(len(entries)):
            self.layers[0][i].value = entries[i]
        # activation function for output layer
        for layer in self.layers[1:]:
            for n in layer:
                n.activate()
        print self.layers[2][0].value

#------------------------------ initialize objects etc

neurons = [Neuron() for n in range(5)]

w1 = Weight(-0.79, neurons[0], neurons[2])
w2 = Weight( 0.51, neurons[0], neurons[3])
w3 = Weight( 0.27, neurons[1], neurons[2])
w4 = Weight(-0.48, neurons[1], neurons[3])
w5 = Weight(-0.33, neurons[2], neurons[4])
w6 = Weight( 0.09, neurons[3], neurons[4])
weights = [w1, w2, w3, w4, w5, w6]

inputLayer  = [neurons[0], neurons[1]]
hiddenLayer = [neurons[2], neurons[3]]
outputLayer = [neurons[4]]

learningRate = 0.3
network = Network([inputLayer, hiddenLayer, outputLayer], learningRate, weights)

# just for debugging, the real training set is much larger
trainingSet = [([0.0, 0.0], [0.0]),
               ([1.0, 0.0], [1.0]),
               ([2.0, 0.0], [1.0]),
               ([0.0, 1.0], [0.0]),
               ([1.0, 1.0], [1.0]),
               ([2.0, 1.0], [0.0]),
               ([0.0, 2.0], [0.0]),
               ([1.0, 2.0], [0.0]),
               ([2.0, 2.0], [1.0])]

#------------------------------ let's train
for i in range(100):  # training iterations
    for entries, expectedOutput in trainingSet:
        network.training(entries, expectedOutput)
    network.updateWeights()

# network has learned, let's check
network.calculateSingleOutput((1, 0))  # this should be close to 1
network.calculateSingleOutput((0, 0))  # this should be close to 0
By the way, there is still a third problem I didn't correct (but it's easy to correct). If x is too big or too small (> 320 or < -320), math.exp() will raise an exception. This will occur if you run more training iterations, say a few thousand. The simplest way I see to correct this is to check the value of x and, if it's too big or too small, set the Neuron's value to 0 or 1 depending on the case, which is the limit value.
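That guard might look like the following sketch (it mirrors the clamping the asker already added in the EDIT2 version of the question code):

```python
import math

def sigmoid(x):
    # Clamp extreme activations: math.exp of a large argument overflows
    # a C double, and beyond about +/-320 the sigmoid has saturated anyway.
    if x < -320:
        return 0.0
    if x > 320:
        return 1.0
    return 1.0 / (1.0 + math.exp(-x))
```

With this in place the network can run for thousands of iterations without math.exp blowing up.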
Tags: algorithm, math, neural_network, python
Source: stackoverflow_0003988238_algorithm_math_neural_network_python.txt
Q:
Matplotlib: Curve touches axis boundary. How to solve this?
I drew this graph using matplotlib using the following code.
import matplotlib
import matplotlib.pyplot as plt
x = [450.0, 450.0, 438.0, 450.0, 420.0, 432.0, 416.0, 406.0, 432.0, 400.0]
y = [328.90000000000003, 327.60000000000031, 305.90000000000146, 285.2000000000013, 276.0, 264.0, 244.0, 236.0, 233.5, 236.0]
z = [2,4,6,8,10,12,14,16,18,20]
plt.plot(z,x,'-',lw=3)
plt.plot(z,y,'--',lw=3)
plt.show()
As you can see the graph of x touches the axis boundary and does not look good. How can I change this?
A:
Use axis:
plt.plot(z,x,'-',lw=3)
plt.plot(z,y,'--',lw=3)
plt.axis([2,20,100,500])
plt.show()
Or, use ylim:
plt.ylim([100,500])
A:
I tried your code and the graph did not overlap. Anyway, try to add small margins to the plot:
plt.margins(0,0.02)
Also you may try to add argument clip_on=True to plot function call (but it should be set to True by default).
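For what it's worth, a self-contained version of the margins suggestion using the question's data might look like this (the Agg backend is assumed so it runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt

x = [450.0, 450.0, 438.0, 450.0, 420.0, 432.0, 416.0, 406.0, 432.0, 400.0]
y = [328.9, 327.6, 305.9, 285.2, 276.0, 264.0, 244.0, 236.0, 233.5, 236.0]
z = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

plt.plot(z, x, '-', lw=3)
plt.plot(z, y, '--', lw=3)
plt.margins(0, 0.02)  # no x padding, 2% y padding keeps curves off the border
ymin, ymax = plt.ylim()
```

After the margins call the autoscaled y-limits extend slightly beyond the data, so the top curve no longer sits on the axis boundary.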
Tags: matplotlib, python
Source: stackoverflow_0003988183_matplotlib_python.txt
Q:
django template "file name too long"
I'm very new to Python and Django so maybe someone can point me in the right direction.
I have the following url.py line
url(r'^$', direct_to_template,
{'template':'index.html',
'extra_context':{'featured_actors': lambda: User.objects
.annotate(avatars_nb=Count('avatar'))
.filter(actor_profile__is_featured=True, avatars_nb__gt=0)
.order_by('?')[:4]},
}, name='index'),
All this was working perfectly fine for a long time but for no reason that I can see all of a sudden I'm getting this template error.
TemplateSyntaxError at /
Caught an exception while rendering: (36, 'File name too long')
On line 70
66 {% if featured_actors|length %}
67 <div id="featured">
68 <h2>Featured Actors: </h2>
69 <ul>
70 {% for actor in featured_actors %}
71 <li>
72 <a href="{% url public_profile actor.username %}">
73 <img src="{% avatar_itself_url actor.avatar_set.all.0 200 %}" alt="{{ actor.profile.firstname }} {{ actor.profile.lastname }}" style="max-width:140px" height="200"/>
74 </a>
75 </li>
76 {% endfor %}
What is the best way to debug this?
UPDATE
def avatar_url(self, size):
    return self.avatar.storage.url(self.avatar_name(size))
I think I found a bit of the problem, one of the user profiles is also giving the same error. So I think it must be a avatar/image path for him that is too long. I'm trying to narrow it down...
A:
It's possible that the image path {% avatar_itself_url actor.avatar_set.all.0 200 %} is too long. Can you delete the line with <img ... and see if the template renders?
If the above renders, from the python manage.py shell, can you verify the length of your image path? Is the length greater than 255 chars?
ANSWER TO COMMENT
Your image path is too long, by that I mean:
<img src="/this/is/a/very/long/path/which/exceeds/255/characters/something.png" />
The above is not 255 chars long but you get the idea. The above src path might be very long. Try to find out what that path is and calculate its length. What does the implementation of avatar_itself_url look like? How about the unicode of Avatar? What does it return? Do you have an Avatar with a very long name?
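As a quick way to hunt for the culprit, a sketch like this flags over-long names (the paths are hypothetical; note that on most filesystems the 255-character limit applies to each path component, i.e. the file name itself, rather than to the full path):

```python
import os

MAX_NAME_LEN = 255  # per-component limit on ext3/ext4 and most filesystems

def too_long(paths):
    # Return the paths whose final component exceeds the limit.
    return [p for p in paths if len(os.path.basename(p)) > MAX_NAME_LEN]

# hypothetical avatar paths, one fine and one over the limit
paths = [
    "/media/avatars/ok.png",
    "/media/avatars/" + "a" * 300 + ".png",
]
```

Running such a check over all stored avatar names from the shell should point straight at the offending record.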
Replicating the error msg
Here's how you can replicate the error msg from python. Run the following in a python script:
long_filename = 'a' * 256
fp = open(long_filename, 'w')
fp.close()
Running it should raise IOError: [Errno 36] File name too long.
Displaying the image path
Can you replace the img tag from the html by just its content: {% avatar_itself_url actor.avatar_set.all.0 200 %}? Instead of seeing the image, you should see the path of the image. Eyeballing the one longer than 256 chars shouldn't be an issue.
Tags: django, django_urls, python
Source: stackoverflow_0003988165_django_django_urls_python.txt
Q:
weird range value in the colorbar, matplotlib
I am a new user of Python & matplotlib; this might be a simple question but I searched the internet for hours and couldn't find a solution.
I am plotting precipitation data which is in the NetCDF format. What I find weird is that the data doesn't have any negative values in it (I checked that many times, just to make sure), but the value in the colorbar starts with a negative value (like -0.0000312). It doesn't make sense because I don't do any manipulation of the data other than selecting a part of it from the big file and plotting it.
So my code doesn't have much to it. Here is the code:
from mpl_toolkits.basemap import Basemap
import numpy as np
import matplotlib.pyplot as plt
from netCDF4 import Dataset
cd progs
f=Dataset('V21_GPCP.1979-2009.nc')
lats=f.variables['lat'][:]
lons=f.variables['lon'][:]
prec=f.variables['PREC'][:]
la=lats[31:52]
lo=lons[18:83]
pre=prec[0,31:52,18:83]
m = Basemap(width=06.e6,height=05.e6,projection='gnom',lat_0=15.,lon_0=80.)
x, y = m(*np.meshgrid(lo,la))
m.drawcoastlines()
m.drawmapboundary(fill_color='lightblue')
m.drawparallels(np.arange(-90.,120.,5.),labels=[1,0,0,0])
m.drawmeridians(np.arange(0.,420.,5.),labels=[0,0,0,1])
cs=m.contourf(x,y,pre,50,cmap=plt.cm.jet)
plt.colorbar()
The output that I got for that code was a beautiful plot, with the colorbar starting from the value -0.00001893 and the rest positive values, which I believe are correct. It's just the minimum value that's bugging me.
I would like to know:
Is there anything wrong in my code? Because I know that the data is right.
Is there a way to manually change the value to 0?
Is it right for the values in the colorbar to change every time we run the code? For the same data, the next time I run the code the values go like this: -0.00001893, 2.00000000, 4.00000000, 6.00000000, etc.
I want to customize them to 0.0, 2.0, 4.0, 6.0, etc.
Thanks,
Vaishu
A:
Yes, you can manually format everything about the colorbar. See this:
import matplotlib.colors as mc
import matplotlib.pyplot as plt
plt.imshow(X, norm=mc.Normalize(vmin=0))
plt.colorbar(ticks=[0,2,4,6], format='%0.2f')
Many plotting functions including imshow, contourf, and others include a norm argument that takes a Normalize object. You can set the vmin or vmax attributes of that object to adjust the corresponding values of the colorbar.
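For intuition, the vmin/vmax mapping that Normalize performs is essentially this linear map into [0, 1] (a simplified pure-Python sketch; the real class also handles arrays, masking, and clipping):

```python
def normalize(value, vmin, vmax):
    # Same linear map matplotlib.colors.Normalize applies to a data value
    # before the colormap lookup: vmin -> 0.0, vmax -> 1.0.
    return (value - vmin) / float(vmax - vmin)
```

So pinning vmin=0 guarantees the bottom of the colorbar is exactly 0 regardless of tiny negative artifacts in the contour levels.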
colorbar takes the ticks and format arguments to adjust which ticks to display and how to display them.
Tags: colorbar, matplotlib, netcdf, python
Source: stackoverflow_0003970072_colorbar_matplotlib_netcdf_python.txt
Q:
Registering a Python program as a Windows Server 2003 R2 Service
I wrote a Python script that I need to have installed on Windows Server 2003 R2, that will run when a specific directory is changed in any way (new files, deleted files, etc). I believe I need to register this as a system service, in order to listen for that, but I'm really not sure.
So, my question is this: does such a script need to be registered as a service, and if so, how do I go about doing that?
Thanks.
A:
I believe your program will have to watch the directory for changes and act accordingly. Alternatively, you could have a separate program watch the directory and then invoke your script, but this is essentially the same.
Tim Golden has an article here which discusses directory watching using python and the win32 api.
After you get that working, it would be very appropriate to have your program run as a service. This has been covered before.
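Tim Golden's article relies on the win32 directory-change API; if you just want a portable sketch of the idea for testing, a polling loop that compares directory snapshots works too (the helper name is made up):

```python
import os
import tempfile

def watch_once(path, known):
    """One polling step: report files added/removed since the last snapshot."""
    current = set(os.listdir(path))
    return current, current - known, known - current

# demo against a throwaway temporary directory
d = tempfile.mkdtemp()
snapshot, _, _ = watch_once(d, set())
open(os.path.join(d, "new.txt"), "w").close()
snapshot, added, removed = watch_once(d, snapshot)
assert added == {"new.txt"} and removed == set()
```

A real service would call `watch_once` in a loop with a short sleep; the win32 API from the article is more efficient because it blocks until a change happens instead of polling.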
| Registering a Python program as a Windows Server 2003 R2 Service | I wrote a Python script that I need to have installed on Windows Server 2003 R2, that will run when a specific directory is changed in any way (new files, deleted files, etc). I believe I need to register this as a system service, in order to listen for that, but I'm really not sure.
So, my question is this: does such a script need to be registered as a service, and if so, how do I go about doing that?
Thanks.
| [
"I believe your program will have to watch the directory for changes and act according. Alternatively, you could have a separate program watch the directory and then invoke your script, but this is essentially the same.\nTim Golden has an article here which discusses directory watching using python and the win32 a... | [
1
] | [] | [] | [
"python",
"windows_server_2003",
"windows_services"
] | stackoverflow_0003988599_python_windows_server_2003_windows_services.txt |
Q:
Python: Regular expression to change image filename
I've got a string that looks like this
String a is ACdA(a = %b, ccc= 2r2)
String b is \ewfsd\ss.jpg
Expected outputs:
ACdA(a = %b, ccc= 2r2, b_holder = \ewfsd\ss.jpg)
It adds the string b to the end of string a, that's it! But be careful of the ")"
"b_holder " is a hard-coded string; it's absolutely the same in all cases and won't be changed.
Update: If regular expression is not a best choice here, please suggest a best way to do.
A:
Is
a = "ACdA(a = %b, ccc= 2r2)"
b = "\ewfsd\ss.jpg"
print a[:-1] + ', b_holder = ' + b + ')'
what you had in mind?
Most days of the week, I personally prefer
print '%s, b_holder = %s)' % (a[:-1], b)
I recognize I'm likely in the minority in this particular regard.
There certainly are other implementations, some of them RE-based. I favor the ones above, given what the original questioner has expressed.
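For completeness, here is one RE-based variant: anchor on the closing parenthesis at the end of the string and splice the filename in before it. A lambda replacement is used so the backslashes in string b are not interpreted as regex escape sequences:

```python
import re

a = "ACdA(a = %b, ccc= 2r2)"
b = r"\ewfsd\ss.jpg"

# Replace the final ")" with ", b_holder = <b>)"; the lambda keeps b literal.
result = re.sub(r"\)\s*$", lambda m: ", b_holder = " + b + ")", a)
assert result == r"ACdA(a = %b, ccc= 2r2, b_holder = \ewfsd\ss.jpg)"
```

The slicing approach above is simpler when the string is guaranteed to end with `)`; the regex version is slightly more defensive about trailing whitespace.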
| Python: Regular expression to change image filename | I've got a string that looks like this
String a is ACdA(a = %b, ccc= 2r2)
String b is \ewfsd\ss.jpg
Expected outputs:
ACdA(a = %b, ccc= 2r2, b_holder = \ewfsd\ss.jpg)
It adds the string b to the end of string a, that's it! But be careful of the ")"
"b_holder " is a hard-coded string; it's absolutely the same in all cases and won't be changed.
Update: If regular expression is not a best choice here, please suggest a best way to do.
| [
"Is\na = \"ACdA(a = %b, ccc= 2r2)\"\nb = \"\\ewfsd\\ss.jpg\"\nprint a[:-1] + ', b_holder = ' + b + ')'\n\nwhat you had in mind?\nMost days of the week, I personally prefer\nprint '%s, b_holder = %s)' % (a[:-1], b)\n\nI recognize I'm likely in the minority in this particular regard.\nThere certainly are other implem... | [
1
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0003988920_python_regex.txt |
Q:
Python set and the "in" keyword usage issue
I'm trying to work with a set of django models in an external script. I'm querying the database at a preset interval to get the items that meet the query. When my external script is processing them, it takes a while and I may get the same results in the query if the processing hasn't updated the model yet. I figured I could use a set or list to store the items processing and check each model from the query result to ensure it isn't currently processing. When trying this though, it seems the in keyword always returns True. Any thoughts?
(Python 2.6 on ubuntu 10.10)
>>> t
<SomeDjangoModel: Title1>
>>> v
<SomeDjangoModel: Title2>
>>> x
<SomeDjangoModel: Title3>
>>> items
set([<SomeDjangoModel: Title3>, <SomeDjangoModel: Title1>])
>>> t in items
True
>>> x in items
True
>>> v in items
True
>>> items
set([<SomeDjangoModel: Title3>, <SomeDjangoModel: Title1>])
>>>
A:
Python sets require that objects implement __eq__ and __hash__ appropriately.
I looked at django.db.models.base.Model (link) and saw that it defines these methods in terms of the model's PK:
def __eq__(self, other):
return isinstance(other, self.__class__) and self._get_pk_val() == other._get_pk_val()
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self._get_pk_val())
So it's not a surprise that seemingly distinct objects are considered "equal", it is because they have their PKs initialized to some default value (e.g. None).
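The mechanism can be demonstrated without Django at all, using a plain-Python stand-in that mimics the `__eq__`/`__hash__` definitions quoted above (the class name is made up):

```python
class FakeModel:
    """Plain-Python stand-in mimicking Django Model equality/hashing by PK."""
    def __init__(self, pk=None):
        self.pk = pk
    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.pk == other.pk
    def __hash__(self):
        return hash(self.pk)

a, b, c = FakeModel(), FakeModel(), FakeModel(7)
assert a == b          # both have pk=None, so they compare equal
assert a in {b}        # and hash(None) == hash(None), so `in` finds them
assert c not in {a, b} # a saved (pk-bearing) object is distinguishable
```

So if the objects being tracked ever end up with the same (or default) PK value, membership tests on a set will treat them as identical even though they look like distinct model instances.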
| Python set and the "in" keyword usage issue | I'm trying to work with a set of django models in an external script. I'm querying the database at a preset interval to get the items that meet the query. When my external script is processing them, it takes a while and I may get the same results in the query if the processing hasn't updated the model yet. I figured I could use a set or list to store the items processing and check each model from the query result to ensure it isn't currently processing. When trying this though, it seems the in keyword always returns True. Any thoughts?
(Python 2.6 on ubuntu 10.10)
>>> t
<SomeDjangoModel: Title1>
>>> v
<SomeDjangoModel: Title2>
>>> x
<SomeDjangoModel: Title3>
>>> items
set([<SomeDjangoModel: Title3>, <SomeDjangoModel: Title1>])
>>> t in items
True
>>> x in items
True
>>> v in items
True
>>> items
set([<SomeDjangoModel: Title3>, <SomeDjangoModel: Title1>])
>>>
| [
"Python sets require that objects implement __eq__ and __hash__ appropriately.\nI looked at django.db.models.base.Model (link) and saw that it defines these methods in terms of the model's PK:\n def __eq__(self, other):\n return isinstance(other, self.__class__) and self._get_pk_val() == other._get_pk_val... | [
2
] | [] | [] | [
"django",
"python",
"set"
] | stackoverflow_0003988204_django_python_set.txt |
Q:
Copy from a file until a certain marker string is found
I am trying to write some code which will open List1.txt and copy the contents up until it sees the string 'John smith' to List2.txt.
This is what I have so far:
F=open('C:\T\list.txt','r').readlines()
B=open('C:\T\list2.txt','w')
BB=open('C:\T\list2.txt','r').readlines()
while BB.readlines() == 'John smith':
B.writelines(F)
Here is an example of what List1.txt could contain:
Natly molar
Jone rock
marin seena
shan lra
John smith
Barry Bloe
Sara bloe`
However, it doesn't seem to be working. What am I doing wrong?
A:
from itertools import takewhile
with open('List1.txt') as fin, open('List2.txt', 'w') as fout:
lines = takewhile(lambda x : x != 'John smith\n', fin)
fout.writelines(lines)
A:
F=open('C:\T\list1.txt','r')
B=open('C:\T\list2.txt','w')
for l in F: #for each line in list1.txt
if l.strip() == 'John Smith': #l includes newline, so strip it
break
B.write(l)
F.close()
B.close()
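The takewhile approach from the first answer can be verified end-to-end in memory with io.StringIO standing in for the real files; using `strip()` in the predicate also makes it robust to a missing newline on the last line:

```python
import io
from itertools import takewhile

src = io.StringIO("Natly molar\nJone rock\nJohn smith\nBarry Bloe\n")
dst = io.StringIO()
# Copy lines until (but not including) the marker line.
dst.writelines(takewhile(lambda line: line.strip() != "John smith", src))
assert dst.getvalue() == "Natly molar\nJone rock\n"
```

Swap the StringIO objects for `open(...)` file handles to run this against real files.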
| Copy from a file until a certain marker string is found | I am trying to write some code which will open List1.txt and copy the contents up until it sees the string 'John smith' to List2.txt.
This is what I have so far:
F=open('C:\T\list.txt','r').readlines()
B=open('C:\T\list2.txt','w')
BB=open('C:\T\list2.txt','r').readlines()
while BB.readlines() == 'John smith':
B.writelines(F)
Here is an example of what List1.txt could contain:
Natly molar
Jone rock
marin seena
shan lra
John smith
Barry Bloe
Sara bloe`
However, it doesn't seem to be working. What am I doing wrong?
| [
"from itertools import takewhile\n\nwith open('List1.txt') as fin, open('List2.txt', 'w') as fout:\n lines = takewhile(lambda x : x != 'John smith\\n', fin)\n fout.writelines(lines)\n\n",
"F=open('C:\\T\\list1.txt','r')\nB=open('C:\\T\\list2.txt','w')\nfor l in F: #for each line in list1.txt\n if l.strip... | [
3,
1
] | [] | [] | [
"file_io",
"python"
] | stackoverflow_0003989302_file_io_python.txt |
Q:
Terminate a socket.recv in a thread in Python
To implement a simple protocol in python, I've used a thread to monitor acknowledgements sent by the receiver. To do this, I use a thread in a function
def ackListener(self):
while self.waitingforack:
ack = self.socket_agent.recv(4)
...
exit()
where self.waitingforack is a boolean I set to False when I've received an acknowledgement for every message I've sent.
The problem is that my code is blocking at the self.socket_agent.recv(4) operation, the modification of the value of waitingforack is done too late
Is there a way to force the Thread to terminate when I'm not waiting for ack anymore ?
Thank you
A:
Remember that operations like recv are blocking operations in your case.
So calling this function blocks until you have received the data.
There are a few ways to go about it:
Make socket non blocking
socket.setblocking(0)
Set a timeout value
see : http://docs.python.org/library/socket.html#socket.socket.settimeout
socket.settimeout(value)
Use asynchronous approach
http://docs.python.org/library/asyncore.html
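As a concrete sketch of the timeout approach: a short timeout turns the blocking recv into a polling loop, so the stop flag gets re-checked regularly. The `state` dict here stands in for the original `self.waitingforack` attribute, and the helper names are made up:

```python
import socket
import threading
import time

def ack_listener(sock, state):
    sock.settimeout(0.5)          # re-check the stop flag at least twice a second
    acks = []
    while state["waitingforack"]:
        try:
            data = sock.recv(4)
        except socket.timeout:
            continue              # no data yet: loop back and re-check the flag
        if not data:
            break                 # peer closed the connection
        acks.append(data)
    return acks

# demo: run the listener against one end of a local socket pair
a, b = socket.socketpair()
state = {"waitingforack": True}
result = {}
t = threading.Thread(target=lambda: result.setdefault("acks", ack_listener(a, state)))
t.start()
b.sendall(b"ACK1")                # the listener picks this up...
time.sleep(0.2)
state["waitingforack"] = False    # ...then exits within one timeout period
t.join(3)
a.close()
b.close()
```

The thread now terminates within roughly one timeout interval of the flag being cleared, instead of hanging forever in recv.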
| Terminate a socket.recv in a thread in Python | To implement a simple protocol in python, I've used a thread to monitor acknowledgements sent by the receiver. To do this, I use a thread in a function
def ackListener(self):
while self.waitingforack:
ack = self.socket_agent.recv(4)
...
exit()
where self.waitingforack is a boolean I set to False when I've received an acknowledgement for every message I've sent.
The problem is that my code is blocking at the self.socket_agent.recv(4) operation, the modification of the value of waitingforack is done too late
Is there a way to force the Thread to terminate when I'm not waiting for ack anymore ?
Thank you
| [
"Remember that these operations like recv are blocking operations in your case.\nSo calling this function blocks unless you have received the data.\nThere are two ways to go about it:\nMake socket non blocking\nsocket.setblocking(0)\n\nSet a timeout value\nsee : http://docs.python.org/library/socket.html#socket.soc... | [
4
] | [] | [] | [
"multithreading",
"python",
"sockets"
] | stackoverflow_0003990101_multithreading_python_sockets.txt |
Q:
XML "not well-formed (invalid token)" error from FlickrApi
First of all : I'm using the well known (and tested I suppose) flickrapi. I was testing synchronization of flickr photos with my project and everything worked fine till I reached some specific files. Then python's xml parser failed to parse xml to string (and error from topic). Debug gave me line and column in the xml, so I've exported it to a file :
<exif tagspace="IFD0" tagspaceid="0" tag="Copyright" label="Copyright">
<raw>©Etienne-Follet.com</raw>
<clean>©Etienne-Follet.com</clean>
</exif>
Error is in line <clean>©Etienne-Follet, in column <cleanerror>©... . Can anyone see anything strange in this line? What's more, every single photo from this set/author crashes. Maybe it is somehow connected with the special characters? Here's the link to a sample set that fails to parse:
http://www.flickr.com/photos/rte-france/sets/72157623592737564/
A:
Unsolvable: http://bitbucket.org/sybren/flickrapi/issue/11/encoding-issues . Looks like this is a flickr's side issue and they're not going to solve it quickly.
A:
I suppose that you have to encode everything in UTF-8, so make sure it is so.
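The garbled "©" in the question is a strong hint that Latin-1 bytes were served but parsed as UTF-8. A small stdlib sketch reproduces the exact "not well-formed (invalid token)" failure and shows one workaround (reinterpreting the bytes as Latin-1 before parsing); whether that is the right fix for Flickr's feed is an assumption:

```python
import xml.etree.ElementTree as ET

# 0xA9 is the copyright sign in Latin-1, but it is not valid UTF-8,
# which is exactly the kind of byte that makes expat choke.
raw = b"<clean>\xa9Etienne-Follet.com</clean>"
try:
    ET.fromstring(raw)
    parsed = True
except ET.ParseError:
    parsed = False
assert not parsed

# Workaround: reinterpret the bytes as Latin-1 before handing them to the parser.
elem = ET.fromstring(raw.decode("latin-1"))
assert elem.text == "\xa9Etienne-Follet.com"
```

If the server really is mislabeling its encoding, the cleanest fix is on the server side; the client-side decode is a band-aid.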
| XML "not well-formed (invalid token)" error from FlickrApi | First of all : I'm using the well known (and tested I suppose) flickrapi. I was testing synchronization of flickr photos with my project and everything worked fine till I reached some specific files. Then python's xml parser failed to parse xml to string (and error from topic). Debug gave me line and column in the xml, so I've exported it to a file :
<exif tagspace="IFD0" tagspaceid="0" tag="Copyright" label="Copyright">
<raw>©Etienne-Follet.com</raw>
<clean>©Etienne-Follet.com</clean>
</exif>
Error is in line <clean>©Etienne-Follet in column <cleanerror>©... . Can anyone see anything strange in this line ? What more, every single photo from this set/author crashes. Maybe it is somehow connected with the special characters ? Here's the link to sample set that fails to parse :
http://www.flickr.com/photos/rte-france/sets/72157623592737564/
| [
"Unsolvable: http://bitbucket.org/sybren/flickrapi/issue/11/encoding-issues . Looks like this is a flickr's side issue and they're not going to solve it quickly.\n",
"I suppose that you have to encode everthing in UTF-8, so make sure it is so. \n"
] | [
1,
0
] | [] | [] | [
"flickr",
"python",
"xml",
"xml_parsing"
] | stackoverflow_0003987032_flickr_python_xml_xml_parsing.txt |
Q:
Insert data into textctrl
To insert data into a textctrl I do this:
num_items = self.lc_sources.GetItemCount()
self.lc_sources.SetStringItem(num_items, 1, data)
But the problem is that the insertion is only done after the data has finished being processed, and I need to do the insertion in real time.
How can I do this, please?
A:
I usually do something like this when I need to update a ListCtrl:
self.list_ctrl.InsertStringItem(index, data)
self.list_ctrl.SetStringItem(index, 1, moreData)
Lately, I've been using ObjectListView instead of the ListCtrl because I think it's just easier to use and more flexible too: http://objectlistview.sourceforge.net/python/
| Insert data into textctrl | To insert data into a textctrl I do this:
num_items = self.lc_sources.GetItemCount()
self.lc_sources.SetStringItem(num_items, 1, data)
But the problem is that the insertion is only done after the data has finished being processed, and I need to do the insertion in real time.
How can I do this, please?
| [
"I usually do something like this when I need to update a ListCtrl:\nself.list_ctrl.InsertStringItem(index, data)\nself.list_ctrl.SetStringItem(index, 1, moreData)\nLately, I've been using ObjectListView instead of the ListCtrl because I think it's just easier to use and more flexible too: http://objectlistview.sou... | [
1
] | [] | [] | [
"python",
"wxpython",
"wxwidgets"
] | stackoverflow_0003989989_python_wxpython_wxwidgets.txt |
Q:
How to pass object to a function in this scenario
I have the following piece of code:
NameX.functionA(functionB(Dictionary["___"]))
Instead of _ I would like to make a reference to NameX in the form of a string, so that the program interprets it as
NameX.functionA(functionB(Dictionary["NameX"]))
How can I do this? I tried to use str(self), but it is clearly wrong.
Thanks
A:
Is NameX.__name__ perhaps what you want?
A:
You can use
Name.__name__
on the class itself, and
Name.__class__.__name__
on an instance (where Name is the instance).
A:
Abusive but it works:
>>> def getvarname(var):
d = globals()
for n in d:
if d[n] is var:
return n
return None
>>> class NameX: pass
>>> getvarname(NameX)
'NameX'
Works on things that aren't just classes, too:
>>> inst1 = NameX()
>>> getvarname(inst1)
'inst1'
You might be shot if this ends up in real code, though.
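Tying `__name__` back to the original snippet: the class can key into the dictionary by its own name. The bodies of functionA/functionB below are invented purely for illustration:

```python
class NameX:
    @classmethod
    def functionA(cls, value):
        # hypothetical stand-in for the real functionA
        return "A(%s)" % value

def functionB(entry):
    # hypothetical stand-in for the real functionB
    return entry.upper()

Dictionary = {"NameX": "payload"}

# NameX.__name__ == "NameX", so no string literal needs to be repeated
assert NameX.functionA(functionB(Dictionary[NameX.__name__])) == "A(PAYLOAD)"
```

The same pattern works from inside a method via `self.__class__.__name__`.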
| How to pass object to a function in this scenario | I have the following piece of code:
NameX.functionA(functionB(Dictionary["___"]))
Instead of _ I would like to make a reference to NameX in the form of a string, so that the program interprets it as
NameX.functionA(functionB(Dictionary["NameX"]))
How can I do this? I tried to use str(self), but it is clearly wrong.
Thanks
| [
"Is NameX.__name__ perhaps what you want?\n",
"You can use\nName.__name__\n\non an uninitialized object and\nName.__class__.__name__\n\non an initialized object.\n",
"Abusive but it works:\n>>> def getvarname(var):\n d = globals()\n for n in d:\n if d[n] is var:\n return n\n return No... | [
3,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0003990562_python.txt |
Q:
python sqlite question - Insert method
In the code below row is a tuple of 200 elements (numbers) and listOfVars is a tuple of 200 strings that are variable names in the testTable. tupleForm is a list of tuples, 200 elements in each tuple.
The following code does not work. It returns a syntax error:
for row in tupleForm:
cmd = '''INSERT INTO testTable listOfVars values row'''
cur.execute(cmd)
However, the following works fine. Can someone explain why? I am finding sqlite so non-intuitive.
for row in tupleForm:
cmd = '''INSERT INTO testTable %s values %s'''%(listOfVars, row)
cur.execute(cmd)
Thanks for your help.
A:
insert_query = '''INSERT INTO testTable ({0}) VALUES ({1})'''.format(
(','.join(listOfVars)), ','.join('?'*len(listOfVars)))
cur.executemany(insert_query, tupleForm)
Please do not use normal string interpolation for database queries.
We are fortunate to have the DB-API which explains in detail how to do what you require.
Edit:
A couple of quick reasons why this method is better than your 2nd attempt:
by using the built-in string-escaping tools we avoid SQL injection attacks, and
executemany accepts a list of tuples as an argument.
A:
The first attempts to use the literal names listOfVars and row as SQL, which is a syntax error. The second substitutes the str() representation of the tuples into the query, which happens to produce the parenthesized (...) lists in both places.
But both forms are incorrect, since the second doesn't escape values at all. What you should be doing is using a parametrized query.
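Here is a runnable end-to-end sketch of the parametrized approach from the first answer, using an in-memory database and 3 columns instead of 200. Column names cannot be bound as parameters, so they are interpolated once from a trusted list, while the values go through `?` placeholders:

```python
import sqlite3

list_of_vars = ("a", "b", "c")
tuple_form = [(1, 2, 3), (4, 5, 6)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testTable (a INTEGER, b INTEGER, c INTEGER)")

# names interpolated (trusted), values parametrized (escaped by the driver)
query = "INSERT INTO testTable (%s) VALUES (%s)" % (
    ",".join(list_of_vars), ",".join("?" * len(list_of_vars)))
conn.executemany(query, tuple_form)

rows = conn.execute("SELECT a, b, c FROM testTable ORDER BY a").fetchall()
assert rows == [(1, 2, 3), (4, 5, 6)]
```

executemany also saves the per-row Python loop, which matters with 200-element rows.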
| python sqlite question - Insert method | In the code below row is a tuple of 200 elements (numbers) and listOfVars is a tuple of 200 strings that are variable names in the testTable. tupleForm is a list of tuples, 200 elements in each tuple.
The following code does not work. It returns a syntax error:
for row in tupleForm:
cmd = '''INSERT INTO testTable listOfVars values row'''
cur.execute(cmd)
However, the following works fine. Can someone explain why? I am finding sqlite so non-intuitive.
for row in tupleForm:
cmd = '''INSERT INTO testTable %s values %s'''%(listOfVars, row)
cur.execute(cmd)
Thanks for your help.
| [
"insert_query = '''INSERT INTO testTable ({0}) VALUES ({1})'''.format(\n (','.join(listOfVars)), ','.join('?'*len(listOfVars)))\ncur.executemany(insert_query, tupleForm)\n\nPlease do not use normal string interpolation for database queries.\nWe are fortunate to have the DB-API which explains in detail... | [
7,
0
] | [] | [] | [
"python",
"sqlite"
] | stackoverflow_0003990769_python_sqlite.txt |
Q:
How to merge several .csv files horizontally with python?
I've several .csv files (~10) and need to merge them together into a single file horizontally. Each file has the same number of rows (~300) and 4 header lines which are not necessarily identical, but should not be merged (only take the header lines from the first .csv file). The tokens in the lines are comma separated with no spaces in between.
As a python noob I've not come up with a solution, though I'm sure there's a simple solution to this problem. Any help is welcome.
A:
You can load the CSV files using the csv module in Python. Please refer to the documentation of this module for the loading code, I cannot remember it but it is really easy. Something like:
import csv
reader = csv.reader(open("some.csv", "rb"))
csvContent = list(reader)
After that, when you have the CSV files loaded in such form (a list of tuples):
[ ("header1", "header2", "header3", "header4"),
("value01", "value12", "value13", "value14"),
("value11", "value12", "value13", "value14"),
...
]
You can merge two such lists line-by-line:
result = [a+b for (a,b) in zip(csvList1, csvList2)]
To save such a result, you can use:
writer = csv.writer(open("some.csv", "wb"))
writer.writerows(result)
A:
The csv module is your friend.
A:
If you don't necessarily have to use Python, you can use shell tools like paste/gawk etc
$ paste file1 file2 file3 file4 .. | awk 'NR>4'
The above will put them horizontally without the headers. If you want the headers, just get them from file1
$ ( head -4 file ; paste file[1-4] | awk 'NR>4' ) > output
A:
You don't need to use the csv module for this. You can just use
file1 = open(file1)
After opening all your files you can do this
from itertools import izip_longest
foo=[]
for new_line in izip_longest(file1, file2, file3, ..., fillvalue=''):
foo.append(new_line)
This will give you this structure (which kon has already shown you). It will also work if you have different numbers of lines in each file:
[ ("line10", "line20", "line30", "line40"),
("line11", "line21", "line31", "line41"),
...
]
After this you can just write it to a new file taking 1 list at a time
for listx in foo:
    new_file.write(','.join(j.rstrip('\n') for j in listx) + '\n')
PS: more about izip_longest here
A:
You learn by doing (and trying, even). So, I'll just give you a few hints. Use the following functions:
To open a file: open()
To read all the lines in a file: IOBase.readlines()
To split a string according to a series of splitting tokents: str.split()
If you really don't know what to do, I recommend you read the tutorial and Dive Into Python 3. (Depending on how much Python you know, you'll either have to read through the first few chapters or cut straight to the file IO chapters.)
A:
Purely for learning purposes
A simple approach that does not take advantage of the csv module:
# your list of csv files
csv_files = [file1, file2, ...]

# read each file's lines up front, stripping the newlines
all_lines = []
for filex in csv_files:
    filex_f = open(filex, 'r')
    all_lines.append([line.rstrip('\n') for line in filex_f])
    filex_f.close()

# open file to write
file_to_write = open(filename, 'w')

# write the 4 header lines from the first file only
for header in all_lines[0][:4]:
    file_to_write.write(header + '\n')

# join the remaining rows of every file side by side
for rows in zip(*[lines[4:] for lines in all_lines]):
    file_to_write.write(','.join(rows) + '\n')

# close file
file_to_write.close()
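The horizontal (side-by-side) merge itself can be checked end-to-end in memory; this sketch uses io.StringIO stand-ins for the real files, two tiny files with 4 header lines each:

```python
import io

files = [
    io.StringIO("h1\nh2\nh3\nh4\n1,2\n3,4\n"),
    io.StringIO("H1\nH2\nH3\nH4\n5,6\n7,8\n"),
]
tables = [f.read().splitlines() for f in files]

out_lines = tables[0][:4]                   # headers from the first file only
for rows in zip(*(t[4:] for t in tables)):  # data rows joined side by side
    out_lines.append(",".join(rows))
merged = "\n".join(out_lines)

assert merged.splitlines()[4] == "1,2,5,6"
```

Replace the StringIO objects with `open(...)` handles for real files; `zip` silently truncates to the shortest file, so use `itertools.izip_longest`/`zip_longest` if lengths may differ.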
| How to merge several .csv files horizontally with python? | I've several .csv files (~10) and need to merge them together into a single file horizontally. Each file has the same number of rows (~300) and 4 header lines which are not necessarily identical, but should not be merged (only take the header lines from the first .csv file). The tokens in the lines are comma separated with no spaces in between.
As a python noob I've not come up with a solution, though I'm sure there's a simple solution to this problem. Any help is welcome.
| [
"You can load the CSV files using the csv module in Python. Please refer to the documentation of this module for the loading code, I cannot remember it but it is really easy. Something like:\nimport csv\nreader = csv.reader(open(\"some.csv\", \"rb\"))\ncsvContent = list(reader)\n\nAfter that, when you have the CSV ... | [
6,
2,
1,
1,
0,
0
] | [] | [] | [
"csv",
"file",
"python"
] | stackoverflow_0003986353_csv_file_python.txt |
Q:
Is python time module reliable enough to use to measure response time?
My question was not specific enough last time, and so this is a second question about this topic.
I'm running some experiments and I need to precisely measure participants' response time to questions in millisecond unit.
I know how to do this with the time module, but I was wondering if this is reliable enough or if I should be careful using it. I was wondering whether some other random CPU load could interfere with the measuring of time.
So my question is, will the response time measured with the time module be accurate, or will there be some noise associated with it?
Thank you,
Joon
A:
CPU load will affect timing. If your application is starved of a slice of CPU time, then timing will be affected. You cannot help that much; you can only be so precise. Ensure that your program gets a healthy slice of CPU time and the result will be accurate. In most cases, the results should be accurate to milliseconds.
A:
If you benchmark on a *nix system (Linux most probably), time.clock() will return CPU time in seconds. On its own, it's not very informative, but as a difference of results (i.e. t0 = time.clock(); some_process(); t = time.clock() - t0), you'd have a much more load-independent timing than with time.time().
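On modern Pythons, note that time.clock() was removed in Python 3.8; time.perf_counter() is now the recommended high-resolution timer for measuring elapsed intervals. The basic pattern is the same (the sleep below merely stands in for the participant's response):

```python
import time

t0 = time.perf_counter()            # high-resolution wall-clock timer
time.sleep(0.05)                    # stand-in for the participant's response
elapsed_ms = (time.perf_counter() - t0) * 1000.0
assert 40.0 < elapsed_ms < 5000.0   # roughly 50 ms, with generous slack
```

For CPU time specifically (the time.clock() behaviour described above), time.process_time() is the modern equivalent.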
| Is python time module reliable enough to use to measure response time? | My question was not specific enough last time, and so this is second question about this topic.
I'm running some experiments and I need to precisely measure participants' response time to questions in millisecond unit.
I know how to do this with the time module, but I was wondering if this is reliable enough or I should be careful using it. I was wondering if there are possibilities of some other random CPU load will interfere with the measuring of time.
So my question is, will the response time measure with time module be very accurate or there will be some noise associate with it?
Thank you,
Joon
| [
"CPU load will affect timing. If your application is startved of a slice of CPU time, then timing would get affected. You can not help that much. You can be as precise and no more. Ensure that your program gets a health slice of cpu time and the result will be accurate. In most cases, the results should be accurate... | [
2,
1
] | [] | [] | [
"python",
"time"
] | stackoverflow_0003989952_python_time.txt |
Q:
Numpy.array indexing question
I am trying to create a 'mask' of a numpy.array by specifying certain criteria. Python even has nice syntax for something like this:
>> A = numpy.array([1,2,3,4,5])
>> A > 3
array([False, False, False, True, True])
But if I have a list of criteria instead of a range:
>> A = numpy.array([1,2,3,4,5])
>> crit = [1,3,5]
I can't do this:
>> A in crit
I have to do something based on list comprehensions, like this:
>> [a in crit for a in A]
array([True, False, True, False, True])
Which is correct.
Now, the problem is that I am working with large arrays and the above code is very slow. Is there a more natural way of doing this operation that might speed it up?
EDIT: I was able to get a small speedup by making crit into a set.
EDIT2: For those who are interested:
Jouni's approach:
1000 loops, best of 3: 102 µs per loop
numpy.in1d:
1000 loops, best of 3: 1.33 ms per loop
EDIT3: Just tested again with B = randint(10,size=100)
Jouni's approach:
1000 loops, best of 3: 2.96 ms per loop
numpy.in1d:
1000 loops, best of 3: 1.34 ms per loop
Conclusion: Use numpy.in1d() unless B is very small.
A:
I think that the numpy function in1d is what you are looking for:
>>> A = numpy.array([1,2,3,4,5])
>>> B = [1,3,5]
>>> numpy.in1d(A,B)
array([ True, False, True, False, True], dtype=bool)
as stated in its docstring, "in1d(a, b) is roughly equivalent to np.array([item in b for item in a])"
Admittedly, I haven't done any speed tests, but it sounds like what you are looking for.
Another faster way
Here's another way to do it which is faster. Sort the B array first (containing the elements you are looking to find in A), turn it into a numpy array, and then do:
B[B.searchsorted(A)] == A
though if you have elements in A that are larger than the largest in B, you will need to do:
inds = B.searchsorted(A)
inds[inds == len(B)] = 0
mask = B[inds] == A
It may not be faster for small arrays (especially for B being small), but before long it will definitely be faster. Why? Because this is a O(N log M) algorithm, where N is the number of elements in A and M is the number of elements in B, while putting together a bunch of individual masks is O(N * M). I tested it with N = 10000 and M = 14 and it was already faster. Anyway, just thought that you might like to know, especially if you are truly planning on using this on very large arrays.
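The searchsorted trick has a pure-Python analogue in the standard library's bisect module, which is handy for checking the O(N log M) logic without NumPy (the helper name is made up):

```python
from bisect import bisect_left

def in_sorted(values, sorted_crit):
    """O(len(values) * log(len(sorted_crit))) membership mask, like in1d."""
    mask = []
    for v in values:
        i = bisect_left(sorted_crit, v)
        # bisect_left can return len(sorted_crit), so guard before indexing
        mask.append(i < len(sorted_crit) and sorted_crit[i] == v)
    return mask

assert in_sorted([1, 2, 3, 4, 5], [1, 3, 5]) == [True, False, True, False, True]
```

Like the numpy version, this requires the criteria list to be sorted beforehand.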
A:
Combine several comparisons with "or":
A = randint(10,size=10000)
mask = (A == 1) | (A == 3) | (A == 5)
Or if you have a list B and want to create the mask dynamically:
B = [1, 3, 5]
mask = zeros((10000,),dtype=bool)
for t in B: mask = mask | (A == t)
A:
Create a mask and use the compress function of the numpy array. It should be much faster.
If you have a complex criteria, remember to construct it based on math of the arrays.
a = numpy.array([3,1,2,4,5])
mask = a > 3
b = a.compress(mask)
or
a = numpy.random.random_integers(1,5,100000)
c=a.compress((a<=4)*(a>=2)) ## numbers between n<=4 and n>=2
d=a.compress(~((a<=4)*(a>=2))) ## numbers either n>4 or n<2
Ok, if you want a mask that has all a in [1,3,5] you can do something like
a = numpy.random.random_integers(1,5,100000)
mask=(a==1)+(a==3)+(a==5)
or
a = numpy.random.random_integers(1,5,100000)
mask = numpy.zeros(len(a), dtype=bool)
for num in [1,3,5]:
mask += (a==num)
| Numpy.array indexing question | I am trying to create a 'mask' of a numpy.array by specifying certain criteria. Python even has nice syntax for something like this:
>> A = numpy.array([1,2,3,4,5])
>> A > 3
array([False, False, False, True, True])
But if I have a list of criteria instead of a range:
>> A = numpy.array([1,2,3,4,5])
>> crit = [1,3,5]
I can't do this:
>> A in crit
I have to do something based on list comprehensions, like this:
>> [a in crit for a in A]
array([True, False, True, False, True])
Which is correct.
Now, the problem is that I am working with large arrays and the above code is very slow. Is there a more natural way of doing this operation that might speed it up?
EDIT: I was able to get a small speedup by making crit into a set.
EDIT2: For those who are interested:
Jouni's approach:
1000 loops, best of 3: 102 µs per loop
numpy.in1d:
1000 loops, best of 3: 1.33 ms per loop
EDIT3: Just tested again with B = randint(10,size=100)
Jouni's approach:
1000 loops, best of 3: 2.96 ms per loop
numpy.in1d:
1000 loops, best of 3: 1.34 ms per loop
Conclusion: Use numpy.in1d() unless B is very small.
| [
"I think that the numpy function in1d is what you are looking for:\n>>> A = numpy.array([1,2,3,4,5])\n>>> B = [1,3,5]\n>>> numpy.in1d(A,crit)\narray([ True, False, True, False, True], dtype=bool)\n\nas stated in its docstring, \"in1d(a, b) is roughly equivalent to np.array([item in b for item in a])\"\nAdmittedly... | [
6,
3,
0
] | [] | [] | [
"arrays",
"numpy",
"python"
] | stackoverflow_0003989990_arrays_numpy_python.txt |
Q:
Maximum recursion depth exceeded when providing unique id
I wanted to provide unique ID for different categories of models in my db. So I've introduced a dummy model :
class GUUID(models.Model):
guuid = models.PositiveSmallIntegerField(_(u"Dummy GUUID"), default=1)
and in model that I want to have unique ID:
class Event(models.Model):
unique = models.IntegerField(blank=False, editable=False)
def save(self):
guuid = GUUID()
guuid.save()
self.unique = guuid.id
self.save()
But when saving my model I'm getting:
maximum recursion depth exceeded while calling a Python object, and 997 GUUID objects end up in the db. Why is that happening?
A:
I think you want to replace self.save() with super(Event, self).save(). Also might not be a bad idea to grab the parameters from the Event save method and pass them up:
def save(self, *args, **kwargs):
#... other code here
super(Event, self).save(*args, **kwargs)
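The mechanism can be demonstrated without Django: calling `self.save()` from inside `save` dispatches straight back into the same method, so each pass creates another GUUID row until the interpreter's recursion limit (about 1000 frames by default) is hit, which is why roughly 997 rows appear. A plain-Python stand-in (class names are made up):

```python
class Base:
    def save(self):
        return "saved"

class BrokenEvent(Base):
    def save(self):
        return self.save()   # dispatches right back here: infinite recursion

class FixedEvent(Base):
    def save(self, *args, **kwargs):
        # super() skips this method and calls the parent's save instead
        return super(FixedEvent, self).save(*args, **kwargs)

blew_up = False
try:
    BrokenEvent().save()
except RecursionError:
    blew_up = True
assert blew_up
assert FixedEvent().save() == "saved"
```

The same reasoning applies to the Django model: `super(Event, self).save(*args, **kwargs)` hands control to `models.Model.save`, which actually writes the row.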
| Maximum recursion depth exceeded when providing unique id | I wanted to provide unique ID for different categories of models in my db. So I've introduced a dummy model :
class GUUID(models.Model):
guuid = models.PositiveSmallIntegerField(_(u"Dummy GUUID"), default=1)
and in model that I want to have unique ID:
class Event(models.Model):
unique = models.IntegerField(blank=False, editable=False)
def save(self):
guuid = GUUID()
guuid.save()
self.unique = guuid.id
self.save()
But when saving my model I'm getting:
maximum recursion depth exceeded while calling a Python object, and 997 GUUID objects end up in the db. Why is that happening?
| [
"I think you want to replace self.save() with super(Event, self).save(). Also might not be a bad idea to grab the parameters from the Event save method and pass them up:\ndef save(self, *args, **kwargs):\n #... other code here\n super(Event, self).save(*args, **kwargs)\n\n"
] | [
6
] | [] | [] | [
"django",
"django_models",
"python",
"recursion",
"uniqueidentifier"
] | stackoverflow_0003991008_django_django_models_python_recursion_uniqueidentifier.txt |
Q:
why python nose unittest teardown fixture failed
I'm using the nose test framework. When running a test module, the teardown function defined in it failed. The error raised says the fixture is locked by another process. Here is my test module, test_my_module.py:
... ...
def teardown():
if os.path.exists(test_output_dir):
shutil.rmtree(test_output_dir)
... ...
@with_setup(init_test_db, destroy_test_db)
def test_foo1():
eq_(foo1(),1)
@with_setup(init_test_db, destroy_test_db)
def test_foo2():
eq_(foo2(),2)
... ...
There is a db (sqlite3) file in the test_output_dir which is used as a fixture. Actually it is that db file that cannot be removed by the teardown, because it is locked by another process. To my understanding, a teardown will always run after all test functions have finished running. So why does that happen? Why can those test functions still lock the db file? Is it a sqlite3 issue, or is something wrong in my test code?
A:
You could try explicitly closing the sqlite connection in the teardown before removing test_output_dir.
A:
I believe I did have the same issue in my c# unit tests.
I solved it by calling SqliteConnection.ClearAllPools() before deleting the database file, so it is related to connection pooling.
Maybe there is an equivalent method in python? I really don't know.
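Putting the first suggestion into practice, the teardown could close the connection before removing the directory. A minimal self-contained sketch (the file and table names are made up, and a temp directory stands in for the real fixture path):

```python
import os
import shutil
import sqlite3
import tempfile

# stand-in for the question's fixture: a sqlite db inside a temp directory
test_output_dir = tempfile.mkdtemp()
conn = sqlite3.connect(os.path.join(test_output_dir, "fixture.db"))
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()

def teardown():
    conn.close()  # release the sqlite file lock first
    if os.path.exists(test_output_dir):
        shutil.rmtree(test_output_dir)

teardown()
print(os.path.exists(test_output_dir))  # prints: False
```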
| why python nose unittest teardown fixture failed | I'm using the nose test framework. When running a test module, the teardown function defined in it failed. The error raised says the fixture is locked by another process. Here is my test module, test_my_module.py:
... ...
def teardown():
if os.path.exists(test_output_dir):
shutil.rmtree(test_output_dir)
... ...
@with_setup(init_test_db, destroy_test_db)
def test_foo1():
eq_(foo1(),1)
@with_setup(init_test_db, destroy_test_db)
def test_foo2():
eq_(foo2(),2)
... ...
There is a db (sqlite3) file in the test_output_dir which is used as a fixture. Actually it is that db file that cannot be removed by the teardown, because it is locked by another process. To my understanding, a teardown will always run after all test functions have finished running. So why does that happen? Why can those test functions still lock the db file? Is it a sqlite3 issue, or is something wrong in my test code?
| [
"You could try explicitly closing the sqlite connection in the teardown before removing test_output_dir.\n",
"I believe I did have the same issue in my c# unit tests. \nI solved it by using calling SqliteConnection.ClearAllPools() before deleting the database file, so it is related to connection pooling. \nMaybe ... | [
0,
0
] | [] | [] | [
"fixture",
"nose",
"python",
"unit_testing"
] | stackoverflow_0003915850_fixture_nose_python_unit_testing.txt |
Q:
python to uncomment the right line
I have a file as follows:
line 1: _____
...
# for AAA
#export CONFIG = AAA_defconfig
# for BBB, BbB, Bbb, BBb3
#export CONFIG = BBB_defconfig
# for CCC, CcC, Ccc, Ccc1
#export CONFIG = CCC_defconfig
...
other lines
I want to manipulate the file so that, based on a given string, I can export the right "CONFIG" and comment out the others.
e.g. if I got "CcC", then the file would be manipulated as
line 1: _____
...
# for AAA
#export CONFIG = AAA_defconfig
# for BBB, BbB, Bbb, BBb3
#export CONFIG = BBB_defconfig
# for CCC, CcC, Ccc, Ccc1
export CONFIG = CCC_defconfig
...
other lines
What is a good way to do it in Python?
Thanks in advance!!
A:
Then why not make it as
line = 'xxxx'
CONFIG = default_deconfig
if line == 'AAA':
CONFIG = AAA_defconfig
elif line == 'CCC':
CONFIG = CCC_defconfig
...
unless this is not a python file and you want to manipulate it. It looks like that.
In that case create a config generator which will create a config file based on the line variable.
[Edit: Based on comments]
You might have to make some adjustments but this should work.
# Simple , crude solution
f = open('file1', 'r')
manipulated_lines = []
readFirstLine = False
config = ''
configComma = ''
uncommentLine = 0
for line in f:
tokens = line.split()
if uncommentLine == 1:
# this is comment line
if tokens[0] == '#export':
manipulated_lines.append(line[1:])
uncommentLine = uncommentLine + 1
continue
elif uncommentLine > 1:
manipulated_lines.append(line)
continue
if not readFirstLine:
config = line.rstrip('\n')
configComma = config + ','
readFirstLine = True
# Process additional lines
manipulated_lines.append(line)
if len(tokens) > 0 and tokens[0] == '#':
if tokens[1] == 'for':
if config in tokens or configComma in tokens:
uncommentLine = uncommentLine + 1
continue
print manipulated_lines
f.close()
fw = open('file2', 'w')
fw.writelines(manipulated_lines)
fw.close()
Input: file1
CCC
# for AAA
#export CONFIG = AAA_defconfig
# for BBB, BbB, Bbb, BBb3
#export CONFIG = BBB_defconfig
# for CCC, CcC, Ccc, Ccc1
#export CONFIG = CCC_defconfig
...
Output: file2
CCC
# for AAA
#export CONFIG = AAA_defconfig
# for BBB, BbB, Bbb, BBb3
#export CONFIG = BBB_defconfig
# for CCC, CcC, Ccc, Ccc1
export CONFIG = CCC_defconfig
...
A:
def select_export(text, source, destination):
uncomment_next = False
for line in source:
line = line.strip()
if line.startswith('# for ') and text in set(t.strip()
for t in line[6:].split(',')):
uncomment_next = True
elif line.startswith('#') and uncomment_next:
line = line[1:]
uncomment_next = False
destination.write(line + '\n')
with open('source') as f:
with open('destination', 'w') as w:
select_export('CcC', f, w)
A:
A slightly cleaner, more readable approach IMO.
(And, yes, to modify one line in a file, you have to overwrite and rewrite the entire file.)
#!/usr/bin/env python2.7
import re
def find_and_modify(config_file, given_string):
with open(config_file) as f:
lines = f.readlines()
given_string_re = re.compile(r'# for .*{}'.format(given_string))
# line #'s that start with either "export" or "#export"
export_line_numbers = []
# the line # containing the given_string we're searching for
uncomment_line_number = None
for i,line in enumerate(lines):
if re.match(r'#?export', line):
export_line_numbers.append(i)
prev_line = lines[i-1]
if given_string_re.match(prev_line):
uncomment_line_number = i
for i in export_line_numbers:
if i == uncomment_line_number:
lines[i] = re.sub(r'^#*', '', lines[i])
else:
lines[i] = re.sub(r'^#*', '#', lines[i])
with open(config_file, 'w') as f:
f.writelines(lines)
find_and_modify('some_file', 'AAA')
find_and_modify('some_file', 'CcC')
A:
First create a generator function:
import re
def uncomment(seq, prev_pattern, curr_pattern):
"""Remove comment from any string in seq matching curr_pattern if the previous line matches prev_pattern"""
prev = ""
for curr in seq:
if re.match(curr_pattern, curr) and re.match(prev_pattern, prev):
yield curr[1:]
else:
yield curr
prev = curr
Now test it:
>>> lines = ["leave this alone", "#fix next line", "#fix this line", "leave this alone"]
>>> print "\n".join(uncomment(lines, "^#fix next", "^#fix this"))
leave this alone
#fix next line
fix this line
leave this alone
Now use it to fix your file:
with open(input_filename, 'r') as f_in:
with open(output_filename, 'w') as f_out:
for line in uncomment(f_in, "^# for AAA", "^#export CONFIG"):
f_out.write(line)
| python to uncomment the right line | I have a file as follows:
line 1: _____
...
# for AAA
#export CONFIG = AAA_defconfig
# for BBB, BbB, Bbb, BBb3
#export CONFIG = BBB_defconfig
# for CCC, CcC, Ccc, Ccc1
#export CONFIG = CCC_defconfig
...
other lines
I want to manipulate the file so that, based on a given string, I can export the right "CONFIG" and comment out the others.
e.g. if I got "CcC", then the file would be manipulated as
line 1: _____
...
# for AAA
#export CONFIG = AAA_defconfig
# for BBB, BbB, Bbb, BBb3
#export CONFIG = BBB_defconfig
# for CCC, CcC, Ccc, Ccc1
export CONFIG = CCC_defconfig
...
other lines
What is a good way to do it in Python?
Thanks in advance!!
| [
"Then why not make it as\nline = 'xxxx'\nCONFIG = default_deconfig\n\nif line == 'AAA':\n CONFIG = AAA_defconfig\nelif line == 'CCC':\n CONFIG = CCC_defconfig\n...\n\nunless this is not a python file and you want to manipulate it. It looks like that.\nIn that case create a config generator which will create... | [
1,
1,
1,
1
] | [] | [] | [
"file_manipulation",
"python",
"string"
] | stackoverflow_0003989967_file_manipulation_python_string.txt |
Q:
Django - AutoField with regards to a foreign key
I have a model with a unique integer that needs to increment with regards to a foreign key, and the following code is how I currently handle it:
class MyModel(models.Model):
business = models.ForeignKey(Business)
number = models.PositiveIntegerField()
spam = models.CharField(max_length=255)
class Meta:
unique_together = (('number', 'business'),)
def save(self, *args, **kwargs):
if self.pk is None: # New instance's only
try:
highest_number = MyModel.objects.filter(business=self.business).order_by('-number').all()[0].number
self.number = highest_number + 1
except ObjectDoesNotExist: # First MyModel instance
self.number = 1
super(MyModel, self).save(*args, **kwargs)
I have the following questions regarding this:
Multiple people can create MyModel instances for the same business, all over the internet. Is it possible for 2 people creating MyModel instances at the same time, and .count() returns 500 at the same time for both, and then both try to essentially set self.number = 501 at the same time (raising an IntegrityError)? The answer seems like an obvious "yes, it could happen", but I had to ask.
Is there a shortcut, or "Best way" to do this, which I can use (or perhaps a SuperAutoField that handles this)?
I can't just slap a while model_not_saved: try:, except IntegrityError: in, because other restraints in the model could lead to an endless loop, and a disaster worse than Chernobyl (maybe not quite that bad).
A:
You want that constraint at the database level. Otherwise you're going to eventually run into the concurrency problem you discussed. The solution is to wrap the entire operation (read, increment, write) in a transaction.
Why can't you use an AutoField instead of a PositiveIntegerField?
number = models.AutoField()
However, in this case number is almost certainly going to equal yourmodel.id, so why not just use that?
Edit:
Oh, I see what you want. You want a number field that doesn't increment unless there's more than one instance per MyModel.business.
I would still recommend just using the id field if you can, since it's certain to be unique. If you absolutely don't want to do that (maybe you're showing this number to users), then you will need to wrap your save method in a transaction.
You can read more about transactions in the docs:
http://docs.djangoproject.com/en/dev/topics/db/transactions/
If you're just using this to count how many instances of MyModel have a FK to Business, you should do that as a query rather than trying to store a count.
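To illustrate the read-increment-write transaction idea outside of Django, here is a raw-SQL sketch using the stdlib sqlite3 module (the table and column names mirror the question, but this is not Django ORM code; in a real Django app you would use its transaction API instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE mymodel (business INTEGER, number INTEGER, "
    "UNIQUE (business, number))")

def next_number(business):
    # "with conn" runs the read-increment-insert as one transaction,
    # so the UNIQUE constraint plus the transaction prevent duplicates
    with conn:
        row = conn.execute(
            "SELECT COALESCE(MAX(number), 0) + 1 FROM mymodel "
            "WHERE business = ?", (business,)).fetchone()
        conn.execute(
            "INSERT INTO mymodel (business, number) VALUES (?, ?)",
            (business, row[0]))
    return row[0]

print(next_number(1), next_number(1), next_number(2))  # prints: 1 2 1
```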
| Django - AutoField with regards to a foreign key | I have a model with a unique integer that needs to increment with regards to a foreign key, and the following code is how I currently handle it:
class MyModel(models.Model):
business = models.ForeignKey(Business)
number = models.PositiveIntegerField()
spam = models.CharField(max_length=255)
class Meta:
unique_together = (('number', 'business'),)
def save(self, *args, **kwargs):
if self.pk is None: # New instance's only
try:
highest_number = MyModel.objects.filter(business=self.business).order_by('-number').all()[0].number
self.number = highest_number + 1
except ObjectDoesNotExist: # First MyModel instance
self.number = 1
super(MyModel, self).save(*args, **kwargs)
I have the following questions regarding this:
Multiple people can create MyModel instances for the same business, all over the internet. Is it possible for 2 people creating MyModel instances at the same time, and .count() returns 500 at the same time for both, and then both try to essentially set self.number = 501 at the same time (raising an IntegrityError)? The answer seems like an obvious "yes, it could happen", but I had to ask.
Is there a shortcut, or "Best way" to do this, which I can use (or perhaps a SuperAutoField that handles this)?
I can't just slap a while model_not_saved: try:, except IntegrityError: in, because other restraints in the model could lead to an endless loop, and a disaster worse than Chernobyl (maybe not quite that bad).
| [
"You want that constraint at the database level. Otherwise you're going to eventually run into the concurrency problem you discussed. The solution is to wrap the entire operation (read, increment, write) in a transaction.\nWhy can't you use an AutoField for instead of a PositiveIntegerField?\nnumber = models.AutoFi... | [
2
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0003991437_django_django_models_python.txt |
Q:
Best way to integrate Erlang and python
What's the best way to integrate Erlang and python?
We need to call Python functions from Erlang and call Erlang functions from Python. At the moment we are trying to use SOAP as an intermediate layer between these two languages, but we have a lot of incompatibility troubles. Could you advise on the best way to perform the integration?
A:
As already mentioned, with erlport you can use the Erlang port protocol and the term_to_binary/binary_to_term functions on the Erlang side. On the Python side there is a low-level port driver, Port, which can send and receive messages from Erlang, and a higher-level protocol handler, Protocol, which simplifies the situation when you want to call a Python function from Erlang. Currently there are no auxiliary interfaces on the Erlang side, but it's possible there will be some in the future. In the examples directory you can find some example code for different situations.
And feel free to contact me about any ErlPort related topic.
A:
In my experience, the best is erlport.
It allows you to build an Erlang port in Python by satisfying the Erlang port protocol. It handles the data compatibility issue by implementing the Erlang external term format. The linked page shows a clear example of how to use it.
| Best way to integrate Erlang and python | What's the best way to integrate Erlang and python?
We need to call Python functions from Erlang and call Erlang functions from Python. At the moment we are trying to use SOAP as an intermediate layer between these two languages, but we have a lot of incompatibility troubles. Could you advise on the best way to perform the integration?
| [
"As already mentioned with erlport you can use Erlang port protocol and term_to_binary/binary_to_term functions on Erlang side. On Python side there are low level port driver Port which can send and receive messages from Erlang and more high level protocol handler Protocol which simplified situation when you want t... | [
26,
11
] | [] | [] | [
"erlang",
"python"
] | stackoverflow_0003990344_erlang_python.txt |
Q:
CherryPy combine file and dictionary based configuration
I'm setting up a CherryPy application and would like to have the majority of my configuration settings in a .conf file like this:
[global]
server.socketPort = 8080
server.threadPool = 10
server.environment = "production"
However I would also like to setup a few with a dictionary in code like this:
conf = {'/': {'tools.staticdir.on': True,
'tools.staticdir.dir': os.path.join(current_dir, 'templates')}}
cherrypy.quickstart(HelloWorld(), config=conf)
Is it possible to combine both configs into one and then pass it into the config quickstart option?
A:
quickstart is for quick sites. If you're doing anything as complex as having multiple configs, it's time to graduate. Look at the source code for the quickstart function (it's not scary!): you're going to unpack that into your startup script. So instead of quickstart, write this:
cherrypy.config.update(conffile)
cherrypy.config.update(confdict)
app = cherrypy.tree.mount(HelloWorld(), '/', conffile)
app.merge(confdict)
if hasattr(cherrypy.engine, "signal_handler"):
cherrypy.engine.signal_handler.subscribe()
if hasattr(cherrypy.engine, "console_control_handler"):
cherrypy.engine.console_control_handler.subscribe()
cherrypy.engine.start()
cherrypy.engine.block()
We've essentially added two lines to the quickstart code. First, we have an extra call to config.update; that merges the config dict into the global config. Second, app.merge(confdict); that's for merging multiple configs into each app.
It's perfectly OK to do these in the opposite order if you want the file config to override the dict. It's also OK to stick the dict-based config in HelloWorld._cp_config as described in the docs.
A:
Those are two different configurations. Cherrypy has two configurations: One is the global config and the other is application config. You can use both normally:
cherrypy.config.update('my_file.ini')
cherrypy.quickstart(HelloWorld(), config=conf)
Please note that your example config file is wrong -- instead of server.socketPort it should be server.socket_port, and instead of server.threadPool it should be server.thread_pool. Check the config docs for more information.
| CherryPy combine file and dictionary based configuration | I'm setting up a CherryPy application and would like to have the majority of my configuration settings in a .conf file like this:
[global]
server.socketPort = 8080
server.threadPool = 10
server.environment = "production"
However I would also like to setup a few with a dictionary in code like this:
conf = {'/': {'tools.staticdir.on': True,
'tools.staticdir.dir': os.path.join(current_dir, 'templates')}}
cherrypy.quickstart(HelloWorld(), config=conf)
Is it possible to combine both configs into one and then pass it into the config quickstart option?
| [
"quickstart is for quick sites. If you're doing anything as complex as having multiple configs, it's time to graduate. Look at the source code for the quickstart function (it's not scary!): you're going to unpack that into your startup script. So instead of quickstart, write this:\ncherrypy.config.update(conffile)\... | [
11,
3
] | [] | [] | [
"cherrypy",
"configuration",
"python"
] | stackoverflow_0003989763_cherrypy_configuration_python.txt |
Q:
how to create file names from a number plus a suffix in python
How to create file names from a number plus a suffix?
For example, I am using two programs in a Python script that work on a server: the first creates a file x and the second uses the x file. The problem is that this file cannot be overwritten.
No matter what name is generated by the first program, the second program must take exactly the path and file name that was assigned in order to continue the script.
thanks for your help and attention
A:
As far as I can understand you, you want to create a file with a unique name in one program and pass the name of that file to another program. I think you should take a look at the tempfile module, http://docs.python.org/library/tempfile.html#module-tempfile.
Here is an example that makes use of NamedTemporaryFile:
import tempfile
import os
def produce(text):
with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as f:
f.write(text)
return f.name
def consume(filename):
try:
with open(filename) as f:
return f.read()
finally:
os.remove(filename)
if __name__ == '__main__':
filename = produce('Hello, world')
print('Filename is: {0}'.format(filename))
text = consume(filename)
print('Text is: {0}'.format(text))
assert not os.path.exists(filename)
The output is something like this:
Filename is: /tmp/tmpp_iSrw.txt
Text is: Hello, world
| how to create file names from a number plus a suffix | How to create file names from a number plus a suffix?
For example, I am using two programs in a Python script that work on a server: the first creates a file x and the second uses the x file. The problem is that this file cannot be overwritten.
No matter what name is generated by the first program, the second program must take exactly the path and file name that was assigned in order to continue the script.
thanks for your help and attention
| [
"As far as I can understand you, you want to create a file with a unique name in one program and pass the name of that file to another program. I think you should take a look at the tempfile module, http://docs.python.org/library/tempfile.html#module-tempfile.\nHere is an example that makes use of NamedTemporaryFil... | [
3
] | [] | [] | [
"filenames",
"python"
] | stackoverflow_0003991330_filenames_python.txt |
Q:
Options for handling javascript heavy pages while screen scraping
Disclaimer here: I'm really not a programmer. I'm eager to learn, but my experience is pretty much basic on c64 20 years ago and a couple of days of learning Python.
I'm just starting out on a fairly large (for me as a beginner) screen scraping project. So far I have been using Python with mechanize+lxml for my browsing/parsing. Now I'm encountering some really javascript-heavy pages that don't show anything without javascript enabled, which means trouble for mechanize.
From my searching I've kind of come to the conclusion that I basically have a few options:
Trying to figure out what the javascript is doing and emulate that in my code (I don't quite know where to start with this. ;-))
Using pywin32 to control internet explorer or something similar, like using the webkit-browser from pyqt4 or even using telnet and mozrepl (this seems really hard)
Switching languages to Perl since WWW::Mechanize seems to be a lot more mature there (addons and such for javascript). Don't know too much about this at all.
If anyone has some pointers here that would be great. I understand that I need to do a lot of trial and error, but it would be nice if I didn't go too far away from the "true" answer, if there is such a thing.
A:
You might be able to find the data you are looking for elsewhere. Try using the web-developer toolbar in firefox to see what is being loaded by javascript. It might be that you can find the data in the js files.
Otherwise, you probably do need to use Mechanize. There are two tutorials that you might find useful here:
http://scraperwiki.com/help/tutorials/python/
A:
A fourth option might be to use browserjs.
This is supposed to be a way to run a browser environment in Mozilla Rhino or some other command-line javascript engine. Presumably you could (at least in theory) load the page in that environment and dump the HTML after JS has had its way with it.
I haven't really used it myself, I tried a couple of times but found it way too slow for my purposes. I didn't try very hard though, there might be an option you need to set or some such.
A:
I use Chickenfoot for simple tasks and python-webkit for more complex ones. I've had good experience with both.
Here is a snippet to render a webpage (including executing any JavaScript) and return the resulting HTML:
# imports needed for this snippet (PyQt4)
import sys
from PyQt4.QtCore import QUrl
from PyQt4.QtGui import QApplication
from PyQt4.QtWebKit import QWebPage

class Render(QWebPage):
def __init__(self, url):
self.app = QApplication(sys.argv)
QWebPage.__init__(self)
self.loadFinished.connect(self._loadFinished)
self.mainFrame().load(QUrl(url))
self.app.exec_()
def _loadFinished(self, result):
self.html = str(self.mainFrame().toHtml())
self.app.quit()
html = Render(url).html
A:
For nonprogrammers, I recommend using IRobotSoft. It is visually oriented and has full javascript support. The shortcoming is that it runs only on Windows. The good thing is you can become an expert just by learning the software through trial and error.
| Options for handling javascript heavy pages while screen scraping | Disclaimer here: I'm really not a programmer. I'm eager to learn, but my experience is pretty much basic on c64 20 years ago and a couple of days of learning Python.
I'm just starting out on a fairly large (for me as a beginner) screen scraping project. So far I have been using Python with mechanize+lxml for my browsing/parsing. Now I'm encountering some really javascript-heavy pages that don't show anything without javascript enabled, which means trouble for mechanize.
From my searching I've kind of come to the conclusion that I basically have a few options:
Trying to figure out what the javascript is doing and emulate that in my code (I don't quite know where to start with this. ;-))
Using pywin32 to control internet explorer or something similar, like using the webkit-browser from pyqt4 or even using telnet and mozrepl (this seems really hard)
Switching languages to Perl since WWW::Mechanize seems to be a lot more mature there (addons and such for javascript). Don't know too much about this at all.
If anyone has some pointers here that would be great. I understand that I need to do a lot of trial and error, but it would be nice if I didn't go too far away from the "true" answer, if there is such a thing.
| [
"You might be able to find the data you are looking for elsewhere. Try using the web-developer toolbar in firefox to see what is being loaded by javascript. It might be that you can find the data in the js files.\nOtherwise, you probably do need to use Mechanize. There are two tutorials that you might find useful h... | [
1,
0,
0,
0
] | [] | [] | [
"python",
"screen_scraping"
] | stackoverflow_0003929005_python_screen_scraping.txt |
Q:
What exactly does distutils do?
I have read the documentation but I don't understand.
Why do I have to use distutils to install Python modules?
Why can't I just save the modules in the Python path?
A:
You don't have to use distutils. You can install modules manually, just like you can compile a C++ library manually (compile every implementation file, then link the .obj files) or install an application manually (compile, put into its own directory, add a shortcut for launching). It just gets tedious and error-prone, as does every repetitive task done manually.
Moreover, the manual steps I listed for the examples are pretty optimistic - often, you want to do more. For example, PyQt adds the .ui-to-.py-compiler to the path so you can invoke it via command line.
So you end up with a stack of work that could be automated. This alone is a good argument.
Also, the devs would have to write installing instructions. With distutils etc, you only have to specify what your project consists of (and fancy extras if and only if you need it) - for example, you don't need to tell it to put everything in a new folder in site-packages, because it already knows this.
So in the end, it's easier for developers and for users.
A:
What Python modules? For installing a Python package, if it exists on PyPI you should do:
pip install <name_of_package>
If not, you should download the .tar.gz (or whatever) archive, see if you find a setup.py, and run it like this:
python setup.py install
Or if you want to install it in development mode (you can change the package and see the result without installing it again):
python setup.py develop
This is the usual way to distribute a Python package (the setup.py); and this setup.py is the one that calls distutils.
To summarize: distutils is a Python package that helps developers create a Python package installer that will build and install a given package by just running the command setup.py install.
So basically, what distutils does (I will list only the important stuff):
it searches for the dependencies of the package (and installs them automatically).
it copies the package modules into site-packages, or just creates a symlink if it's in develop mode.
you can create an egg of your package.
it can also run tests over your package.
you can use it to upload your package to pypi.
if you want more detail see this http://docs.python.org/library/distutils.html
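For reference, a minimal setup.py of the kind described above might look like this (the package and module names are placeholders, and the sys.argv line is only there so the snippet runs on its own as if invoked with "python setup.py --name"):

```python
import sys
from distutils.core import setup

# demo only: pretend the script was invoked as "python setup.py --name",
# which makes distutils print the project name instead of requiring a command
sys.argv = ["setup.py", "--name"]

dist = setup(
    name="mypackage",           # placeholder project name
    version="0.1",
    description="Example package",
    py_modules=["mymodule"],    # placeholder module to install
)
```

In a real setup.py you would drop the sys.argv line and run the file with `python setup.py install` or `python setup.py develop`.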
A:
You don't have to use distutils to get your own modules working on your own machine; saving them in your python path is sufficient.
When you decide to publish your modules for other people to use, distutils provides a standard way for them to install your modules on their machines. (The "dist" in "distutils" means distribution, as in distributing your software to others.)
| What exactly does distutils do? | I have read the documentation but I don't understand.
Why do I have to use distutils to install Python modules?
Why can't I just save the modules in the Python path?
| [
"You don't have to use distutils. You can install modules manually, just like you can compile a C++ library manually (compile every implementation file, then link the .obj files) or install an application manually (compile, put into its own directory, add a shortcut for launching). It just gets tedious and error-pr... | [
5,
4,
2
] | [] | [] | [
"distutils",
"python"
] | stackoverflow_0003991335_distutils_python.txt |
Q:
How to draw this window?
Can I create a window without this topbar?
I want a window like this:
A:
Here they say you could try:
gtk.Window.set_decorated(False)
A:
You could also do
window.set_type_hint(gtk.gdk.WINDOW_TYPE_HINT_SPLASHSCREEN)
but that doesn't have rounded corners either, at least on my theme. As eumiro says, that depends on the theme settings.
| How to draw this window? | Can I create a window without this topbar?
I want a window like this:
| [
"Here they say you could try:\ngtk.Window.set_decorated(False)\n\n",
"You could also do\nwindow.set_type_hint(gtk.gdk.WINDOW_TYPE_HINT_SPLASHSCREEN)\n\nbut that doesn't have rounded corners either, at least on my theme. As eumiro says, that depends on the theme settings.\n"
] | [
2,
1
] | [] | [] | [
"gtk",
"pygtk",
"python",
"ubuntu"
] | stackoverflow_0003991706_gtk_pygtk_python_ubuntu.txt |
Q:
ImportError: No module named ***** in python
I am very new to python, about one month, and am trying to figure out how the importing works in python. I was told that I can import any 'module' that has Python code in it. So I am trying to import a module just to try it out, but I keep getting an 'ImportError: No module named reduc'. This is an example of the python shell:
>>> import os
>>> os.chdir('C:\Users\Cube\Documents\Python')
>>> for file in os.listdir(os.getcwd()):
print file
pronounce.py
pronounce.pyc
readwrite.py
rectangle.py
reduc.py
>>> import reduc
Traceback (most recent call last):
File "<pyshell#32>", line 1, in <module>
import reduc
ImportError: No module named reduc
What am I doing wrong? Am I overlooking something, or was I just wrongly informed?
A:
These files are not on sys.path, but they need to be.
If you want to access them from the interpreter, you will need to add the location to sys.path
>>> import sys
>>> print sys.path
>>> sys.path.append('C:\\Users\\Cube\\Documents\\Python')
>>> import reduc
You could also include the path in environment variable - PYTHONPATH
See the details on module search path here :
http://docs.python.org/tutorial/modules.html#the-module-search-path
http://docs.python.org/library/sys.html#sys.path
Also look at (PYTHONPATH) environment variable details here:
http://docs.python.org/using/cmdline.html#environment-variables
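A small self-contained demo of the sys.path fix (the directory and module name here are invented for the example):

```python
import os
import sys
import tempfile

# make a throwaway module in a directory that is NOT on sys.path yet
mod_dir = tempfile.mkdtemp()
with open(os.path.join(mod_dir, "reduc_demo.py"), "w") as f:
    f.write("ANSWER = 42\n")

sys.path.append(mod_dir)  # now the interpreter can locate the module
import reduc_demo

print(reduc_demo.ANSWER)  # prints: 42
```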
| ImportError: No module named ***** in python | I am very new to python, about one month, and am trying to figure out how the importing works in python. I was told that I can import any 'module' that has Python code in it. So I am trying to import a module just to try it out, but I keep getting an 'ImportError: No module named reduc'. This is an example of the python shell:
>>> import os
>>> os.chdir('C:\Users\Cube\Documents\Python')
>>> for file in os.listdir(os.getcwd()):
print file
pronounce.py
pronounce.pyc
readwrite.py
rectangle.py
reduc.py
>>> import reduc
Traceback (most recent call last):
File "<pyshell#32>", line 1, in <module>
import reduc
ImportError: No module named reduc
What am I doing wrong? Am I overlooking something, or was I just wrongly informed?
| [
"These files are not on sys.path. It should have been though.\nIf you want to access them from the interpreter, you will need to add the location to sys.path\n>>> import sys\n>>> print sys.path\n>>> sys.path.append('C:\\\\Users\\\\Cube\\\\Documents\\\\Python')\n>>> import reduc\n\nYou could also include the path in... | [
16
] | [] | [] | [
"python",
"python_import"
] | stackoverflow_0003992952_python_python_import.txt |
Q:
passing text through a dictionary in Python
I currently have python code that compares two texts using the cosine similarity measure. I got the code here.
What I want to do is take the two texts and pass them through a dictionary (not a python dictionary, just a dictionary of words) first before calculating the similarity measure. The dictionary will just be a list of words, although it will be a large list. I know it shouldn't be hard and I could maybe stumble my way through something, but I would like it to be efficient too. Thanks.
A:
If the dictionary fits in memory, use a Python set:
ok_words = set(["a", "b", "c", "e"])
def filter_words(words):
return [word for word in words if word in ok_words]
If it doesn't fit in memory, you can use shelve
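If the word list does have to live on disk, a shelve-backed variant might look like this (a sketch only; the temp-directory housekeeping and one-time population step are assumptions made for the example):

```python
import os
import shelve
import tempfile

# Populate the on-disk dictionary once; a temp directory stands in
# for a real data path here.
db_path = os.path.join(tempfile.mkdtemp(), "ok_words")
with shelve.open(db_path) as db:
    for word in ["a", "b", "c", "e"]:
        db[word] = True  # only the keys matter

def filter_words(words):
    # Membership tests hit the on-disk index instead of an in-memory set.
    with shelve.open(db_path, flag="r") as db:
        return [w for w in words if w in db]

result = filter_words(["a", "x", "e"])
```

Opening with flag="r" is read-only, so the filtering step never mutates the stored word list.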
A:
The structure you are trying to create is known as an Inverted Index. Here you can find some general information about it and snippets from Heaps and Mills's implementation. Unfortunately, I wasn't able to find its source, or any other efficient implementation. (Please leave a comment if you find any.)
If your goal is not to create a library in pure Python, you can use PyLucene - a Python extension for accessing Lucene, which in turn is a very powerful search engine written in Java. Lucene implements an inverted index and can easily provide you with information on word frequency. It also supports a wide range of analyzers (parsers + stemmers) for a dozen languages.
(Also note that Lucene already has its own Similarity measure class.)
Some words about similarity and Vector Space Models. VSM is a very powerful abstraction, but your implementation suffers several disadvantages. As the number of documents in your index grows, the co-occurrence matrix will become too big to fit in memory, and searching it will take a long time. To counter this effect, dimensionality reduction is used. In methods like LSA this is done by Singular Value Decomposition. Also pay attention to techniques such as PLSA, which uses probabilistic theory, and Random Indexing, which is the only incremental (and so the only one appropriate for large indexes) VSM method.
| passing text through a dictionary in Python | I currently have python code that compares two texts using the cosine similarity measure. I got the code here.
What I want to do is take the two texts and pass them through a dictionary (not a python dictionary, just a dictionary of words) first before calculating the similarity measure. The dictionary will just be a list of words, although it will be a large list. I know it shouldn't be hard and I could maybe stumble my way through something, but I would like it to be efficient too. Thanks.
| [
"If the dictionary fites in memory, use a Python set:\nok_words = set([\"a\", \"b\", \"c\", \"e\"])\n\ndef filter_words(words):\n return [word for word in words if word in ok_words]\n\nIf it doesn't fit in memory, you can use shelve\n",
"The structure you try to create is known as Inverted Index. Here you can ... | [
1,
0
] | [] | [] | [
"python",
"similarity",
"text"
] | stackoverflow_0003992769_python_similarity_text.txt |
Q:
Why does this Python service stop by itself in spite of an infinite loop?
I managed to install a Python script as a service using this recipe at ActiveState: win-services-helper.
In order to get much use out of it, I included the business end of my program by replacing lines 99 - 100:
95 class Test(Service):
96 def start(self):
97 self.runflag=True
98 while self.runflag:
99 # My logging script
100 # goes here
101 def stop(self):
102 self.runflag=False
103 self.log("I'm done")
105
106 instart(Test, 'aTest', 'Python Service Test')
My problem is, the service logs only one entry and then stops by itself. Then I go to services.msc, attempt to start the service again, then Windows displays this message:
The Python Service Test service on Local Computer started and then stopped. Some services stop automatically when they have no work to do, for example, the Performance Logs and Alerts service.
How come my service thinks it has nothing to do when I placed it in an infinite loop? It should stop only when I shut down the computer and start working again the next time the computer is turned on, even without anyone logging in.
NOTE: My logging script uses simple file read/write to csv. It doesn't use any fancy logging modules.
A:
I believe that this message appears too when an error is encountered. It's not unlikely that the user running the service hasn't got the rights to write in the log file. You might try to create that csv file and set up the access rights so that "Everyone" can write to it.
A:
It works now. Thanks everyone for the insights!
The recipe consisted of two scripts:
winservice.py
winservice_test.py
In my previous attempt, I picked out some lines which I thought were useful. I might have missed some of them, hence it didn't work.
I left winservice.py as it is. For the business end of my script, I placed it in a loop structure in winservice_test.py.
After writing the code, I went to the command line, browsed to where the two scripts were stored, then installed the service with
python winservice_test.py
My service can now be accessed through services.msc. By default, it runs as Local system account, which presents problems upon logging out. The service will still continue, but it will grind to a halt because the account doesn't have write access to the log folder.
I modified its Properties so that it runs with my user account (which has write access to the log folder). My service now survives logouts, stops working only upon shutdown, and also starts itself as soon as Windows boots up even without anyone logging in.
A:
You need to look at the entire code path that determines when the service stops, and you can use other tools like python daemonize to control your code.
| Why does this Python service stop by itself in spite of an infinite loop? | I managed to install a Python script as a service using this recipe at ActiveState: win-services-helper.
In order to get much use out of it, I included the business end of my program by replacing lines 99 - 100:
95 class Test(Service):
96 def start(self):
97 self.runflag=True
98 while self.runflag:
99 # My logging script
100 # goes here
101 def stop(self):
102 self.runflag=False
103 self.log("I'm done")
105
106 instart(Test, 'aTest', 'Python Service Test')
My problem is, the service logs only one entry and then stops by itself. Then I go to services.msc, attempt to start the service again, then Windows displays this message:
The Python Service Test service on Local Computer started and then stopped. Some services stop automatically when they have no work to do, for example, the Performance Logs and Alerts service.
How come my service thinks it has nothing to do when I placed it in an infinite loop? It should stop only when I shut down the computer and start working again the next time the computer is turned on, even without anyone logging in.
NOTE: My logging script uses simple file read/write to csv. It doesn't use any fancy logging modules.
| [
"I believe that this message appears too when an error is encountered. It's not unlikely that the user running the service hasn't got the rights to write in the log file. You might try to create that csv file and set up the access rights so that \"Everyone\" can write to it.\n",
"It works now. Thanks everyone for... | [
2,
1,
0
] | [] | [] | [
"python",
"service",
"windows",
"windows_services"
] | stackoverflow_0003938746_python_service_windows_windows_services.txt |
Q:
multicpu bzip2 using a python script
I want to quickly bzip2 compress several hundred gigabytes of data
using my 8-core, 16 GB RAM workstation.
Currently I am using a simple python script to compress a whole
directory tree using bzip2 and an os.system call coupled to an os.walk
call.
I see that the bzip2 only uses a single cpu while the other cpus
remain relatively idle.
I am a newbie to queues and threaded processes, but I am wondering how
I can implement this so that I can have four bzip2 threads running
(actually I guess os.system threads), each probably using its own
CPU, that deplete files from a queue as they bzip them.
My single-threaded script is pasted here.
import os
import sys
for roots, dirlist , filelist in os.walk(os.curdir):
for file in [os.path.join(roots,filegot) for filegot in filelist]:
if "bz2" not in file:
print "Compressing %s" % (file)
os.system("bzip2 %s" % file)
print ":DONE"
A:
Use the subprocess module to spawn several processes at once. If N of them are running (N should be a bit bigger than the number of CPUs you have, say 3 for 2 cores, 10 for 8), wait for one to terminate and then start another one.
Note that this might not help much since there will be a lot of disk activity which you can't parallelize. A lot of free RAM for caches helps.
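A minimal sketch of that pattern with subprocess (the pool size and the placeholder commands are assumptions; a real run would pass ['bzip2', path] for each file):

```python
import subprocess
import sys
import time

def run_pool(commands, max_workers=3):
    # Keep at most max_workers child processes alive at once and start
    # a new one as soon as a slot frees up.
    running, returncodes = [], []
    for cmd in commands:
        while len(running) >= max_workers:
            alive = [p for p in running if p.poll() is None]
            returncodes.extend(p.returncode for p in running if p not in alive)
            running = alive
            if len(running) >= max_workers:
                time.sleep(0.01)  # avoid a busy loop while all slots are full
        running.append(subprocess.Popen(cmd))
    returncodes.extend(p.wait() for p in running)  # drain the stragglers
    return returncodes

# Harmless placeholder commands for demonstration; substitute
# ['bzip2', path] per file in real use.
codes = run_pool([[sys.executable, "-c", "pass"]] * 5, max_workers=3)
```

Because each worker is a separate OS process, this sidesteps the GIL entirely, unlike the thread-based version below.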
A:
Try this code from MRAB on comp.lang.python:
import os
import sys
from threading import Thread, Lock
from Queue import Queue
def report(message):
mutex.acquire()
print message
sys.stdout.flush()
mutex.release()
class Compressor(Thread):
def __init__(self, in_queue, out_queue):
Thread.__init__(self)
self.in_queue = in_queue
self.out_queue = out_queue
def run(self):
while True:
path = self.in_queue.get()
sys.stdout.flush()
if path is None:
break
report("Compressing %s" % path)
os.system("bzip2 %s" % path)
report("Done %s" % path)
self.out_queue.put(path)
in_queue = Queue()
out_queue = Queue()
mutex = Lock()
THREAD_COUNT = 4
worker_list = []
for i in range(THREAD_COUNT):
worker = Compressor(in_queue, out_queue)
worker.start()
worker_list.append(worker)
for roots, dirlist, filelist in os.walk(os.curdir):
for file in [os.path.join(roots, filegot) for filegot in filelist]:
if "bz2" not in file:
in_queue.put(file)
for i in range(THREAD_COUNT):
in_queue.put(None)
for worker in worker_list:
worker.join()
| multicpu bzip2 using a python script | I want to quickly bzip2 compress several hundred gigabytes of data
using my 8-core, 16 GB RAM workstation.
Currently I am using a simple python script to compress a whole
directory tree using bzip2 and an os.system call coupled to an os.walk
call.
I see that the bzip2 only uses a single cpu while the other cpus
remain relatively idle.
I am a newbie to queues and threaded processes, but I am wondering how
I can implement this so that I can have four bzip2 threads running
(actually I guess os.system threads), each probably using its own
CPU, that deplete files from a queue as they bzip them.
My single-threaded script is pasted here.
import os
import sys
for roots, dirlist , filelist in os.walk(os.curdir):
for file in [os.path.join(roots,filegot) for filegot in filelist]:
if "bz2" not in file:
print "Compressing %s" % (file)
os.system("bzip2 %s" % file)
print ":DONE"
| [
"Use the subprocess module to spawn several processes at once. If N of them are running (N should a bit bigger than the number of CPUs you have, say 3 for 2 cores, 10 for 8), wait for one to terminate and then start another one.\nNote that this might not help much since there will be a lot of disk activity which yo... | [
1,
1
] | [] | [] | [
"bzip2",
"os.walk",
"python"
] | stackoverflow_0003345199_bzip2_os.walk_python.txt |
Q:
What does ... mean in numpy code?
And what is it called? I don't know how to search for it; I tried calling it ellipsis with the Google. I don't mean in interactive output when dots are used to indicate that the full array is not being shown, but as in the code I'm looking at,
xTensor0[...] = xVTensor[..., 0]
From my experimentation, it appears to function similarly to : in indexing, but stands in for multiple :'s, making x[:,:,1] equivalent to x[...,1].
A:
Yes, you're right. It fills in as many : as required. The only difference occurs when you use multiple ellipses. In that case, the first ellipsis acts in the same way, but each remaining one is converted to a single :.
A:
Although this feature exists mainly to support numpy and other, similar modules, it's a core feature of the language and can be used anywhere, like so:
>>> class foo:
... def __getitem__(self, key):
... return key
...
>>> aFoo = foo()
>>> aFoo[..., 1]
(Ellipsis, 1)
>>>
or even:
>>> derp = {}
>>> derp[..., 1] = "herp"
>>> derp
{(Ellipsis, 1): 'herp'}
A:
Documentation here: http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
It does, well, what you describe it doing.
| What does ... mean in numpy code? | And what is it called? I don't know how to search for it; I tried calling it ellipsis with the Google. I don't mean in interactive output when dots are used to indicate that the full array is not being shown, but as in the code I'm looking at,
xTensor0[...] = xVTensor[..., 0]
From my experimentation, it appears to function similarly to : in indexing, but stands in for multiple :'s, making x[:,:,1] equivalent to x[...,1].
| [
"Yes, you're right. It fills in as many : as required. The only difference occurs when you use multiple ellipses. In that case, the first ellipsis acts in the same way, but each remaining one is converted to a single :.\n",
"Although this feature exists mainly to support numpy and other, similar modules, it's ... | [
7,
3,
0
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0003993125_numpy_python.txt |
Q:
Facebook Permission request form for Crawling?
I have been Googling for some time but I guess I am using the wrong set of keywords. Does anyone know the URI that lets me request permission from Facebook to let me crawl their network? Last time I was using Python to do this, someone suggested that I look at it, but I couldn't find that post either.
A:
Amazingly enough, that's given in their robots.txt.
The link you're looking for is this one:
http://www.facebook.com/apps/site_scraping_tos.php
If you're not a huge organization already, don't expect to be explicitly whitelisted there. If you're not explicitly whitelisted, you're not allowed to crawl at all, according to the robots.txt and the TOS. You must use the API instead.
Don't even think about pretending to be one of the whitelisted crawlers. Facebook filters by whitelisted IP for each crawler and anything else that looks at all like crawling gets an instant perma-ban. For a while users who simply clicked too fast could occasionally run into this.
A:
Since this is a community with login & password, I am not sure how much of it is legally crawlable. Note that even Google indexes just the user profile pages, not their wall posts or photos etc.
I would suggest you post this question in the Facebook forum. You can also check here -
Facebook Developers
Facebook Developers Documentation
Facebook Developers Forum
| Facebook Permission request form for Crawling? | I have been Googling for some time but I guess I am using the wrong set of keywords. Does anyone know the URI that lets me request permission from Facebook to let me crawl their network? Last time I was using Python to do this, someone suggested that I look at it, but I couldn't find that post either.
| [
"Amazingly enough, that's given in their robots.txt.\nThe link you're looking for is this one:\nhttp://www.facebook.com/apps/site_scraping_tos.php\nIf you're not a huge organization already, don't expect to be explicitly whitelisted there. If you're not explicitly whitelisted, you're not allowed to crawl at all, ac... | [
5,
0
] | [] | [] | [
"facebook",
"mechanize",
"python",
"web_crawler"
] | stackoverflow_0003993474_facebook_mechanize_python_web_crawler.txt |
Q:
Python regular expression problem
I need to do something in regex but I'm really not good at it, long time didn't do that .
/a/c/a.doc
I need to change it to
\\a\\c\\a.doc
Please trying to do it by using regular expression in Python.
A:
I'm entirely in favor of helping user483144 distinguish "solution" from "regular expression", as the previous two answerers have already done. It occurs to me, moreover, that os.path.normpath() http://docs.python.org/library/os.path.html might be what he's really after.
A:
Why do you think every solution to your problem needs a regular expression?
>>> s="/a/c/a.doc"
>>> '\\'.join(s.split("/"))
'\\a\\c\\a.doc'
By the way, if you are going to change path separators, you may just as well use os.path.join
eg
mypath = os.path.join("C:\\","dir","dir1")
Python will choose the correct slash for you. Also, check out os.sep if you are interested.
A:
You can do it without regular expressions:
x = '/a/c/a.doc'
x = x.replace('/',r'\\')
But if you really want to use re:
x = re.sub('/', r'\\', x )
A:
Do you mean "\\" or r"\\"?
re.sub(r'/', r'\\', 'a/b/c')
Always use raw strings, r'....', when you write regular expressions.
A:
'\\'.join(r'/a/c/a.doc'.split("/"))
| Python regular expression problem | I need to do something in regex but I'm really not good at it, long time didn't do that .
/a/c/a.doc
I need to change it to
\\a\\c\\a.doc
Please trying to do it by using regular expression in Python.
| [
"I'm entirely in favor of helping user483144 distinguish \"solution\" from \"regular expression\", as the previous two answerers have already done. It occurs to me, moreover, that os.path.normpath() http://docs.python.org/library/os.path.html might be what he's really after.\n",
"why do you think you every solut... | [
5,
2,
1,
0,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0003993385_python_regex.txt |
Q:
GAE CGI: how to respond with an HTTP status code
I am using Google App Engine, which is a CGI environment. I want to block some requests and respond with nothing, not even an HTTP status code. Alternatively, I want to just close the connection. Can I do this?
update:
I have decide to use what pyfunc said, use 204 status, but how can I do this at GAE CGI environment without any webframework.
update 2:
Thanks a lot, but... I really need a CGI way, not WSGI way. Please see the comment in my codes.
def main()
#Block requests at once.
if (settings.BLOCK_DOWNLOAD and os.environ.get('HTTP_RANGE')) \
or (settings.BLOCK_EXTENSION.match(os.environ['PATH_INFO'])):
#TODO: return 204 response in CGI way.
#I really do not need construct a WSGIApplication and then response a status code.
return
application = webapp.WSGIApplication([
(r'/', MainHandler),
#...
], debug=True)
run_wsgi_app(application)
if __name__ == '__main__':
main()
A:
The HTTP status code is an important part of the response.
You can use HTTP 204 No Content to respond with empty content.
You could use 403 Forbidden, but I prefer 204 for a silent drop of the request.
You could drop the connection, but that would be rude and can result in the server being pounded with connections as the user may retry.
[Edit: updated question]
You can look at many a examples on SO tagged with GAE:
https://stackoverflow.com/questions/tagged/google-app-engine
It is my understanding that you will be using the webapp framework. Brush up on its usage.
http://code.google.com/appengine/docs/python/tools/webapp/
Check how to set response object status code at
http://code.google.com/appengine/docs/python/tools/webapp/redirects.html
Here is an example of a bare-bones server that responds with 204 No Content. I have not tested it, but it should be along these lines.
import wsgiref.handlers
from google.appengine.ext import webapp
class MainHandler(webapp.RequestHandler):
def get(self):
return self.response.set_status(204)
def main():
application = webapp.WSGIApplication([('/', MainHandler)], debug=True)
    wsgiref.handlers.CGIHandler().run(application)
if __name__ == '__main__':
main()
See a complete application at :
http://code.google.com/p/pubsubhubbub/source/browse/trunk/hub/main.py?spec=svn335&r=146#1228
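For the plain-CGI route the asker wants, there is no WSGIApplication needed at all: a CGI script signals the status with a Status: header followed by a blank line and an empty body. A sketch (written to a buffer here for demonstration; a real CGI script would write to sys.stdout before emitting anything else):

```python
import io

def respond_204(out):
    # In CGI the status line is conveyed by a "Status:" header;
    # the blank line terminates the headers and the body is empty.
    out.write("Status: 204 No Content\r\n")
    out.write("Content-Length: 0\r\n")
    out.write("\r\n")

# Capture the response for demonstration instead of sys.stdout.
buf = io.StringIO()
respond_204(buf)
response = buf.getvalue()
```

In the asker's main(), calling this before constructing the WSGIApplication and then returning early would implement the block.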
| GAE CGI: how to respond with an HTTP status code | I am using Google App Engine, which is a CGI environment. I want to block some requests and respond with nothing, not even an HTTP status code. Alternatively, I want to just close the connection. Can I do this?
update:
I have decide to use what pyfunc said, use 204 status, but how can I do this at GAE CGI environment without any webframework.
update 2:
Thanks a lot, but... I really need a CGI way, not WSGI way. Please see the comment in my codes.
def main()
#Block requests at once.
if (settings.BLOCK_DOWNLOAD and os.environ.get('HTTP_RANGE')) \
or (settings.BLOCK_EXTENSION.match(os.environ['PATH_INFO'])):
#TODO: return 204 response in CGI way.
#I really do not need construct a WSGIApplication and then response a status code.
return
application = webapp.WSGIApplication([
(r'/', MainHandler),
#...
], debug=True)
run_wsgi_app(application)
if __name__ == '__main__':
main()
| [
"\nThe HTTP status code is important in response\n\n\nYou can use HTTP NO CONTENT 204 to responde with empty content.\nYou could use 403 Forbidden but I prefer 204 to make a silent drop of request \nYou could loose connection but that would be rude and can result in server being pounded with connections as user may... | [
4
] | [] | [] | [
"cgi",
"google_app_engine",
"python"
] | stackoverflow_0003993827_cgi_google_app_engine_python.txt |
Q:
how to round to higher 10's place in python
I have a bunch of floats and I want to round them up to the next highest multiple of 10.
For example:
10.2 should be 20
10.0 should be 10
16.7 should be 20
94.9 should be 100
I only need it to go from the range 0-100. I tried math.ceil() but that only rounds up to the nearest integer.
Thanks in advance.
A:
from math import ceil
def ceil_to_tens(x):
return int(ceil(x / 10.0)) * 10
Edit: okay, now that I have an undeserved "Nice answer" badge for this answer, I think I owe the community a proper solution using the decimal module that does not suffer from these problems :) Thanks to Jeff for pointing this out. So, a solution using decimal works as follows:
from decimal import Decimal, ROUND_UP
def ceil_to_tens_decimal(x):
return (Decimal(x) / 10).quantize(1, rounding=ROUND_UP) * 10
Of course the above code requires x to be an integer, a string or a Decimal object - floats won't work as that would defeat the whole purpose of using the decimal module.
It's a pity that Decimal.quantize does not work properly with numbers larger than 1, it would have saved the division-multiplication trick.
A:
>>> x = 16.7
>>> int(10 * math.ceil(x / 10.0))
A:
The answers here are fraught with peril. For example 11*1.1 - 2.1 = 10.0, right? But wait:
>>> x = 11*1.1 - 2.1
>>> int(ceil(x / 10.0)) * 10
20
>>> x
10.000000000000002
>>>
You could try this
int(ceil(round(x, 12) / 10.0)) * 10
But choosing the number of decimal places to round to is really difficult as it is hard to predict how floating point noise accumulates. If it is really important to get this right all of the time, then you need to use fixed point arithmetic or Decimal.
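A sketch of that Decimal route, taking the value as a string so float noise never enters in the first place (the helper name is made up):

```python
from decimal import Decimal, ROUND_CEILING

def ceil_to_tens_exact(value):
    # value should arrive as a string, int, or Decimal; converting a
    # float here would reintroduce the binary-representation noise.
    return int((Decimal(value) / 10).to_integral_value(rounding=ROUND_CEILING) * 10)
```

Dividing by 10, taking the ceiling to an integral value, then multiplying back gives the exact next multiple of ten.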
A:
If you're looking for another solution that doesn't involve float division, here's one that uses the modulus:
def ceil_to_tens_mod(x):
tmp = int(ceil(x))
mod10 = tmp % 10
return tmp - mod10 + (10 if mod10 else 0)
There's probably some way to simplify it, but there you go.
| how to round to higher 10's place in python | I have a bunch of floats and I want to round them up to the next highest multiple of 10.
For example:
10.2 should be 20
10.0 should be 10
16.7 should be 20
94.9 should be 100
I only need it to go from the range 0-100. I tried math.ceil() but that only rounds up to the nearest integer.
Thanks in advance.
| [
"from math import ceil\n\ndef ceil_to_tens(x):\n return int(ceil(x / 10.0)) * 10\n\nEdit: okay, now that I have an undeserved \"Nice answer\" badge for this answer, I think owe the community with a proper solution using the decimal module that does not suffer from these problems :) Thanks to Jeff for pointing th... | [
15,
4,
1,
0
] | [] | [] | [
"math",
"python"
] | stackoverflow_0003986996_math_python.txt |
Q:
Python unicode: how to test against unicode string
I have a script like this:
#!/Python26/
# -*- coding: utf-8 -*-
import sys
import xlrd
import xlwt
argset = set(sys.argv[1:])
#----------- import ----------------
wb = xlrd.open_workbook("excelfile.xls")
#----------- script ----------------
#Get the first sheet either by name
sh = wb.sheet_by_name(u'Data')
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value in argset:
if sh.cell(i, 8).value == '':
continue
hlo.append(sh.cell(i, 8).value)
excelfile.xls contains unicode strings and I want to test against these strings from command line:
C:\>python pythonscript.py päätyö
pythonscript.py:34: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
if sh.cell(i, 1).value in argset:
How should I modify my code for Unicode?
A:
Python has a sequence type called unicode which will be useful here. These links contain more information to help you regarding this:
Python Unicode HOWTO
Python built-in types (See section 6.6).
Unicode In Python, Completely Demystified
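Applied to the script above, the idea is to decode the byte strings in sys.argv into unicode before building the set, so the comparison is text against text. A sketch; 'utf-8' is an assumed console encoding, and a Windows console may actually need 'mbcs' or 'cp850':

```python
def decode_args(argv, encoding="utf-8"):
    # Decode byte-string arguments so they compare equal to the
    # unicode values that xlrd returns from the workbook.
    decoded = []
    for arg in argv:
        if isinstance(arg, bytes):
            arg = arg.decode(encoding)
        decoded.append(arg)
    return set(decoded)

# b'p\xc3\xa4\xc3\xa4ty\xc3\xb6' is the UTF-8 encoding of 'päätyö'
argset = decode_args([b"p\xc3\xa4\xc3\xa4ty\xc3\xb6"])
```

In the original script this would replace the bare set(sys.argv[1:]) assignment.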
A:
Try encoding the Excel unicode to a byte string using cp1252 (the Windows default codepage) and then testing. I know a lot of people don't recommend this, but this is what sometimes solves my problems.
Pseudo=> if sh.cell(i, 1).value.encode('cp1252') in argset:
...
Br.
| Python unicode: how to test against unicode string | I have a script like this:
#!/Python26/
# -*- coding: utf-8 -*-
import sys
import xlrd
import xlwt
argset = set(sys.argv[1:])
#----------- import ----------------
wb = xlrd.open_workbook("excelfile.xls")
#----------- script ----------------
#Get the first sheet either by name
sh = wb.sheet_by_name(u'Data')
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value in argset:
if sh.cell(i, 8).value == '':
continue
hlo.append(sh.cell(i, 8).value)
excelfile.xls contains unicode strings and I want to test against these strings from command line:
C:\>python pythonscript.py päätyö
pythonscript.py:34: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
if sh.cell(i, 1).value in argset:
How should I modify my code for Unicode?
| [
"Python has a sequence type called unicode which will be useful here. These links contain more information to help you regarding this:\n\nPython Unicode HOWTO\nPython built-in types (See section 6.6).\nUnicode In Python, Completely Demystified\n\n",
"Try encoding the Excel unicode to string using cp1252 (windows ... | [
4,
1
] | [] | [] | [
"python",
"unicode"
] | stackoverflow_0001818263_python_unicode.txt |
Q:
How do I check for if an exact string exists in another string?
I'm currently running into a bit of a problem. I'm trying to write a program that will highlight occurrences of a word or phrase inside of another string, but only if the string it's being matched to is exactly the same. The part I'm running into troubles with is identifying whether or not the subphrase I'm matching the phrase with is contained within another larger subphrase.
A quick example which shows this problem:
>>> indicators = ["therefore", "for", "since"]
>>> phrase = "... therefore, I conclude I am awesome."
>>> indicators_in_phrase = [indicator for indicator in indicators
if indicator in phrase.lower()]
>>> print indicators_in_phrase
['therefore', 'for']
I do not want 'for' included in that list. I know why it is being included, but I can't think of any expression that could filter out substrings like that.
I've noticed other similar questions on the site, but each involves a Regex solution, which is something I'm not feeling comfortable with yet, especially not in Python. Is there any kind-of-easy way to solve this problem without using a Regex expression? If not, the corresponding Regex expression and how it might be implemented in the above example would be very much appreciated.
A:
There are ways to do it without a regex, but most of those ways are so convoluted that you'll wish you had spent the time learning the simple regex sequence that you need for it.
A:
It is one line with regex...
import re
indicators = ["therefore", "for", "since"]
phrase = "... therefore, I conclude I am awesome."
indicators_in_phrase = set(re.findall(r'\b(%s)\b' % '|'.join(indicators), phrase.lower()))
A:
The regex are the simplest way!
Hint:
re.compile(r'\btherefore\b')
Then you can change the word in the middle!
EDIT: I wrote this for you:
import re
indicators = ["therefore", "for", "since"]
phrase = "... therefore, I conclude I am awesome. "
def find(phrase, indicators):
def _match(i):
return re.compile(r'\b%s\b' % (i)).search(phrase)
return [ind for ind in indicators if _match(ind)]
>>> find(phrase, indicators)
['therefore']
A:
I think what you are trying to do is something more like this:
import string
words_in_phrase = string.split(phrase)
Now you'll have the words in a list like this:
['...', 'therefore,', 'I', 'conclude', 'I', 'am', 'awesome.']
Then compare the lists like so:
indicators_in_phrase = []
for word in words_in_phrase:
if word in indicators:
indicators_in_phrase.append(word)
There's probably several ways to make this less verbose, but I prefer clarity. Also, you might have to think about removing punctuation as in "awesome." and "therefore,"
For that, use rstrip as in the other answer.
A:
Create set of indicators
Create set of phrases
Find intersection
Code:
indicators = ["therefore", "for", "since"]
phrase = "... therefore, I conclude I am awesome."
print list(set(indicators).intersection(set( [ each.strip('.,') for each in phrase.split(' ')])))
Cheers:)
A:
A little lengthy but gives an idea / of course regex is there to make it simple
>>> indicators = ["therefore", "for", "since"]
>>> phrase = "... therefore, I conclude I am awesome."
>>> phrase_list = phrase.split()
>>> phrase_list
['...', 'therefore,', 'I', 'conclude', 'I', 'am', 'awesome.']
>>> phrase_list = [ k.rstrip(',') for k in phrase_list]
>>> indicators_in_phrase = [indicator for indicator in indicators if indicator in phrase_list]
>>> indicators_in_phrase
['therefore']
A:
Is the problem with "for" that it's inside "therefore" or that it's not a word? For example, if one of your indicators was "awe", would you want it to be included in indicators_in_phrase?
How would you want the following situation to be handled?
indicators = ["abc", "cde"]
phrase = "One abcde two"
A:
You can strip off punctuations from your phrase, then do split on it so that all words are individual. Then you can do your string comparison
>>> indicators = ["therefore", "for", "since"]
>>> phrase = "... therefore, I conclude I am awesome."
>>> ''.join([ i for i in phrase.lower() if i not in string.punctuation]).strip().split()
['therefore', 'I', 'conclude', 'I', 'am', 'awesome']
>>> p = ''.join([ i for i in phrase.lower() if i not in string.punctuation]).strip().split()
>>> indicators_in_phrase = [indicator for indicator in indicators if indicator in p ]
>>> indicators_in_phrase
['therefore']
| How do I check for if an exact string exists in another string? | I'm currently running into a bit of a problem. I'm trying to write a program that will highlight occurrences of a word or phrase inside of another string, but only if the string it's being matched to is exactly the same. The part I'm running into troubles with is identifying whether or not the subphrase I'm matching the phrase with is contained within another larger subphrase.
A quick example which shows this problem:
>>> indicators = ["therefore", "for", "since"]
>>> phrase = "... therefore, I conclude I am awesome."
>>> indicators_in_phrase = [indicator for indicator in indicators
if indicator in phrase.lower()]
>>> print indicators_in_phrase
['therefore', 'for']
I do not want 'for' included in that list. I know why it is being included, but I can't think of any expression that could filter out substrings like that.
I've noticed other similar questions on the site, but each involves a Regex solution, which is something I'm not feeling comfortable with yet, especially not in Python. Is there any kind-of-easy way to solve this problem without using a Regex expression? If not, the corresponding Regex expression and how it might be implemented in the above example would be very much appreciated.
| [
"There are ways to do it without a regex, but most of those ways are so convoluted that you'll wish you had spent the time learning the simple regex sequence that you need for it.\n",
"It is one line with regex...\nimport re\n\nindicators = [\"therefore\", \"for\", \"since\"]\nphrase = \"... therefore, I conclude... | [
5,
2,
2,
1,
1,
0,
0,
0
] | [] | [] | [
"pattern_matching",
"python",
"string"
] | stackoverflow_0003994010_pattern_matching_python_string.txt |
Q:
How do I update a single field, using a variable as a field name?
I want to do something like
for field in field_list:
my_table.field = new_value
Couldn't find any documentation on how to do the above with a variable.
A:
setattr(mytable, field, new_value)
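A hedged sketch of how that one-liner fits into the loop from the question; the plain object here is a hypothetical stand-in for a Django model instance (on a real model you would also call save() afterwards):

```python
class Row(object):
    """Stand-in for a model instance (hypothetical)."""
    pass

row = Row()
field_list = ['name', 'status']
new_value = 'updated'

for field in field_list:
    # setattr assigns to the attribute whose name is held in the variable
    setattr(row, field, new_value)

print(row.name, row.status)  # updated updated
```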
| How do I update a single field, using a variable as a field name? | I want to do something like
for field in field_list:
my_table.field = new_value
Couldn't find any documentation on how to do the above with a variable.
| [
"setattr(mytable, field, new_value)\n"
] | [
5
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0003994222_django_django_models_python.txt |
Q:
Python : One regular expression help
In python, I've got a string like
ABC(a =2,bc=2, asdf_3 = None)
By using regular expression, I want to make it like
ABC(a =2,bc=2)
I want to remove the parameter that named 'asdf_3', that's it!
Update: The parameters can be a lot, only the asdf_3 is same in all cases, the order is usually the last one.
A:
import re
foo = 'ABC(a =2,bc=2, asdf_3 = None)'
bar = re.sub(r',?\s+asdf_3\s+=\s+[^,)]+', '', foo)
# bar is now 'ABC(a =2,bc=2,)'
The \s+ portions let you match any amount of whitespace, and the [^,)]+ part lets it match basically anything on the right-hand side that doesn't start another argument.
A:
You are doing homework, right? Show some effort on your part next time, since you have already asked questions like this.
>>> foo.split("asdf_3")[0].strip(", ")+")"
'ABC(a =2,bc=2)'
>>> re.sub(",*\s+asdf_3.*",")",foo)
'ABC(a =2,bc=2)'
A:
Since you say that the asdf_3 parameter is only usually the last one, I suggest the following:
result = re.sub(
r"""(?x) # verbose regex
,? # match a comma, if present
\s* # match spaces, if present
asdf_3 # match asdf_3
[^,)]* # match any number of characters except ) or ,
""", "", subject)
This assumes that commas can't be part of the argument after asdf_3, e. g. like asdf_3 = "foo, bar".
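A hedged check of the compact equivalent of that verbose pattern, against both the trailing position from the question and a middle position (assuming, as above, no commas inside the argument value):

```python
import re

# Compact form of the verbose pattern above: optional comma, optional
# whitespace, the parameter name, then everything up to the next , or )
pattern = r",?\s*asdf_3[^,)]*"

# trailing position, as in the question
print(re.sub(pattern, "", "ABC(a =2,bc=2, asdf_3 = None)"))   # ABC(a =2,bc=2)
# middle position
print(re.sub(pattern, "", "ABC(a =2, asdf_3 = None, bc=2)"))  # ABC(a =2, bc=2)
```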
| Python : One regular expression help | In python, I've got a string like
ABC(a =2,bc=2, asdf_3 = None)
By using regular expression, I want to make it like
ABC(a =2,bc=2)
I want to remove the parameter that named 'asdf_3', that's it!
Update: The parameters can be a lot, only the asdf_3 is same in all cases, the order is usually the last one.
| [
"import re\n\nfoo = 'ABC(a =2,bc=2, asdf_3 = None)'\nbar = re.sub(r',?\\s+asdf_3\\s+=\\s+[^,)]+', '', foo)\n\n# bar is now 'ABC(a =2,bc=2,)'\n\nThe \\s+ portions let you match any amount of whitespace, and the [^,)]+ part lets it match basically anything on the right-hand side that doesn't start another argument.\n",
"... | [
0,
0,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0003993793_python_regex.txt |
Q:
Python Regular Expression Matching: ## ##
I'm new to python and regular expressions. I'm searching a file line by line for the occurrence of ##random_string##, and I want to be able to capture the random_string in-between the ##s.
I've tried both patterns but no luck =/
pattern1=r'[##]()[##]'
pattern2=r'\#{2}()\#{2}'
prog=re.compile(pattern1)
result=prog.search(line)
if result:
print result.group(0)
Thanks for any help =]
A:
Try using:
'##(.*?)##'
The problem with your regex is that you are trying to match an empty string between the ## using a (), you should be using .*? to match anything or a .+? to match any non-empty thing.
Your first regex [##]()[##] has an additional bug. A character class matches a single character, example: [ab] matches an a or b but not both. So [##] does not match ##, in fact it's redundant to have duplicate characters in a character class, so [##] is same as [#] which is same as #.
Your second regex '\#{2}()\#{2}' is almost correct but for the empty match thing. Also note that a # is not a meta character ( like ., +, *) hence you need not escape it. So you can drop the \ in \#, but having it is not an error.
A:
Your group is empty.
'##(.+?)##'
A:
Or:
'##([^#]*)##'
(not tested)
A:
If your line has multiple ##()## pairs, what would be your output? I.e., if there is overlapping of the patterns and you want to get those overlaps:
>>> line="blah ## i want 1 ## blah blah ## i want 2 ## blah"
>>> line.split("##")[1:-1]
[' i want 1 ', ' blah blah ', ' i want 2 ']
>>> line="blah ## i want 1 ## blah"
>>> line.split("##")[1:-1]
[' i want 1 ']
>>> line="blah ## i want 1 ## blah ## "
>>> line.split("##")[1:-1]
[' i want 1 ', ' blah ']
>>>
If you don't want overlapping,
>>> line="blah ## i want 1 ## blah ## i want ## "
>>> [i for n,i in enumerate(line.split("##")[1:]) if n%2==0]
[' i want 1 ', ' i want ']
>>> line="blah ## i want 1 ## blah "
>>> [i for n,i in enumerate(line.split("##")[1:]) if n%2==0]
[' i want 1 ']
>>> line="blah ## i want 1 ## blah ## iwant2 ## junk ## i want 3 ## ..."
>>> [i for n,i in enumerate(line.split("##")[1:]) if n%2==0]
[' i want 1 ', ' iwant2 ', ' i want 3 ']
>>>
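For the multiple-occurrence case, the non-greedy pattern from the first answer combines naturally with re.findall; a hedged sketch:

```python
import re

line = "blah ## i want 1 ## blah blah ## i want 2 ## blah"

# findall returns every non-overlapping capture, left to right,
# so this corresponds to the "no overlapping" split variant above
parts = re.findall(r'##(.*?)##', line)
print(parts)  # [' i want 1 ', ' i want 2 ']
```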
| Python Regular Expression Matching: ## ## | I'm new to python and regular expressions. I'm searching a file line by line for the occurrence of ##random_string##, and I want to be able to capture the random_string in-between the ##s.
I've tried both patterns but no luck =/
pattern1=r'[##]()[##]'
pattern2=r'\#{2}()\#{2}'
prog=re.compile(pattern1)
result=prog.search(line)
if result:
print result.group(0)
Thanks for any help =]
| [
"Try using:\n'##(.*?)##'\n\nThe problem with your regex is that you are trying to match an empty string between the ## using a (), you should be using .*? to match anything or a .+? to match any non-empty thing.\nYour first regex [##]()[##] has an additional bug. A character class matches a single character, exampl... | [
6,
0,
0,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0003994024_python_regex.txt |
Q:
Python: Create associative array in a loop
I want to create an associative array with values read from a file. My code looks something like this, but it's giving me an error saying the indices must be ints.
Thanks =]
for line in open(file):
x=prog.match(line)
myarray[x.group(1)]=[x.group(2)]
A:
myarray = {} # Declares myarray as a dict
for line in open(file, 'r'):
x = prog.match(line)
myarray[x.group(1)] = [x.group(2)] # Adds a key-value pair to the dict
A:
Associative arrays in Python are called mappings. The most common type is the dictionary.
A:
Because list indices must be integers:
>>> a = [1,2,3]
>>> a['r'] = 3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: list indices must be integers, not str
>>> a[1] = 4
>>> a
[1, 4, 3]
x.group(1) would have to be an integer, or,
if you want a mapping, define the dict first:
myarray = {}
for line in open(file):
x=prog.match(line)
myarray[x.group(1)]=[x.group(2)]
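One caveat worth hedging about the dict approaches above: a key that appears on several lines keeps only its last value in a plain dict. If duplicate keys are possible, collections.defaultdict can accumulate values instead; a minimal sketch with inline pairs standing in for the matched lines:

```python
from collections import defaultdict

pairs = [('a', '1'), ('b', '2'), ('a', '3')]  # stand-in for matched lines

merged = defaultdict(list)
for key, value in pairs:
    merged[key].append(value)  # duplicate keys accumulate instead of overwriting

print(dict(merged))  # {'a': ['1', '3'], 'b': ['2']}
```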
| Python: Create associative array in a loop | I want to create an associative array with values read from a file. My code looks something like this, but it's giving me an error saying the indices must be ints.
Thanks =]
for line in open(file):
x=prog.match(line)
myarray[x.group(1)]=[x.group(2)]
| [
"myarray = {} # Declares myarray as a dict\nfor line in open(file, 'r'):\n x = prog.match(line)\n myarray[x.group(1)] = [x.group(2)] # Adds a key-value pair to the dict\n\n",
"Associative arrays in Python are called mappings. The most common type is the dictionary.\n",
"Because array indices should be an ... | [
17,
5,
1
] | [] | [] | [
"associative_array",
"python"
] | stackoverflow_0003994345_associative_array_python.txt |
Q:
Reading files in GAE using python
I created a simple python project that serves up a couple of pages. I'm using the 'webapp' framework and django. What I'm trying to do is use one template file, and load 'content files' that contain the actual page text.
When I try to read the content files using os.open, I get the following error:
pageContent = os.open(pageUrl, 'r').read()
OSError: [Errno 1] Operation not permitted: 'content_includes/home.inc' error
If I let the django templating system to read the same file for me, everything works fine!
So the question is What am I doing wrong that django isn't??? The same 'pageUrl' is used.
The code below will give me the error, while if I comment out the first pageContent assignment, everything works fine.
Code:
pageName = "home";
pageUrl = os.path.join(os.path.normpath('content_includes'), pageName + '.inc')
pageContent = os.open(pageUrl, 'r').read()
pageContent=template.render(pageUrl, template_values, debug=True);
template_values = { 'page': pageContent,
'test': "testing my app"
}
Error:
Traceback (most recent call last):
File "/opt/apis/google_appengine/google/appengine/ext/webapp/__init__.py", line 511, in __call__
handler.get(*groups)
File "/home/odessit/Development/Python/Alpha/main.py", line 19, in get
pageContent = os.open(pageUrl, 'r').read()
File "/opt/apis/google_appengine/google/appengine/tools/dev_appserver.py", line 805, in FakeOpen
raise OSError(errno.EPERM, "Operation not permitted", filename)
OSError: [Errno 1] Operation not permitted: 'content_includes/home.inc'
app.yaml:
handlers:
- url: /javascript
static_dir: javascript
- url: /images
static_dir: images
- url: /portfolio
static_dir: portfolio
- url: /.*
script: main.py
A:
os.path.normpath() on "content_includes" is a no-op - normpath just removes double slashes and other denormalizations. What you probably want is to build a path relative to the script, in which case you should do something like os.path.join(os.path.dirname(__file__), 'content_includes', pageName + '.inc').
A:
If you dig into the dev_appserver.py source code and related files, you see that the server does some intricate checking to ensure that you open only files from below your application's root (in fact the rules seem even more complex).
For file access troubles I instrumented that "path permission checking" code from the development server, only to find that I was using absolute paths. We probably should do a patch to appengine to provide better error reporting on that: IIRC the AppServer does not display the offending path but a mangled version of it, which makes debugging difficult.
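Separate from the sandbox restriction, note that os.open is the low-level OS call (it takes integer flags and returns a file descriptor with no .read() method); the built-in open is almost certainly what was intended in the question. A hedged sketch of the corrected read, using a temp file here in place of the GAE content directory:

```python
import os
import tempfile

# Stand-in for content_includes/home.inc (hypothetical content)
tmp_dir = tempfile.mkdtemp()
page_path = os.path.join(tmp_dir, 'home.inc')
with open(page_path, 'w') as f:
    f.write('hello from home')

# The built-in open() returns a file object with .read(); in a GAE handler
# the base directory would come from os.path.dirname(__file__), per the
# first answer above, so the path stays inside the application root.
with open(page_path, 'r') as f:
    page_content = f.read()
print(page_content)  # hello from home
```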
| Reading files in GAE using python | I created a simple python project that serves up a couple of pages. I'm using the 'webapp' framework and django. What I'm trying to do is use one template file, and load 'content files' that contain the actual page text.
When I try to read the content files using os.open, I get the following error:
pageContent = os.open(pageUrl, 'r').read()
OSError: [Errno 1] Operation not permitted: 'content_includes/home.inc' error
If I let the django templating system to read the same file for me, everything works fine!
So the question is What am I doing wrong that django isn't??? The same 'pageUrl' is used.
The code below will give me the error, while if I comment out the first pageContent assignment, everything works fine.
Code:
pageName = "home";
pageUrl = os.path.join(os.path.normpath('content_includes'), pageName + '.inc')
pageContent = os.open(pageUrl, 'r').read()
pageContent=template.render(pageUrl, template_values, debug=True);
template_values = { 'page': pageContent,
'test': "testing my app"
}
Error:
Traceback (most recent call last):
File "/opt/apis/google_appengine/google/appengine/ext/webapp/__init__.py", line 511, in __call__
handler.get(*groups)
File "/home/odessit/Development/Python/Alpha/main.py", line 19, in get
pageContent = os.open(pageUrl, 'r').read()
File "/opt/apis/google_appengine/google/appengine/tools/dev_appserver.py", line 805, in FakeOpen
raise OSError(errno.EPERM, "Operation not permitted", filename)
OSError: [Errno 1] Operation not permitted: 'content_includes/home.inc'
app.yaml:
handlers:
- url: /javascript
static_dir: javascript
- url: /images
static_dir: images
- url: /portfolio
static_dir: portfolio
- url: /.*
script: main.py
| [
"os.path.normpath() on \"content_includes\" is a no-op - normpath just removes double slashes and other denormalizations. What you probably want is to build a path relative to the script, in which case you should do something like os.path.join(os.path.dirname(__file__), 'content_includes', pageName + '.inc').\n",
... | [
2,
0
] | [] | [] | [
"django",
"google_app_engine",
"python"
] | stackoverflow_0003955361_django_google_app_engine_python.txt |
Q:
Why wouldn't GAE store my logging.debug?
It seems logging.debug() doesn't appear in GAE logs, but logging.error() does.
Does anyone have an idea how can I make logging.debug() appear in the GAE logs?
A:
Logging in Python can be set to a different level, so that only a specified level of information appears in the log file. Try changing the logging level on the root logger:
logging.getLogger().setLevel(logging.DEBUG)
A:
I observed that on the SDK-Server debug logging really disappears. In production I get full debug logs. This may be because of the way I call webapp.WSGIApplication:
application = webapp.WSGIApplication([
('/', Homepage)],
debug=True)
Do you also use debug=True? (Actually, I always wondered what exactly it was meant to do.)
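A hedged, self-contained sketch of the level fix from the first answer: levels are set on a logger object (here the root logger), after which debug records pass through to whatever handler GAE attaches:

```python
import logging

# Root logger now admits DEBUG records
logging.getLogger().setLevel(logging.DEBUG)

# isEnabledFor confirms which levels will actually be emitted
print(logging.getLogger().isEnabledFor(logging.DEBUG))  # True
```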
| Why wouldn't GAE store my logging.debug? | It seems logging.debug() doesn't appear in GAE logs, but logging.error() does.
Does anyone have an idea how can I make logging.debug() appear in the GAE logs?
| [
"Logging in Python can be set to a different level, so that only a specified level of information appears in the log file. Try changing the logging level on the root logger:\nlogging.getLogger().setLevel(logging.DEBUG)\n\n",
"I observed that on the SDK-Server debug logging really disappears. In production I get full debug logs. This may be be... | [
1,
0
] | [] | [] | [
"debugging",
"google_app_engine",
"logging",
"python"
] | stackoverflow_0003962423_debugging_google_app_engine_logging_python.txt |
Q:
Can't set full range on QSpinBox
I'm trying to create a QSpinBox that accepts all numbers, but I'm having some trouble with the maximums
sbox = QSpinBox(self)
sbox.setRange(-sys.maxint/88-1, sys.maxint/86)
sbox.setValue(int(setting.value))
I wanted to just use sbox.setRange(-sys.maxint-1, sys.maxint) but then I couldn't enter anything, if I increase the range any more than above the whole spinner freaks out. Anybody knows why?
A:
that accepts all numbers
I assume you mean all integers rather than all numbers?
Remember that although PyQt is written in Python, the underlying Qt library is written in C++ so it most likely is limited to fixed-size integers of a certain width (for example 32-bit or 64-bit). If you try to use numbers close to the limits then some internal calculations in QSpinBox might overflow and wrap around which could explain the unusual behaviour you see.
To allow any integers at all use a QLineEdit and then parse the contents to a Python integer using the int function. This will allow all integers to be entered (you can even go above sys.maxint). You will lose the spin arrows though.
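A hedged illustration of the width limit described above: QSpinBox stores its range in a C++ int, typically 32 bits wide, so on a 64-bit Python build sys.maxsize (sys.maxint in Python 2) far exceeds what the widget can represent, and arithmetic near the limits wraps around:

```python
# Signed 32-bit int limits, the usual width of a C++ int inside Qt
INT_MIN, INT_MAX = -2**31, 2**31 - 1

value = 2**63 - 1          # sys.maxsize on a common 64-bit build
print(value > INT_MAX)     # True: far outside the widget's range

# Simulate the wraparound that overflow produces in 32-bit arithmetic
wrapped = ((value + 2**31) % 2**32) - 2**31
print(wrapped)             # -1
```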
| Can't set full range on QSpinBox | I'm trying to create a QSpinBox that accepts all numbers, but I'm having some trouble with the maximums
sbox = QSpinBox(self)
sbox.setRange(-sys.maxint/88-1, sys.maxint/86)
sbox.setValue(int(setting.value))
I wanted to just use sbox.setRange(-sys.maxint-1, sys.maxint) but then I couldn't enter anything, if I increase the range any more than above the whole spinner freaks out. Anybody knows why?
| [
"\nthat accepts all numbers\n\nI assume you mean all integers rather than all numbers?\nRemember that although PyQt is written in Python, the underlying Qt library is written in C++ so it most likely is limited to fixed-size integers of a certain width (for example 32-bit or 64-bit). If you try to use numbers close... | [
1
] | [] | [] | [
"pyqt4",
"python"
] | stackoverflow_0003994717_pyqt4_python.txt |
Q:
Online job-searching is tedious. Help me automate it
Many job sites have broken searches that don't let you narrow down jobs by experience level. Even when they do, it's usually wrong. This requires you to wade through hundreds of postings that you can't apply for before finding a relevant one, quite tedious. Since I'd rather focus on writing cover letters etc., I want to write a program to look through a large number of postings, and save the URLs of just those jobs that don't require years of experience.
I don't require help writing the scraper to get the html bodies of possibly relevant job posts. The issue is accurately detecting the level of experience required for the job. This should not be too difficult as job posts are usually very explicit about this ("must have 5 years experience in..."), but there may be some issues with overly simple solutions.
In my case, I'm looking for entry-level positions. Often they don't say "entry-level", but inclusion of the words probably means the job should be saved.
Next, I can safely exclude a job that says it requires "5 years" of experience in whatever, so a regex like /\d\syears/ seems reasonable to exclude jobs. But then, I realized some jobs say they'll take 0-2 years of experience, which matches the exclusion regex but is clearly a job I want to take a look at. Hmmm, I can handle that with another regex. But some say "less than 2 years" or "fewer than 2 years". Can handle that too, but it makes me wonder what other patterns I'm not thinking of, and possibly excluding many jobs. That's what brings me here, to find a better way to do this than regexes, if there is one.
I'd like to minimize the false negative rate and save all the jobs that seem like they might not require many years of experience. Does excluding anything that matches /[3-9]\syears|1\d\syears/ seem reasonable? Or is there a better way? Training a bayesian filter maybe?
Edit: There's a similar, but harder problem, which would probably be more useful to solve. There are lots of jobs that just require an "engineering degree", as you just have to understand a few technical things. But searching for "engineering" gives you thousands of jobs, mostly irrelevant.
How do I narrow this down to just those jobs that require any engineering degree, rather than particular degrees, without looking at each myself?
A:
Ok, this answer is probably not going to be helpful -- I will say that up front. But, in my opinion, merely thinking about the problem in this way is enough to get you hired at most places I've worked. My suggestion? Contact the hiring manager at any of the postings in which you have interest, tell them this is what you are doing. Tell them generically what you have coded so far, and ask for assistance in learning the patterns they use when writing their adverts.
If I were on the receiving end of this letter, I think I would invite the person in for an interview.
A:
I developed a good parse and email routine for a couple of job websites when I was looking for work for myself and a couple of friends. I agree with the other posts, this is a great way to look at the problem. Just to drop a little info, I did it mostly in ruby, and used tor proxies and some other methods to make sure that I wouldn't be iced out of the job site. This sort of project is unlike usual scraping as you really can't afford to get kicked off a job board. In any case, I just have one piece of advice: forget about sorting and fine tuning this too intensely. Let the HR department do that for you and get your resume and credentials out everywhere. It's a statistical game, and you want to broadcast yourself and throw that net as widely as possible.
A:
Here's some sample code if you're interested. It's for looking for a flat, not a job, but the concepts should be similar enough. http://github.com/agrimm/Easy-Roommate-parser
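For what it's worth, a hedged sketch of the filtering idea from the question itself: an allow-by-default filter that only rejects postings clearly demanding 3+ years, which keeps the false-negative rate low at the cost of extra manual review. The sample postings and the exact pattern are illustrative assumptions, not from any real job board:

```python
import re

# Reject only clear demands for 3-19 years; everything else is kept for review
reject = re.compile(r'\b(?:[3-9]|1[0-9])\+?\s*years?\b')

postings = [
    "Entry-level role, no experience required",
    "Must have 5 years experience in Java",
    "0-2 years of experience welcome",
    "Requires 10+ years leading teams",
]
kept = [p for p in postings if not reject.search(p)]
print(kept)  # the entry-level and 0-2 years postings survive
```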
| Online job-searching is tedious. Help me automate it | Many job sites have broken searches that don't let you narrow down jobs by experience level. Even when they do, it's usually wrong. This requires you to wade through hundreds of postings that you can't apply for before finding a relevant one, quite tedious. Since I'd rather focus on writing cover letters etc., I want to write a program to look through a large number of postings, and save the URLs of just those jobs that don't require years of experience.
I don't require help writing the scraper to get the html bodies of possibly relevant job posts. The issue is accurately detecting the level of experience required for the job. This should not be too difficult as job posts are usually very explicit about this ("must have 5 years experience in..."), but there may be some issues with overly simple solutions.
In my case, I'm looking for entry-level positions. Often they don't say "entry-level", but inclusion of the words probably means the job should be saved.
Next, I can safely exclude a job that says it requires "5 years" of experience in whatever, so a regex like /\d\syears/ seems reasonable to exclude jobs. But then, I realized some jobs say they'll take 0-2 years of experience, which matches the exclusion regex but is clearly a job I want to take a look at. Hmmm, I can handle that with another regex. But some say "less than 2 years" or "fewer than 2 years". Can handle that too, but it makes me wonder what other patterns I'm not thinking of, and possibly excluding many jobs. That's what brings me here, to find a better way to do this than regexes, if there is one.
I'd like to minimize the false negative rate and save all the jobs that seem like they might not require many years of experience. Does excluding anything that matches /[3-9]\syears|1\d\syears/ seem reasonable? Or is there a better way? Training a bayesian filter maybe?
Edit: There's a similar, but harder problem, which would probably be more useful to solve. There are lots of jobs that just require an "engineering degree", as you just have to understand a few technical things. But searching for "engineering" gives you thousands of jobs, mostly irrelevant.
How do I narrow this down to just those jobs that require any engineering degree, rather than particular degrees, without looking at each myself?
| [
"Ok, this answer is probably not going to be helpful -- I will say that up front. But, in my opinion, merely thinking about the problem in this way is enough to get you hired at most places I've worked. My suggestion? Contact the hiring manager at any of the postings in which you have interest, tell them this is... | [
1,
1,
0
] | [] | [] | [
"nlp",
"perl",
"python",
"regex",
"ruby"
] | stackoverflow_0003048268_nlp_perl_python_regex_ruby.txt |
Q:
Python - Stripping timestamps and username from line in file
I'm writing a script that will print a random line of text from a file into an XCHAT channel. So far it works fine, but I want to add one last bit of functionality.
I have logs with, for example, "Oct 23 12:07:59 (nickname> " appearing before every line of text. I just want to print the parts of the lines that follow the "(nickname>", how can I do this?
__module_name__ = "ran.py"
__module_version__ = "1.0"
__module_description__ = "script to add random text to channel messages"
import xchat
import random
def ran(message):
message = random.choice(open("E:/logs/myfile.log", "r").readlines())
return(message)
def ran_cb(word, word_eol, userdata):
message = ''
message = ran(message)
xchat.command("msg %s %s"%(xchat.get_info('channel'), message))
return xchat.EAT_ALL
xchat.hook_command("ran", ran_cb, help="/ran to use")
A:
If the first > character is exactly the place to split, then try:
toBeIgnored, toBeUsed = line.split('>', 1)
A:
I think > can be used as separator, so:
_, message = message.split('>', 1)
will split the line into 2 parts: the first part (assigned to an underscore since it is not important) with "Oct 23 12:07:59 (nickname" and the second with the text after >
You can use it in ran():
def ran(message):
message = random.choice(open("E:/logs/myfile.log", "r").readlines())
_, message = message.split('>', 1)
return(message)
A:
I would have asked in the comments but I don't have enough rep. Does/Can your user's nicknames contain ">" character? If not, you can use the split command:
message = random.choice(open("E:/logs/myfile.log", "r").readlines())
text = message.split("> ", 1)[1]
return(text)
Or you can use a single line:
message = random.choice(open("E:/logs/myfile.log", "r").readlines()).split("> ", 1)[1]
return(message)
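One more hedged variant: str.partition never raises on lines that lack the delimiter, which the split("> ", 1)[1] indexing above would (IndexError on a malformed log line):

```python
line = "Oct 23 12:07:59 (nickname> hello there"

# partition returns (head, sep, tail); tail is '' if the separator is absent
text = line.partition('> ')[2] or line
print(text)  # hello there

bare = "a line with no nickname prefix"
print(bare.partition('> ')[2] or bare)  # falls back to the whole line
```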
| Python - Stripping timestamps and username from line in file | I'm writing a script that will print a random line of text from a file into an XCHAT channel. So far it works fine, but I want to add one last bit of functionality.
I have logs with, for example, "Oct 23 12:07:59 (nickname> " appearing before every line of text. I just want to print the parts of the lines that follow the "(nickname>", how can I do this?
__module_name__ = "ran.py"
__module_version__ = "1.0"
__module_description__ = "script to add random text to channel messages"
import xchat
import random
def ran(message):
message = random.choice(open("E:/logs/myfile.log", "r").readlines())
return(message)
def ran_cb(word, word_eol, userdata):
message = ''
message = ran(message)
xchat.command("msg %s %s"%(xchat.get_info('channel'), message))
return xchat.EAT_ALL
xchat.hook_command("ran", ran_cb, help="/ran to use")
| [
"If the first > character is exactly the place to split, then try:\ntoBeIgnored, toBeUsed = line.split('>', 1)\n\n",
"I think > can be used as separator, so:\n_, message = message.split('>', 1)\n\nwill split line into 2 part, first part (underscore while it is not important) with \"Oct 23 12:07:59 (nickname\" and... | [
1,
0,
0
] | [] | [] | [
"irc",
"python",
"string",
"text_files"
] | stackoverflow_0003994825_irc_python_string_text_files.txt |
Q:
Python: Adding elements to an dict list or associative array
I'm trying to add elements to a dict list (associative array), but every time it loops, the array overwrites the previous element. So I just end up with an array of size 1 with the last element read. I verified that the keys ARE changing every time.
array=[]
for line in open(file):
result=prog.match(line)
array={result.group(1) : result.group(2)}
any help would be great, thanks =]
A:
Your solution is incorrect; the correct version is:
array={}
for line in open(file):
result=prog.match(line)
array[result.group(1)] = result.group(2)
Issues with your version:
associative arrays are dicts and empty dicts = {}
arrays are list , empty list = []
You are pointing array at a new dictionary every time.
This is like saying:
array={result.group(1) : result.group(2)}
array={'x':1}
array={'y':1}
array={'z':1}
....
array remains a one-element dict
A:
array=[]
for line in open(file):
result=prog.match(line)
array.append({result.group(1) : result.group(2)})
Or:
array={}
for line in open(file):
result=prog.match(line)
array[result.group(1)] = result.group(2)
A:
Maybe even more Pythonic:
with open(filename, 'r') as f:
array = dict(prog.match(line).groups() for line in f)
or, if your prog matches more groups:
with open(filename, 'r') as f:
array = dict(prog.match(line).groups()[:2] for line in f)
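One hedged caveat about the dict(...) generator expression above: prog.match returns None for a line that does not match, and calling .groups() on None raises AttributeError. Filtering unsuccessful matches first keeps it safe; the pattern and lines here are stand-ins:

```python
import re

prog = re.compile(r'(\w+)=(\w+)')          # hypothetical pattern
lines = ["a=1", "not a match", "b=2"]      # stand-in for the file's lines

# Keep only successful matches before unpacking their groups
matches = (prog.match(line) for line in lines)
table = dict(m.groups() for m in matches if m is not None)
print(table)  # {'a': '1', 'b': '2'}
```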
| Python: Adding elements to an dict list or associative array | I'm trying to add elements to a dict list (associative array), but every time it loops, the array overwrites the previous element. So I just end up with an array of size 1 with the last element read. I verified that the keys ARE changing every time.
array=[]
for line in open(file):
result=prog.match(line)
array={result.group(1) : result.group(2)}
any help would be great, thanks =]
| [
"Your solution is incorrect; the correct version is:\narray={}\nfor line in open(file):\n result=prog.match(line)\n array[result.group(1)] = result.group(2)\n\nIssues with your version:\n\nassociative arrays are dicts and empty dicts = {}\narrays are list , empty list = []\nYou are pointing the array to new dicti... | [
6,
0,
0
] | [] | [] | [
"associative_array",
"list",
"python"
] | stackoverflow_0003994874_associative_array_list_python.txt |
Q:
Is there a way to resist unnecessary joins that are only checking id existence when using Django's orm?
For example: if I have a model Person that has a mother field, which is a foreign key, the following is getting to me:
p = Person.object.get(id=1)
if p.mother_id:
print "I have a mother!"
In the above example, we've issued one query. I've tricked Django into not fetching the mother by using the _id field instead of mother.id. But if I was to filter on everyone who doesn't have a mother:
Person.objects.filter(mother=None)
Person.objects.filter(mother__id=None)
Person.objects.filter(mother__isnull=True)
Person.objects.filter(mother__id__isnull=True)
All of these join in the related table unnecessarily.. and I can't reference the _id columns, because they're not fields.. so either of the following fails:
Person.objects.filter(mother_id__isnull=True)
Person.objects.filter(mother_id=None)
Is there a way for me to build a querySet that checks the existence of a value in the foreign key column without incurring the join?
Thanks in advance.
Edit (answered):
Credit goes to Bernd, who commented on Daniel's answer, but it turns out that this workaround does splendidly for returning people without mothers, without issuing the unnecessary join:
Person.objects.exclude(mother__isnull=False)
Edit (more detail):
I should also mention that I've found this behavior actually only seems to rear its head when the FK relationship is nullable. Odd, but true.
A:
I was surprised by this - I thought Person.object.filter(mother=None) would work without an extra join - but on checking it turns out you're right. In fact this has been logged as a bug in Django's ticket tracker.
Unfortunately, the person who logged it - and who (re)wrote most of the Django query code - is no longer actively contributing to Django, so I don't know when it will actually get fixed. You might try one of the patches on that ticket and see if they help.
| Is there a way to resist unnecessary joins that are only checking id existence when using Django's orm? | For example: if I have a model Person that has a mother field, which is a foreign key, the following is getting to me:
p = Person.object.get(id=1)
if p.mother_id:
print "I have a mother!"
In the above example, we've issued one query. I've tricked Django into not fetching the mother by using the _id field instead of mother.id. But if I was to filter on everyone who doesn't have a mother:
Person.objects.filter(mother=None)
Person.objects.filter(mother__id=None)
Person.objects.filter(mother__isnull=True)
Person.objects.filter(mother__id__isnull=True)
All of these join in the related table unnecessarily.. and I can't reference the _id columns, because they're not fields.. so either of the following fails:
Person.objects.filter(mother_id__isnull=True)
Person.objects.filter(mother_id=None)
Is there a way for me to build a querySet that checks the existence of a value in the foreign key column without incurring the join?
Thanks in advance.
Edit (answered):
Credit goes to Bernd, who commented on Daniel's answer, but it turns out that this workaround does splendidly for returning people without mothers, without issuing the unnecessary join:
Person.objects.exclude(mother__isnull=False)
Edit (more detail):
I should also mention that I've found this behavior actually only seems to rear its head when the FK relationship is nullable. Odd, but true.
| [
"I was surprised by this - I thought Person.object.filter(mother=None) would work without an extra join - but on checking it turns out you're right. In fact this has been logged as a bug in Django's ticket tracker.\nUnfortunately, the person who logged it - and who (re)wrote most of the Django query code - is no lo... | [
3
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003992576_django_python.txt |
Q:
Embedding Python thread safe
I'm trying to use Python in a module for an analysis software of vehicle bus systems. For this I have to embed Python in a thread safe manner, since there can be multiple instances of the module witch work independently. I could use a mutex to guard all access to Python and create an unique (python) module for every thread. Obviously this is the easiest way out but comes with the price of not being able to scale across multiple cores. Or I could modify my module to spawn new processes which intern uses Python and connect to them via shared memory. This gives me a performance penalty and costs more time to implement but scales great.
My question: which one do you think makes more sense? Is there any other way to embed Python in a thread-safe way, or even in a way that scales over multiple cores?
Kind regards Moritz
edit: I'm using CPython
A:
If you're CPU bound, Python can only scale to multiple cores using the multiprocessing library. However, if you're I/O bound, then threading is usually sufficient.
If you want easy thread safety, then use Queue for all message passing.
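The Queue suggestion can be sketched like this (my own minimal illustration, not code from the answer; the worker/sentinel scheme is just one common pattern):

```python
import threading

try:
    import queue           # Python 3
except ImportError:
    import Queue as queue  # Python 2, which this question dates from

def worker(inbox, outbox):
    # Queue does all the locking; the worker just blocks on get().
    while True:
        item = inbox.get()
        if item is None:   # sentinel value: shut down
            break
        outbox.put(item * 2)

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for n in (1, 2, 3):
    inbox.put(n)
inbox.put(None)            # tell the worker to stop
t.join()
results = [outbox.get() for _ in range(3)]
```

Because a single worker consumes the queue in FIFO order, the results come back in the order the messages were sent.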
A:
To follow up on my question: I went ahead and implemented it using processes which internally use Python. A good text on why the multiprocessing library does not help can be found here:
http://pkaudio.blogspot.com/2010/04/whey-multiprocessing-doesnt-always-work.html
It was not written by me, but that guy has the same problem as I do. I'm thankful to everybody who tried to help me.
| Embedding Python thread safe | I'm trying to use Python in a module for an analysis software for vehicle bus systems. For this I have to embed Python in a thread-safe manner, since there can be multiple instances of the module which work independently. I could use a mutex to guard all access to Python and create a unique (Python) module for every thread. Obviously this is the easiest way out, but it comes with the price of not being able to scale across multiple cores. Or I could modify my module to spawn new processes which internally use Python and connect to them via shared memory. This gives me a performance penalty and costs more time to implement, but scales well.
My question: which one do you think makes more sense? Is there any other way to embed Python in a thread-safe way, or even in a way that scales over multiple cores?
Kind regards Moritz
edit: I'm using CPython
| [
"If you're CPU bound, Python can only scale to multiple core using the multiprocessing library. However, if you're I/O bound, then threading is usually sufficient.\nIf you want easy thread safety, then use Queue for all message passing. \n",
"To follow-up on my question: I went ahead in implemented it using Proce... | [
3,
0
] | [] | [] | [
"embedded_language",
"embedding",
"multicore",
"python",
"thread_safety"
] | stackoverflow_0003943106_embedded_language_embedding_multicore_python_thread_safety.txt |
Q:
Solving the unicode output in Python
I have written some code which sends queries to Google and returns the query results. Apparently the contents which are retrieved are in Unicode format, so when I put them in a list, for example, and print this list (the whole list together and not member by member), an annoying extra 'u' is always in front of each member of this list. How can I get rid of them? I tried to convert the whole text to ASCII, but because there are some non-ASCII characters (different languages) in the text, it fails. Now, do you know what I should do to get better output? I hope this extra 'u' doesn't cause any trouble. Thanks
A:
Instead of:
>>> print your_list
[u'foo', u'bar']
Use:
>>> print '\n'.join(your_list)
foo
bar
You can use ', ' instead of '\n' as the separator if you prefer to keep it all on one line.
You may also have problems if you are trying to display Unicode characters in the Windows console. If so, you could use for example IDLE which can display Unicode characters. Alternatively you can convert to ASCII and ignore the characters that don't exist in ASCII:
print '\n'.join(x.encode('ascii', 'ignore') for x in your_list)
A:
If you're going to do anything meaningful with your output, you have to decide which output encoding you want. Throwing all those non-ASCII characters away is not even the second-best solution.
Decide on an appropriate output encoding (e.g. for shell output your shell's encoding, for web output your web page's encoding; the best all-rounder is UTF-8) and encode appropriately: ', '.join(x.encode('utf-8') for x in your_list) (En-/Decoding)
| Solving the unicode output in Python | I have written some code which sends queries to Google and returns the query results. Apparently the contents which are retrieved are in Unicode format, so when I put them in a list, for example, and print this list (the whole list together and not member by member), an annoying extra 'u' is always in front of each member of this list. How can I get rid of them? I tried to convert the whole text to ASCII, but because there are some non-ASCII characters (different languages) in the text, it fails. Now, do you know what I should do to get better output? I hope this extra 'u' doesn't cause any trouble. Thanks
| [
"Instead of:\n>>> print your_list\n[u'foo', u'bar']\n\nUse:\n>>> print '\\n'.join(your_list)\nfoo\nbar\n\nYou can use ', ' instead of '\\n' as the separator if you prefer to keep it all on one line.\nYou may also have problems if you are trying to display Unicode characters in the Windows console. If so, you could ... | [
5,
1
] | [] | [] | [
"encoding",
"python",
"unicode",
"unicode_string"
] | stackoverflow_0003994930_encoding_python_unicode_unicode_string.txt |
Q:
For user-based and certificate-based authentication, do I want to use urllib, urllib2, or curl?
A few months ago, I hastily put together a Python program that hit my company's web services API. It worked in three different modes:
1) HTTP with no authentication
2) HTTP with user-name and password authentication
3) HTTPS with client certificate authentication
I got 1) to work with urllib, but ran into problems with 2) and 3). Instead of figuring it out, I ended up calculating the proper command-line parameters to curl, and executing it via os.system().
Now I get to re-write this program with the benefit of experience, and I'm not sure if I should use urllib, urllib2, or just stick with curl.
The urllib documentation mentions:
When performing basic authentication, a FancyURLopener instance
calls its prompt_user_passwd() method. The default implementation
asks the users for the required information on the controlling
terminal. A subclass may override this method to support
more appropriate behavior if needed.
It also mentions the **x509 argument to urllib.URLopener():
Additional keyword parameters, collected in x509, may be used for
authentication of the client when using the https: scheme. The
keywords key_file and cert_file are supported to provide an SSL
key and certificate; both are needed to support client
authentication.
But urllib2 is one greater than urllib, so naturally I want to use it instead. The urllib2 documentation is full of information about authentication handlers that seem to be designed for 2) above, but makes no mention whatsoever of client certificates.
My question: do I want to use urllib, because it appears to support everything I need to achieve? Or should I just stick with curl?
Thanks.
Edit: Maybe my question isn't specific enough, so here's another shot. Can I achieve what I want to do with urllib? Or with urllib2? Or am I forced to use curl out of necessity?
| For user-based and certificate-based authentication, do I want to use urllib, urllib2, or curl? | A few months ago, I hastily put together a Python program that hit my company's web services API. It worked in three different modes:
1) HTTP with no authentication
2) HTTP with user-name and password authentication
3) HTTPS with client certificate authentication
I got 1) to work with urllib, but ran into problems with 2) and 3). Instead of figuring it out, I ended up calculating the proper command-line parameters to curl, and executing it via os.system().
Now I get to re-write this program with the benefit of experience, and I'm not sure if I should use urllib, urllib2, or just stick with curl.
The urllib documentation mentions:
When performing basic authentication, a FancyURLopener instance
calls its prompt_user_passwd() method. The default implementation
asks the users for the required information on the controlling
terminal. A subclass may override this method to support
more appropriate behavior if needed.
It also mentions the **x509 argument to urllib.URLopener():
Additional keyword parameters, collected in x509, may be used for
authentication of the client when using the https: scheme. The
keywords key_file and cert_file are supported to provide an SSL
key and certificate; both are needed to support client
authentication.
But urllib2 is one greater than urllib, so naturally I want to use it instead. The urllib2 documentation is full of information about authentication handlers that seem to be designed for 2) above, but makes no mention whatsoever of client certificates.
My question: do I want to use urllib, because it appears to support everything I need to achieve? Or should I just stick with curl?
Thanks.
Edit: Maybe my question isn't specific enough, so here's another shot. Can I achieve what I want to do with urllib? Or with urllib2? Or am I forced to use curl out of necessity?
| [] | [] | [
"I believe that mechanize is the module you need.\nEDIT: mechanize objects have this method for authentication: add_password(self, url, user, password, realm=None)\n"
] | [
-1
] | [
"python",
"urllib",
"urllib2"
] | stackoverflow_0003982291_python_urllib_urllib2.txt |
Q:
Python easy_install gives [errno13]
I'm trying to install Hookbox but without success; when I call easy_install or
python setup.py install
it gives me [Errno 13] Permission denied: '/usr/local/lib/python2.6/site-packages/test-easy-install-68779.write-test'
When I try to grant write permissions to this derectory it gives
chmod: /usr/local/lib/python2.6/site-packages/: Operation not permitted
is there any way to solve this problem or install Hookbox without easy_install?
A:
You should have used appropriate privileges to install:
sudo python setup.py install
Another option is to use virtualenv to create a isolated environment where you could install
http://pypi.python.org/pypi/virtualenv
Another way is to install somewhere where you have permission.
python setup.py install --home=<dir>
see also the alternate unix installation with option prefix
python setup.py install --prefix=/usr/local
See the details of these options in the docs: http://docs.python.org/install/
If you ask my preference, it would be virtualenv, virtualenvwrapper, pip and yolk to manage external modules. Google for them.
| Python easy_install gives [errno13] | I'm trying to install Hookbox but without success; when I call easy_install or
python setup.py install
it gives me [Errno 13] Permission denied: '/usr/local/lib/python2.6/site-packages/test-easy-install-68779.write-test'
When I try to grant write permissions to this derectory it gives
chmod: /usr/local/lib/python2.6/site-packages/: Operation not permitted
is there any way to solve this problem or install Hookbox without easy_install?
| [
"You should have used appropriate privilege to install\nsudo python setup.py install\n\nAnother option is to use virtualenv to create a isolated environment where you could install\n\nhttp://pypi.python.org/pypi/virtualenv\n\nAnother way is too install some where, where you have permission.\npython setup.py install... | [
7
] | [] | [] | [
"easy_install",
"python",
"ssh"
] | stackoverflow_0003995339_easy_install_python_ssh.txt |
Q:
C Lexical analyzer in python
I'm creating a C lexical analyzer using Python as part of developing a parser. Here in my code I have written some methods for identifying keywords, numbers, operators etc. No error is shown after compiling. While executing I could input a .c file. My output should list all the keywords, identifiers etc. in the input file, but it is not showing anything. Can anyone help me with this? The code is attached.
import sys
import string
delim=['\t','\n',',',';','(',')','{','}','[',']','#','<','>']
oper=['+','-','*','/','%','=','!']
key=["int","float","char","double","bool","void","extern","unsigned","goto","static","class","struct","for","if","else","return","register","long","while","do"]
predirect=["include","define"]
header=["stdio.h","conio.h","malloc.h","process.h","string.h","ctype.h"]
word_list1=""
i=0
j=0
f=0
numflag=0
token=[0]*50
def isdelim(c):
for k in range(0,14):
if c==delim[k]:
return 1
return 0
def isop(c):
for k in range(0,7):
if c==oper[k]:
ch=word_list1[i+1]
i+=1
for j in range(0,6):
if ch==oper[j]:
fop=1
sop=ch
return 1
#ungetc(ch,fp);
return 1
j+=1
return 0;
k+=1
def check(t):
print t
if numflag==1:
print "\n number "+str(t)
return
for k in range(0,2):#(i=0;i<2;i++)
if strcmp(t,predirect[k])==0:
print "\n preprocessor directive "+str(t)
return
for k in range(0,6): #=0;i<6;i++)
if strcmp(t,header[k])==0:
print "\n header file "+str(t)
return
for k in range(0,21): #=0;i<21;i++)
if strcmp(key[k],t)==0:
print "\n keyword "+str(key[k])
return
print "\n identifier \t%s"+str(t)
def skipcomment():
ch=word_list[i+1]
i+=1
if ch=='/':
while word_list1[i]!='\0':
i+=1#ch=getc(fp))!='\0':
elif ch=='*':
while f==0:
ch=word_list1[i]
i+=1
if c=='/':
f=1
f=0
a=raw_input("Enter the file name:")
s=open(a,"r")
str1=s.read()
word_list1=str1.split()
i=0
#print word_list1[i]
for word in word_list1 :
print word_list1[i]
if word_list1[i]=="/":
print word_list1[i]
elif word_list1[i]==" ":
print word_list1[i]
elif word_list1[i].isalpha():
if numflag!=1:
token[j]=word_list1[i]
j+=1
if numflag==1:
token[j]='\0'
check(token)
numflag=0
j=0
f=0
if f==0:
f=1
elif word_list1[i].isalnum():
if numflag==0:
numflag=1
token[j]=word_list1[i]
j+=1
else:
if isdelim(word_list1[i]):
if numflag==1:
token[j]='\0'
check(token)
numflag=0
if f==1:
token[j]='\0'
numflag=0
check(token)
j=0
f=0
print "\n delimiters : "+word_list1[i]
elif isop(word_list1[i]):
if numflag==1:
token[j]='\0'
check(token)
numflag=0
j=0
f=0
if f==1:
token[j]='\0'
j=0
f=0
numflag=0
check(token)
if fop==1:
fop=0
print "\n operator \t"+str(word_list1[i])+str(sop)
else:
print "\n operator \t"+str(c)
elif word_list1[i]=='.':
token[j]=word_list1[i]
j+=1
i+=1
A:
Your code is bad. Try splitting it up into smaller functions that you can test individually. Have you tried debugging the program? Once you find the place that causes the problem, you can come back here and ask a more specific question.
Some more hints. You can implement isdelim much simpler like this:
def isdelim(c):
return c in delim
To compare string for equality, use string1 == string2. strcmp does not exist in Python. I do not know if you are aware that Python is usually interpreted and not compiled. This means that you will get no compiler-error if you call a function that does not exist. The program will only complain at run-time when it reaches the call.
In your function isop you have unreachable code. The lines j += 1 and k += 1 can never be reached as they are right after a return statement.
In Python iterating over a collection is done like this:
for item in collection:
# do stuff with item
These are just some hints. You should really read the Python Tutorial.
A:
def isdelim(c):
if c in delim:
return 1
return 0
You should learn more about Python basics. ATM, your code contains too many ifs and fors.
Try learning it the hard way.
A:
It seems to print out quite a bit of output for me, but the code is pretty hard to follow. I ran it against itself and it errored out like so:
Traceback (most recent call last):
File "C:\dev\snippets\lexical.py", line 92, in <module>
token[j]=word_list1[i]
IndexError: list assignment index out of range
Honestly, this is pretty bad code. You should give the functions better names and don't use magic numbers like this:
for k in range(0,14)
I mean, you have already made a list you can use for the range.
for k in range(len(delim))
Makes slightly more sense.
But you're just trying to determine if c is in the list delim, so just say:
if c in delim
Why are you returning 1 and 0, what do they mean? Why not use True and False.
There are probably several other blatantly obvious problems, like the whole "main" section of the code.
This is not very pythonic:
token=[0]*50
Do you really just mean to say?
token = []
Now it's just an empty list.
Instead of trying to use a counter like this:
token[j]=word_list1[i]
You want to append, like this:
token.append (word_list[i])
I honestly think you've started with too hard a problem.
| C Lexical analyzer in python | I'm creating a C lexical analyzer using Python as part of developing a parser. Here in my code I have written some methods for identifying keywords, numbers, operators etc. No error is shown after compiling. While executing I could input a .c file. My output should list all the keywords, identifiers etc. in the input file, but it is not showing anything. Can anyone help me with this? The code is attached.
import sys
import string
delim=['\t','\n',',',';','(',')','{','}','[',']','#','<','>']
oper=['+','-','*','/','%','=','!']
key=["int","float","char","double","bool","void","extern","unsigned","goto","static","class","struct","for","if","else","return","register","long","while","do"]
predirect=["include","define"]
header=["stdio.h","conio.h","malloc.h","process.h","string.h","ctype.h"]
word_list1=""
i=0
j=0
f=0
numflag=0
token=[0]*50
def isdelim(c):
for k in range(0,14):
if c==delim[k]:
return 1
return 0
def isop(c):
for k in range(0,7):
if c==oper[k]:
ch=word_list1[i+1]
i+=1
for j in range(0,6):
if ch==oper[j]:
fop=1
sop=ch
return 1
#ungetc(ch,fp);
return 1
j+=1
return 0;
k+=1
def check(t):
print t
if numflag==1:
print "\n number "+str(t)
return
for k in range(0,2):#(i=0;i<2;i++)
if strcmp(t,predirect[k])==0:
print "\n preprocessor directive "+str(t)
return
for k in range(0,6): #=0;i<6;i++)
if strcmp(t,header[k])==0:
print "\n header file "+str(t)
return
for k in range(0,21): #=0;i<21;i++)
if strcmp(key[k],t)==0:
print "\n keyword "+str(key[k])
return
print "\n identifier \t%s"+str(t)
def skipcomment():
ch=word_list[i+1]
i+=1
if ch=='/':
while word_list1[i]!='\0':
i+=1#ch=getc(fp))!='\0':
elif ch=='*':
while f==0:
ch=word_list1[i]
i+=1
if c=='/':
f=1
f=0
a=raw_input("Enter the file name:")
s=open(a,"r")
str1=s.read()
word_list1=str1.split()
i=0
#print word_list1[i]
for word in word_list1 :
print word_list1[i]
if word_list1[i]=="/":
print word_list1[i]
elif word_list1[i]==" ":
print word_list1[i]
elif word_list1[i].isalpha():
if numflag!=1:
token[j]=word_list1[i]
j+=1
if numflag==1:
token[j]='\0'
check(token)
numflag=0
j=0
f=0
if f==0:
f=1
elif word_list1[i].isalnum():
if numflag==0:
numflag=1
token[j]=word_list1[i]
j+=1
else:
if isdelim(word_list1[i]):
if numflag==1:
token[j]='\0'
check(token)
numflag=0
if f==1:
token[j]='\0'
numflag=0
check(token)
j=0
f=0
print "\n delimiters : "+word_list1[i]
elif isop(word_list1[i]):
if numflag==1:
token[j]='\0'
check(token)
numflag=0
j=0
f=0
if f==1:
token[j]='\0'
j=0
f=0
numflag=0
check(token)
if fop==1:
fop=0
print "\n operator \t"+str(word_list1[i])+str(sop)
else:
print "\n operator \t"+str(c)
elif word_list1[i]=='.':
token[j]=word_list1[i]
j+=1
i+=1
| [
"Your code is bad. Try splitting it up into smaller functions that you can test individually. Have you tried debugging the program? Once you find the place that causes the problem, you can come back here and ask a more specific question.\nSome more hints. You can implement isdelim much simpler like this:\ndef isdel... | [
1,
1,
0
] | [] | [] | [
"lexical_analysis",
"parsing",
"python"
] | stackoverflow_0003995439_lexical_analysis_parsing_python.txt |
Q:
Pylons: accidentally typed 'setup.py install', how can I fix?
While working on a Pylons app, I accidentally typed
python setup.py install
in the home directory of a project, instead of what I meant to type, namely
python setup.py egg_info
Oops. It looks like the Pylons app has now been installed as a Python package. Whenever I make changes to the project, they don't get propagated unless I re-run install, which is annoying.
How can I fix this?
A:
delete the related content from the dist-packages/site-packages and .egg info files manually
| Pylons: accidentally typed 'setup.py install', how can I fix? | While working on a Pylons app, I accidentally typed
python setup.py install
in the home directory of a project, instead of what I meant to type, namely
python setup.py egg_info
Oops. It looks like the Pylons app has now been installed as a Python package. Whenever I make changes to the project, they don't get propagated unless I re-run install, which is annoying.
How can I fix this?
| [
"delete the related content from the dist-packages/site-packages and .egg info files manually\n"
] | [
2
] | [] | [] | [
"pylons",
"python"
] | stackoverflow_0003995837_pylons_python.txt |
Q:
Stateless pagination in CouchDB?
Most of the research I've seen on pagination with CouchDB suggests that what you need to do is take the first ten (or however many) items from your view, then record the last document's docid and pass it on to the next page. Unfortunately, I can see a few glaring issues with that method.
It apparently makes it impossible to skip around within the set of pages (if someone jumps directly to page 100, you would have to run the queries for pages 2-99 so you would know how to load page 100).
It requires you to pass around possibly a lot of state information between your pages.
It's difficult to properly code.
Unfortunately, my research has shown that using skip develops considerable slowdown for datasets 5000 records or larger, and would be positively crippling once you reached anything really huge (going to page 20000 with 10 records to a page would take about 20 seconds - and yes, there are datasets that big in production). So that's not really an option.
So, what I'm asking is, is there an efficient way to paginate view results in CouchDB that can get all the items from an arbitrary page? (I'm using couchdb-python, but hopefully there isn't anything about this that would be client-dependent.)
A:
I'm new to CouchDB, but I think I might be able to help. I read the following from CouchDB: The Definitive Guide:
One drawback of the linked list style pagination is that... jumping to a specific page doesn’t really work... If you really do need jump to page over the full range of documents... you can still maintain an integer value index as the view index and have a hybrid approach at solving pagination. — http://books.couchdb.org/relax/receipts/pagination
If I'm reading that right, the approach in your case is going to be:
Embed a numeric sequence into your document set.
Extract that numeric sequence to a numeric view index.
Use arithmetic to calculate the correct start/end numeric keys for your arbitrary page.
For step 1, you need to actually add something like "page_seq" as a field to your document. I don't have a specific recommendation on how you obtain this number, and am curious to know what people think. For this scheme to work, it has to increment by exactly 1 for each new record, so RDBMS sequences are probably out (the ones I'm familiar with may skip numbers).
For step 2, you'd write a view with a map function that's something like this (in Javascript):
function(doc) {
    emit(doc.page_seq, doc);
}
For step 3, you'd write your query something like this (assuming the page_seq and page numbering sequences start at 1):
results = db.view("name_of_view")
page_size = ... # say, 20
page_no = ... # 1 = page 1, 2 = page 2, etc.
begin = ((page_no - 1) * page_size) + 1
end = begin + page_size
my_page = results[begin:end]
and then you can iterate through my_page.
A clear drawback of this is that page_seq assumes you're not filtering the data set for your view, and you'll quickly run into trouble if you're trying to get this to work with an arbitrary query.
Comments/improvements welcome.
A:
We have solved this problem by using CouchDB Lucene for search listings.
The 0.6 Snapshot is stable enough you should try it :
CouchDB Lucene repository
| Stateless pagination in CouchDB? | Most of the research I've seen on pagination with CouchDB suggests that what you need to do is take the first ten (or however many) items from your view, then record the last document's docid and pass it on to the next page. Unfortunately, I can see a few glaring issues with that method.
It apparently makes it impossible to skip around within the set of pages (if someone jumps directly to page 100, you would have to run the queries for pages 2-99 so you would know how to load page 100).
It requires you to pass around possibly a lot of state information between your pages.
It's difficult to properly code.
Unfortunately, my research has shown that using skip develops considerable slowdown for datasets 5000 records or larger, and would be positively crippling once you reached anything really huge (going to page 20000 with 10 records to a page would take about 20 seconds - and yes, there are datasets that big in production). So that's not really an option.
So, what I'm asking is, is there an efficient way to paginate view results in CouchDB that can get all the items from an arbitrary page? (I'm using couchdb-python, but hopefully there isn't anything about this that would be client-dependent.)
| [
"I'm new to CouchDB, but I think I might be able to help. I read the following from CouchDB: The Definitive Guide:\n\nOne drawback of the linked list style pagination is that... jumping to a specific page doesn’t really work... If you really do need jump to page over the full range of documents... you can still mai... | [
3,
1
] | [] | [] | [
"couchdb",
"pagination",
"python"
] | stackoverflow_0003081721_couchdb_pagination_python.txt |
Q:
How can I call a function with delay in python?
I have a slider and I want to call a specific function only when the interaction is complete. The idea is to call a function after 500ms (and not before). If the slider continues to move, then the call is canceled. In other words, if the slider "rests" for more than 500ms, then the function is called.
Thanks
Update
#slider
def updateValue(self):
#self._job = None
#self.preview.updateContourValue(float(self.slider.get() ))
print "updated value"
timer = Timer(5.0, updateValue)
def sliderCallback(self):
timer.cancel()
timer.start()
A:
Patrick: See this thread - How to create a timer using tkinter?
You can use Threading.Timer to do this. It has a cancel method that you can use to cancel it before it runs.
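As an illustration of the Timer-with-cancel idea, here is a sketch of my own (the Debouncer class is not from the linked thread). Note that a given Timer instance can only be started once, so a fresh one must be created on every event:

```python
import threading
import time

class Debouncer(object):
    """Run `func` only after `delay` seconds pass with no further events."""

    def __init__(self, delay, func):
        self.delay = delay
        self.func = func
        self._timer = None

    def poke(self):
        # A Timer can only be started once, so cancel the pending one
        # and create a fresh Timer on every event.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.delay, self.func)
        self._timer.start()

calls = []
d = Debouncer(0.1, lambda: calls.append("fired"))
for _ in range(5):    # rapid slider movement: the timer keeps resetting
    d.poke()
    time.sleep(0.02)
time.sleep(0.3)       # the slider "rests": the one pending call fires
```

In the slider case, `poke()` would go in the slider's callback, so the wrapped function fires only once the slider has rested for the full delay.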
| How can I call a function with delay in python? | I have a slider and I want to call a specific function only when the interaction is complete. The idea is to call a function after 500ms (and not before). If the slider continues to move, then the call is canceled. In other words, if the slider "rests" for more than 500ms, then the function is called.
Thanks
Update
#slider
def updateValue(self):
#self._job = None
#self.preview.updateContourValue(float(self.slider.get() ))
print "updated value"
timer = Timer(5.0, updateValue)
def sliderCallback(self):
timer.cancel()
timer.start()
| [
"Patrick: See this thread - How to create a timer using tkinter?\nYou can use Threading.Timer to do this. It has a cancel method that you can use to cancel it before it runs.\n"
] | [
2
] | [] | [] | [
"python"
] | stackoverflow_0003996083_python.txt |
Q:
How can I trigger and listen for events in python?
How can I trigger and listen for events in Python?
I need a practical example..
thanks
Update
#add Isosurface button
def addIso():
#trigger event
self.addButton = tk.Button(self.leftFrame, text="Add", command=addIso) #.grid(column=3, row=1)
self.addButton.pack(in_=self.leftFrame, side="right", pady=2)
A:
There's a bind function for Tkinter objects that takes two parameters: the first is a string representing the name of the event you want to listen for (probably "<Button-1>" in this case), and the second is a method which takes the triggering event as a parameter.
Example:
def addButton_click(event):
print 'button clicked'
self.addButton.bind("<Button-1>", addButton_click)
I believe this will still work even if used from another class:
class X:
def addButton_click(self, event):
print 'button clicked'
...
inst = X()
self.addButton.bind("<Button-1>", inst.addButton_click)
A:
Something like this:
#add Isosurface button
def addIso(event):
#trigger event
self.addButton = tk.Button(self.leftFrame, text="Add") #.grid(column=3, row=1)
self.addButton.pack(in_=self.leftFrame, side="right", pady=2)
self.addButton.bind("<Button-1>", addIso)
From: http://www.bembry.org/technology/python/notes/tkinter_3.php
A:
Based on your comments to some of the existing answers, I think what you want is something like the pubsub module. In the context of Tkinter, events are single purpose -- one event fires on one widget, and some handler handles the event (though there could be multiple handlers involved).
What it sounds like you want is more of a broadcast model -- your widget says "hey, the user did something" and any other module can register interest and get notifications without knowing specifically what widget or low level event caused that to happen.
I use this in an app that supports plugins. Plugins can say something like "call my 'open' method when the user opens a new file". They don't care how the user does it (ie: whether it was from the File menu, or a toolbar icon or a shortcut), only that it happened.
In this case, you would configure your button to call a specific method, typically in the same module that creates the button. That method would then use the pubsub module to publish some generic sort of event that other modules listen for and respond to.
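A tiny hand-rolled sketch of that broadcast model (this is not the real pubsub module's API; names like PubSub, subscribe and the "file.opened" topic are my own):

```python
class PubSub(object):
    """Hand-rolled broadcast bus; not the real pubsub module's API."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, **data):
        # Every listener for this topic is notified, without knowing
        # which widget or low-level event produced the message.
        for handler in self._subscribers.get(topic, []):
            handler(**data)

bus = PubSub()
opened = []
bus.subscribe("file.opened", lambda path: opened.append(path))

# Menu item, toolbar icon and keyboard shortcut can all publish the
# same topic; the plugin only cares that a file was opened.
bus.publish("file.opened", path="notes.txt")
```

The button's command callback would call `bus.publish(...)`, and plugins register interest with `bus.subscribe(...)` without ever touching the widget.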
A:
If you are using IronPython with Windows Presentation Foundation (WPF), you can find pyevent.py in the Ironpython\Tutorial directory. This lets you write stuff like:
import clr
clr.AddReferenceByPartialName("PresentationFramework")
import System
import pyevent
class myclass(object):
def __init__(self):
self._PropertyChanged, self._OnPropertyChanged = pyevent.make_event()
self.initialize()
def add_PropertyChanged(self, handler):
self._PropertyChanged += handler
def remove_PropertyChanged(self, handler):
self._PropertyChanged -= handler
def raiseAPropertyChangedEvent(self, name):
self._OnPropertyChanged(self, System.ComponentModel.PropertyChangedEventArgs(name))
| How can I trigger and listen for events in python? | How can I trigger and listen for events in python ?
I need a practical example..
thanks
Update
#add Isosurface button
def addIso():
#trigger event
self.addButton = tk.Button(self.leftFrame, text="Add", command=addIso) #.grid(column=3, row=1)
self.addButton.pack(in_=self.leftFrame, side="right", pady=2)
| [
"There's a bind function for TKinter objects that takes two parameters: the first is a string representing the name of the event you want to listen for (probably \"\" in this case), and the second is a method which takes the triggering event as a parameter.\nExample:\ndef addButton_click(event):\n print 'button ... | [
1,
1,
1,
0
] | [] | [] | [
"python",
"tkinter"
] | stackoverflow_0003969947_python_tkinter.txt |
Q:
Python Mechanize + GAEpython code
I am aware of previous questions regarding mechanize + Google App Engine,
What pure Python library should I use to scrape a website?
and Mechanize and Google App Engine.
Also there is some code here, which I cannot get to work on app engine, throwing
File "D:\data\eclipse-php\testpy4\src\mechanize\_http.py", line 43, in socket._fileobject("fake socket", close=True)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\dist\socket.py", line 42, in _fileobject
fp.fileno = lambda: None
AttributeError: 'str' object has no attribute 'fileno'
INFO 2009-12-14 09:37:50,405 dev_appserver.py:3178] "GET / HTTP/1.1" 500 -
Is anybody willing to share their working mechanize + appengine code?
A:
I have solved this problem, just change the code of mechanize._http.py, about line 43,
from:
try:
socket._fileobject("fake socket", close=True)
except TypeError:
# python <= 2.4
create_readline_wrapper = socket._fileobject
else:
def create_readline_wrapper(fh):
return socket._fileobject(fh, close=True)
to:
try:
# fixed start -- fixed for gae
class x:
pass
# the x should be an object, not a string,
# This is the key
socket._fileobject(x, close=True)
# fixed ended
except TypeError:
# python <= 2.4
create_readline_wrapper = socket._fileobject
else:
def create_readline_wrapper(fh):
return socket._fileobject(fh, close=True)
A:
I managed to get mechanize code that runs on GAE, many thanks to
MStodd
from the GAEMechanize project (http://code.google.com/p/gaemechanize/).
If anybody needs the code, you can contact MStodd!
PS: the code is not on Google Code, so you have to contact the owner.
Cheers
don
A:
I've uploaded the source of the gaemechanize project to a new project: http://code.google.com/p/gaemechanize2/
Insert usual caveats.
| Python Mechanize + GAEpython code | I am aware of previous questions regarding mechanize + Google App Engine,
What pure Python library should I use to scrape a website?
and Mechanize and Google App Engine.
Also there is some code here, which I cannot get to work on app engine, throwing
File "D:\data\eclipse-php\testpy4\src\mechanize\_http.py", line 43, in socket._fileobject("fake socket", close=True)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\dist\socket.py", line 42, in _fileobject
fp.fileno = lambda: None
AttributeError: 'str' object has no attribute 'fileno'
INFO 2009-12-14 09:37:50,405 dev_appserver.py:3178] "GET / HTTP/1.1" 500 -
Is anybody willing to share their working mechanize + appengine code?
| [
"I have solved this problem, just change the code of mechanize._http.py, about line 43, \nfrom:\ntry:\n socket._fileobject(\"fake socket\", close=True)\nexcept TypeError:\n # python <= 2.4\n create_readline_wrapper = socket._fileobject\nelse:\n def create_readline_wrapper(fh):\n return socket._fi... | [
10,
1,
0
] | [] | [] | [
"google_app_engine",
"mechanize_python",
"python"
] | stackoverflow_0001902079_google_app_engine_mechanize_python_python.txt |
Q:
else clause in try statement... what is it good for
Possible Duplicate:
Python try-else
Coming from a Java background, I don't quite get what the else clause is good for.
According to the docs
It is useful for code that must be
executed if the try clause does not
raise an exception.
But why not put the code after the try block? It seems I'm missing something important here...
A:
The else clause is useful specifically because you then know that the code in the try suite was successful. For instance:
for arg in sys.argv[1:]:
try:
f = open(arg, 'r')
except IOError:
print 'cannot open', arg
else:
print arg, 'has', len(f.readlines()), 'lines'
f.close()
You can perform operations on f safely, because you know that its assignment succeeded. If the code was simply after the try ... except, you may not have an f.
A:
Consider
try:
a = 1/0
except ZeroDivisionError:
print "Division by zero not allowed."
else:
print "In this universe, division by zero is allowed."
What would happen if you put the second print outside of the try/except/else block?
A:
It is for code you want to execute only when no exception was raised.
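The behaviour can be pinned down with a small runnable sketch: the else branch runs only when the try body completed without raising, and anything raised in the else branch itself is not caught by the except clauses above it.

```python
def classify(n):
    try:
        result = 10.0 / n
    except ZeroDivisionError:
        return "division failed"
    else:
        # reached only if the try body raised nothing
        return "got %s" % result

print(classify(2))   # got 5.0
print(classify(0))   # division failed
```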
| else clause in try statement... what is it good for |
Possible Duplicate:
Python try-else
Coming from a Java background, I don't quite get what the else clause is good for.
According to the docs
It is useful for code that must be
executed if the try clause does not
raise an exception.
But why not put the code after the try block? It seems I'm missing something important here...
| [
"The else clause is useful specifically because you then know that the code in the try suite was successful. For instance:\nfor arg in sys.argv[1:]:\n try:\n f = open(arg, 'r')\n except IOError:\n print 'cannot open', arg\n else:\n print arg, 'has', len(f.readlines()), 'lines'\n ... | [
19,
3,
3
] | [] | [] | [
"exception_handling",
"python"
] | stackoverflow_0003996329_exception_handling_python.txt |
Q:
Python with statement - is there a need for old-style file handling any more?
With having the with statement, is there ever a need to open a file/check for exceptions/do manual closing of resources, like in
try:
f = open('myfile.txt')
for line in f:
print line
except IOError:
print 'Could not open/read file'
finally:
f.close()
A:
Your current code tries to handle the exception of the file not being found, or of insufficient access permissions etc., which a with open(file) as f: block wouldn't have done.
Also, in this case, the finally: block would have raised a NameError since f wouldn't have been defined.
In a with block, any exception (of whatever kind, maybe a division by zero in your code) that occurs within the block will still be raised, but even if you don't handle it, your file will always be closed properly. That's something entirely different.
What you want is probably:
try:
with open("myfile.txt") as f:
do_Stuff() # even if this raises an exception, f will be closed.
except IOError:
print "Couldn't open/read myfile.txt"
| Python with statement - is there a need for old-style file handling any more? | With having the with statement, is there ever a need to open a file/check for exceptions/do manual closing of resources, like in
try:
f = open('myfile.txt')
for line in f:
print line
except IOError:
print 'Could not open/read file'
finally:
f.close()
| [
"Your current code tries to handle the exception of the file not being found, or of insufficient access permissions etc., which a with open(file) as f: block wouldn't have done.\nAlso, in this case, the finally: block would have raised a NameError since f wouldn't have been defined.\nIn a with block, any exception ... | [
5
] | [] | [] | [
"exception_handling",
"python",
"with_statement"
] | stackoverflow_0003996358_exception_handling_python_with_statement.txt |
Q:
Python - Non-matching regex
I have the following regex:
regex = compile("((?P<lastyear>[\dBFUPR]+)/)*((?P<lastseason>[\dBFUPR]+))*(^|-(?P<thisseason>[\dBFUPR]*))")
Which I am using to process horse racing form strings. Sometimes a horse's form will look like this "1234-" meaning that it has not raced yet this season (there are no numbers to the right of the "-").
Currently, my regex will match "" at the end of such a form string in the thisseason group. I do not want this behaviour. I want the group to be None in such a case. i.e.
match = regex.match("1234-")
print match.group("thisseason") #None
Examples
string = "1234/123-12"
match.group("lastyear") #1234
match.group("lastseason") #123
match.group("thisseason") #12
string = "00999F"
match.group("lastyear") #None
match.group("lastseason") #None
match.group("thisseason") #00999F
string = "12-3456"
match.group("lastyear") #None
match.group("lastseason") #12
match.group("thisseason") #3456
A:
This works:
>>> regex = re.compile(r'(?:(?P<lastyear>[\dBFUPR]+)/)?(?:(?P<lastseason>[\dBFUPR]+)-)?(?P<thisseason>[\dBFUPR]+)?')
>>> regex.match("1234/123-12").groupdict()
{'thisseason': '12', 'lastyear': '1234', 'lastseason': '123'}
>>> regex.match("00999F").groupdict()
{'thisseason': '00999F', 'lastyear': None, 'lastseason': None}
>>> regex.match("12-").groupdict()
{'thisseason': None, 'lastyear': None, 'lastseason': '12'}
>>> regex.match("12-3456").groupdict()
{'thisseason': '3456', 'lastyear': None, 'lastseason': '12'}
| Python - Non-matching regex | I have the following regex:
regex = compile("((?P<lastyear>[\dBFUPR]+)/)*((?P<lastseason>[\dBFUPR]+))*(^|-(?P<thisseason>[\dBFUPR]*))")
Which I am using to process horse racing form strings. Sometimes a horse's form will look like this "1234-" meaning that it has not raced yet this season (there are no numbers to the right of the "-").
Currently, my regex will match "" at the end of such a form string in the thisseason group. I do not want this behaviour. I want the group to be None in such a case. i.e.
match = regex.match("1234-")
print match.group("thisseason") #None
Examples
string = "1234/123-12"
match.group("lastyear") #1234
match.group("lastseason") #123
match.group("thisseason") #12
string = "00999F"
match.group("lastyear") #None
match.group("lastseason") #None
match.group("thisseason") #00999F
string = "12-3456"
match.group("lastyear") #None
match.group("lastseason") #12
match.group("thisseason") #3456
| [
"This works:\n>>> regex = re.compile(r'(?:(?P<lastyear>[\\dBFUPR]+)/)?(?:(?P<lastseason>[\\dBFUPR]+)-)?(?P<thisseason>[\\dBFUPR]+)?')\n>>> regex.match(\"1234/123-12\").groupdict()\n{'thisseason': '12', 'lastyear': '1234', 'lastseason': '123'}\n>>> regex.match(\"00999F\").groupdict()\n{'thisseason': '00999F', 'lasty... | [
0
] | [] | [] | [
"match",
"python",
"regex"
] | stackoverflow_0003996391_match_python_regex.txt |
Q:
Is it possible to modify the return value of a function without defining new function in python?
def foo(a, b, c = 0):
return a+b
I have dozens of functions like 'foo', which all have different argument numbers and names. Is there a common way that I can get the return values of these functions and do just a single extra operation like pformat to them?
Yes I can just generate a new function like the following:
func = ... # func can be got using getattr by name
def wrapper(*arg, **kw):
data = func(*arg, **kw)
return pprint.pformat(data)
return wrapper
But then the new function 'wrapper' is different to the old one 'func', for example, in argument number, 'wrapper' has only 2 args--'arg' and 'kw', but 'func' may have many args, like 'a', 'b', 'c'.
I just want to play with the return value; everything else should stay the same. Is it possible?
Thanks!
Update
Finally this problem was solved using decorator module and the following patch:
--- /home/jaime/cache/decorator-3.2.0/src/decorator.py 2010-05-22 23:53:46.000000000 +0800
+++ decorator.py 2010-10-28 14:55:11.511140589 +0800
@@ -66,9 +66,12 @@
self.name = '_lambda_'
self.doc = func.__doc__
self.module = func.__module__
- if inspect.isfunction(func):
+ if inspect.isfunction(func) or inspect.ismethod(func):
argspec = inspect.getargspec(func)
self.args, self.varargs, self.keywords, self.defaults = argspec
+ if inspect.ismethod(func):
+ self.args = self.args[1:] # Remove the useless 'self' arg
+ argspec = inspect.ArgSpec(self.args, self.varargs, self.keywords, self.defaults)
for i, arg in enumerate(self.args):
setattr(self, 'arg%d' % i, arg)
self.signature = inspect.formatargspec(
This patch allows you to decorate bound methods; it just throws the first 'self' argument away. Usage of decorator.decorator stays the same, and no bad effects have been found so far.
example code:
def __getattr__(self, attr):
def pformat_wrapper(f, *args, **kw):
data = f(*args, **kw)
return pprint.pformat(data, indent = 4)
method = getattr(self.ncapi, attr)
return decorator(pformat_wrapper, method) # Signature preserving decorating
jaime@westeros:~/bay/dragon.testing/tests$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import decorator
>>> class A:
... def f(self):
... pass
...
>>> a = A()
>>> a.f
<bound method A.f of <__main__.A instance at 0xb774a20c>>
>>> def hello(f, *args, **kw):
... print 'hello'
... return f(*args, **kw)
...
>>> f1 = decorator.decorator(hello, a.f)
>>> f1()
hello
>>>
A:
About your problem:

"But then the new function 'wrapper' is different to the old one 'func', for example, in argument number, 'wrapper' has only 2 args--'arg' and 'kw', but 'func' may have many args, like 'a', 'b', 'c'."

you can use the decorator module, which enables you to create signature-preserving decorators.
A:
Decorators.
from functools import wraps
def pformat_this( someFunc ):
@wraps( someFunc )
def wrapper(*arg, **kw):
data = someFunc(*arg, **kw)
return pprint.pformat(data)
return wrapper
@pformat_this
def foo(a, b, c = 0):
return a+b
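For completeness, functools.wraps copies the wrapped function's metadata (__name__, __doc__, __module__; on Python 3 it also sets __wrapped__, which lets inspect.signature recover the original signature). A runnable sketch of the pattern above, with illustrative names:

```python
import functools
import pprint

def pformat_result(func):
    @functools.wraps(func)            # copies __name__, __doc__, etc. onto wrapper
    def wrapper(*args, **kwargs):
        return pprint.pformat(func(*args, **kwargs))
    return wrapper

@pformat_result
def add(a, b, c=0):
    """Add a and b."""
    return a + b

print(add.__name__)   # 'add' -- the metadata survives the wrapping
print(add(1, 2))      # '3'   -- the return value went through pformat
```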
A:
Decorators are essentially the same as what you do not want.
Getting curious, I looked into this for python 2.7, and found that there is a wealth of meta information available for user defined functions under Callable types -> User-defined functions. Unfortunately, there is nothing about the returned value.
There is also an internal type you can access through the function, a code object, on the same page, under Internal types -> Code objects. Even though these internal types are essentially provided with no promises to API stability, there doesn't seem to be anything writable with regard to the returned value there either.
I have a feeling that if there were anything you could do directly, it would be here. Hopefully someone else has better luck for you.
| Is it possible to modify the return value of a function without defining new function in python? | def foo(a, b, c = 0):
return a+b
I have dozens of functions like 'foo', which all have different argument numbers and names. Is there a common way that I can get the return values of these functions and do just a single extra operation like pformat to them?
Yes I can just generate a new function like the following:
func = ... # func can be got using getattr by name
def wrapper(*arg, **kw):
data = func(*arg, **kw)
return pprint.pformat(data)
return wrapper
But then the new function 'wrapper' is different to the old one 'func', for example, in argument number, 'wrapper' has only 2 args--'arg' and 'kw', but 'func' may have many args, like 'a', 'b', 'c'.
I just want to play with the return value; everything else should stay the same. Is it possible?
Thanks!
Update
Finally this problem was solved using decorator module and the following patch:
--- /home/jaime/cache/decorator-3.2.0/src/decorator.py 2010-05-22 23:53:46.000000000 +0800
+++ decorator.py 2010-10-28 14:55:11.511140589 +0800
@@ -66,9 +66,12 @@
self.name = '_lambda_'
self.doc = func.__doc__
self.module = func.__module__
- if inspect.isfunction(func):
+ if inspect.isfunction(func) or inspect.ismethod(func):
argspec = inspect.getargspec(func)
self.args, self.varargs, self.keywords, self.defaults = argspec
+ if inspect.ismethod(func):
+ self.args = self.args[1:] # Remove the useless 'self' arg
+ argspec = inspect.ArgSpec(self.args, self.varargs, self.keywords, self.defaults)
for i, arg in enumerate(self.args):
setattr(self, 'arg%d' % i, arg)
self.signature = inspect.formatargspec(
This patch allows you to decorate bound methods; it just throws the first 'self' argument away. Usage of decorator.decorator stays the same, and no bad effects have been found so far.
example code:
def __getattr__(self, attr):
def pformat_wrapper(f, *args, **kw):
data = f(*args, **kw)
return pprint.pformat(data, indent = 4)
method = getattr(self.ncapi, attr)
return decorator(pformat_wrapper, method) # Signature preserving decorating
jaime@westeros:~/bay/dragon.testing/tests$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import decorator
>>> class A:
... def f(self):
... pass
...
>>> a = A()
>>> a.f
<bound method A.f of <__main__.A instance at 0xb774a20c>>
>>> def hello(f, *args, **kw):
... print 'hello'
... return f(*args, **kw)
...
>>> f1 = decorator.decorator(hello, a.f)
>>> f1()
hello
>>>
| [
"About your problem :\n\n\n\"But then the new function 'wrapper' is different to the old one 'func', for example, >> in argument number, 'wrapper' has only 2 args--'arg' and 'kw', but 'func' may have many >> args, like 'a', 'b', 'c'.\" \n\n\nyou can use the decorator module which enable you to create a signature-pr... | [
4,
2,
2
] | [] | [] | [
"python"
] | stackoverflow_0003995815_python.txt |
Q:
Updating Feedparser Feeds in Django
So I'm writing a basic feed aggregator/popurls clone site in Django and having trouble getting the feeds to update.
For each feed source, I have a separate app to parse and return the requested info, for the sake of simplicity let's say it's just getting the feed title. eg:
#feed source xyz views.py
from django.http import HttpResponse
import feedparser
def get_feed_xyz():
xyz_feed = "http://www.xyz.com/feed.xml"
feed = feedparser.parse(xyz_feed)
info = []
for entry in feed.entries:
info.append(entry.title)
return info
I then have an aggregator app that aggregates all the links.
#aggregator views.py
from django.shortcuts import render_to_response
from site.source.views import get_feed_xyz
#etc
aggregate = get_feed_xyz() # + other feeds etc
def index(request):
return render_to_response('template.html',{'aggregate' : aggregate})
My problem is in updating the feeds... they will not update unless I restart Apache! I've tried making a feed_update.py that runs the get_feed_xyz() command, but the site still doesn't update. I think I'm missing some essential part of how Django works here, because I simply can't figure it out.
A:
aggregate is a global variable, so the function get_feed_xyz() is only called when the module loads. You will need to update it inside index().
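The import-time-vs-call-time difference is easy to demonstrate outside Django; get_feed here is a stand-in counter for get_feed_xyz(), not the real feed code.

```python
calls = {"n": 0}

def get_feed():                      # stand-in for get_feed_xyz()
    calls["n"] += 1
    return "feed fetched %d time(s)" % calls["n"]

aggregate = get_feed()               # module level: evaluated once, at import

def index():
    return get_feed()                # inside the view: evaluated on every call

print(aggregate)   # feed fetched 1 time(s)
print(index())     # feed fetched 2 time(s)
print(index())     # feed fetched 3 time(s)
print(aggregate)   # still the stale first result
```

This is exactly why the site only refreshed when Apache restarted: restarting reloads the module, re-running the module-level assignment.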
| Updating Feedparser Feeds in Django | So I'm writing a basic feed aggregator/popurls clone site in Django and having trouble getting the feeds to update.
For each feed source, I have a separate app to parse and return the requested info, for the sake of simplicity let's say it's just getting the feed title. eg:
#feed source xyz views.py
from django.http import HttpResponse
import feedparser
def get_feed_xyz():
xyz_feed = "http://www.xyz.com/feed.xml"
feed = feedparser.parse(xyz_feed)
info = []
for entry in feed.entries:
info.append(entry.title)
return info
I then have an aggregator app that aggregates all the links.
#aggregator views.py
from django.shortcuts import render_to_response
from site.source.views import get_feed_xyz
#etc
aggregate = get_feed_xyz() # + other feeds etc
def index(request):
return render_to_response('template.html',{'aggregate' : aggregate})
My problem is in updating the feeds... they will not update unless I restart Apache! I've tried making a feed_update.py that runs the get_feed_xyz() command, but the site still doesn't update. I think I'm missing some essential part of how Django works here, because I simply can't figure it out.
| [
"aggregate is a global variable, so the function get_feed_xyz() is only called when the module loads. You will need to update it inside index().\n"
] | [
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003994437_django_python.txt |
Q:
Python urllib urlencode problem with æøå
How can I urlencode a string with special chars æøå?
ex.
urllib.urlencode('http://www.test.com/q=testæøå')
I get this error :(..
not a valid non-string sequence or
mapping object
A:
urlencode is intended to take a dictionary, for example:
>>> q= u'\xe6\xf8\xe5' # u'æøå'
>>> params= {'q': q.encode('utf-8')}
>>> 'http://www.test.com/?'+urllib.urlencode(params)
'http://www.test.com/?q=%C3%A6%C3%B8%C3%A5'
If you just want to URL-encode a single string, the function you're looking for is quote:
>>> 'http://www.test.com/?q='+urllib.quote(q.encode('utf-8'))
'http://www.test.com/?q=%C3%A6%C3%B8%C3%A5'
I'm guessing UTF-8 is the right encoding (it should be, for modern sites). If what you actually want ?q=%E6%F8%E5, then the encoding you want is probably cp1252 (similar to iso-8859-1).
A:
You should pass dictionary to urlencode, not a string. See the correct example below:
from urllib import urlencode
print 'http://www.test.com/?' + urlencode({'q': 'testæøå'})
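On Python 3, the same functions live in urllib.parse, and both default to UTF-8, so the explicit .encode('utf-8') is no longer needed:

```python
from urllib.parse import urlencode, quote

q = u'\xe6\xf8\xe5'  # 'æøå'
print('http://www.test.com/?' + urlencode({'q': q}))
# http://www.test.com/?q=%C3%A6%C3%B8%C3%A5
print('http://www.test.com/?q=' + quote(q))
# http://www.test.com/?q=%C3%A6%C3%B8%C3%A5
```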
| Python urllib urlencode problem with æøå | How can I urlencode a string with special chars æøå?
ex.
urllib.urlencode('http://www.test.com/q=testæøå')
I get this error :(..
not a valid non-string sequence or
mapping object
| [
"urlencode is intended to take a dictionary, for example:\n>>> q= u'\\xe6\\xf8\\xe5' # u'æøå'\n>>> params= {'q': q.encode('utf-8')}\n>>> 'http://www.test.com/?'+urllib.urlencode(params)\n'http://www.test.com/?q=%C3%A6%C3%B8%C3%A5'\n\nIf you just want to URL-encode a single string, the function you're looking for is... | [
26,
9
] | [] | [] | [
"python",
"urlencode",
"urllib"
] | stackoverflow_0003996974_python_urlencode_urllib.txt |
Q:
Why is PyQt application startup so slow?
C++ is fast. Qt is fast. Python interpreter startup is relatively fast (2-3 seconds on my computer). PyQt is a thin wrapper around Qt. Why, then, is PyQt app startup (about 80 lines, even in .pyc form) so slow (6-12 seconds)? I'm interested in reasons for this and, most importantly, ways to improve this situation.
A:
There is a known bug in Qt 4.6.3 that cripples application start times. Maybe 4.4.3 is also affected. Try to upgrade to 4.7.0, which fixes that bug.
Your problem might also come from dynamic library relocation. If you compile either Qt or Python yourself, try to give them a base address different from the default (0x10000000), to prevent relocation to occur when the two libraries are loaded together.
Under Linux, you could have used prelink, but to my knowledge it has no equivalent on Windows.
| Why is PyQt application startup so slow? | C++ is fast. Qt is fast. Python interpreter startup is relatively fast (2-3 seconds on my computer). PyQt is a thin wrapper around Qt. Why, then, is PyQt app startup (about 80 lines, even in .pyc form) so slow (6-12 seconds)? I'm interested in reasons for this and, most importantly, ways to improve this situation.
| [
"There is a known bug in Qt 4.6.3 that cripples application start times. Maybe 4.4.3 is also affected. Try to upgrade to 4.7.0, which fixes that bug.\nYour problem might also come from dynamic library relocation. If you compile either Qt or Python yourself, try to give them a base address different from the default... | [
5
] | [] | [] | [
"performance",
"pyqt",
"python",
"startup"
] | stackoverflow_0003994443_performance_pyqt_python_startup.txt |
Q:
Exception when running a Python method from another class
Here is my code.
import urllib2
import urllib
import json
from BeautifulSoup import BeautifulSoup
class parser:
"""
This class uses the Beautiful Soup library to scrape the information from
the HTML source code from Google Translate.
It also offers a way to consume the AJAX result of the translation, however
encoding on Windows won't work well right now so it's recommended to use
the scraping method.
"""
def fromHtml(self, text, languageFrom, languageTo):
"""
Returns translated text that is scraped from Google Translate's HTML
source code.
"""
langCode={
"arabic":"ar", "bulgarian":"bg", "chinese":"zh-CN",
"croatian":"hr", "czech":"cs", "danish":"da", "dutch":"nl",
"english":"en", "finnish":"fi", "french":"fr", "german":"de",
"greek":"el", "hindi":"hi", "italian":"it", "japanese":"ja",
"korean":"ko", "norwegian":"no", "polish":"pl", "portugese":"pt",
"romanian":"ro", "russian":"ru", "spanish":"es", "swedish":"sv" }
urllib.FancyURLopener.version = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008070400 SUSE/3.0.1-0.1 Firefox/3.0.1"
try:
postParameters = urllib.urlencode({"langpair":"%s|%s" %(langCode[languageFrom.lower()],langCode[languageTo.lower()]), "text":text,"ie":"UTF8", "oe":"UTF8"})
except KeyError, error:
print "Currently we do not support %s" %(error.args[0])
return
page = urllib.urlopen("http://translate.google.com/translate_t", postParameters)
content = page.read()
page.close()
htmlSource = BeautifulSoup(content)
translation = htmlSource.find('span', title=text )
return translation.renderContents()
def fromAjaxService(self, text, languageFrom, languageTo):
"""
Returns a simple string translating the text from "languageFrom" to
"LanguageTo" using Google Translate AJAX Service.
"""
LANG={
"arabic":"ar", "bulgarian":"bg", "chinese":"zh-CN",
"croatian":"hr", "czech":"cs", "danish":"da", "dutch":"nl",
"english":"en", "finnish":"fi", "french":"fr", "german":"de",
"greek":"el", "hindi":"hi", "italian":"it", "japanese":"ja",
"korean":"ko", "norwegian":"no", "polish":"pl", "portugese":"pt",
"romanian":"ro", "russian":"ru", "spanish":"es", "swedish":"sv" }
base_url='http://ajax.googleapis.com/ajax/services/language/translate?'
langpair='%s|%s'%(LANG.get(languageFrom.lower(),languageFrom),
LANG.get(languageTo.lower(),languageTo))
params=urllib.urlencode( (('v',1.0),
('q',text.encode('utf-8')),
('langpair',langpair),) )
url=base_url+params
content=urllib2.urlopen(url).read()
try: trans_dict=json.loads(content)
except AttributeError:
try: trans_dict=json.load(content)
except AttributeError: trans_dict=json.read(content)
return trans_dict['responseData']['translatedText']
Now in another class called TestingGrounds.py I want to try out both methods, but I get the following error:
from Parser import parser
print parser.fromHtml("Hello my lady!", "English", "Italian")
Traceback (most recent call last): File "C:\Users\Sergio.Tapia\Documents\NetBeansProjects\BabylonPython\src\TestingGrounds.py", line 3, in
print parser.fromHtml("Hello my lady!", "English", "Italian") TypeError: unbound method fromHtml() must be called with parser instance as first argument (got str instance instead)
A:
You have to have an instance of the parser class, not call the method on the class itself.
from Parser import parser

print parser().fromHtml("Hello my lady!", "English", "Italian")
or
from Parser import parser

p = parser()
p.fromHtml(...)
Alternatively, you could make fromHtml a staticmethod:
class parser(object): # you should probably use new-style classes
    ...
    @staticmethod
    def fromHtml(...):
        ...
which you could then use like:
from Parser import parser
print parser.fromHtml(...)
A:
If you want to use fromHtml() as a static method, useful if you don't really need to access any data members in parser, you'll need to do this (cut for brevity)
class parser:
@staticmethod
def fromHtml(text, languageFrom, languageTo):
# etc.
Or, if you want it to be callable on both the class and an instance, with the class passed in automatically, use a classmethod:
class parser:
    @classmethod
    def fromHtml(cls, text, languageFrom, languageTo):
        # etc.
You can now use it as parser.fromHtml() or parser().fromHtml()
Looking at your code, I should think you'd only need a static method.
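A compact, runnable contrast of the two decorators (the class and method names here are illustrative, not the OP's):

```python
class Parser(object):
    @staticmethod
    def from_html(text):             # no instance or class is passed in
        return "parsed: " + text

    @classmethod
    def describe(cls, text):         # the class itself arrives as cls
        return "%s handled %r" % (cls.__name__, text)

print(Parser.from_html("hi"))        # callable on the class ...
print(Parser().from_html("hi"))      # ... and on an instance
print(Parser.describe("hi"))         # Parser handled 'hi'
```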
| Exception when running a Python method from another class | Here is my code.
import urllib2
import urllib
import json
from BeautifulSoup import BeautifulSoup
class parser:
"""
This class uses the Beautiful Soup library to scrape the information from
the HTML source code from Google Translate.
It also offers a way to consume the AJAX result of the translation, however
encoding on Windows won't work well right now so it's recommended to use
the scraping method.
"""
def fromHtml(self, text, languageFrom, languageTo):
"""
Returns translated text that is scraped from Google Translate's HTML
source code.
"""
langCode={
"arabic":"ar", "bulgarian":"bg", "chinese":"zh-CN",
"croatian":"hr", "czech":"cs", "danish":"da", "dutch":"nl",
"english":"en", "finnish":"fi", "french":"fr", "german":"de",
"greek":"el", "hindi":"hi", "italian":"it", "japanese":"ja",
"korean":"ko", "norwegian":"no", "polish":"pl", "portugese":"pt",
"romanian":"ro", "russian":"ru", "spanish":"es", "swedish":"sv" }
urllib.FancyURLopener.version = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008070400 SUSE/3.0.1-0.1 Firefox/3.0.1"
try:
postParameters = urllib.urlencode({"langpair":"%s|%s" %(langCode[languageFrom.lower()],langCode[languageTo.lower()]), "text":text,"ie":"UTF8", "oe":"UTF8"})
except KeyError, error:
print "Currently we do not support %s" %(error.args[0])
return
page = urllib.urlopen("http://translate.google.com/translate_t", postParameters)
content = page.read()
page.close()
htmlSource = BeautifulSoup(content)
translation = htmlSource.find('span', title=text )
return translation.renderContents()
def fromAjaxService(self, text, languageFrom, languageTo):
"""
Returns a simple string translating the text from "languageFrom" to
"LanguageTo" using Google Translate AJAX Service.
"""
LANG={
"arabic":"ar", "bulgarian":"bg", "chinese":"zh-CN",
"croatian":"hr", "czech":"cs", "danish":"da", "dutch":"nl",
"english":"en", "finnish":"fi", "french":"fr", "german":"de",
"greek":"el", "hindi":"hi", "italian":"it", "japanese":"ja",
"korean":"ko", "norwegian":"no", "polish":"pl", "portugese":"pt",
"romanian":"ro", "russian":"ru", "spanish":"es", "swedish":"sv" }
base_url='http://ajax.googleapis.com/ajax/services/language/translate?'
langpair='%s|%s'%(LANG.get(languageFrom.lower(),languageFrom),
LANG.get(languageTo.lower(),languageTo))
params=urllib.urlencode( (('v',1.0),
('q',text.encode('utf-8')),
('langpair',langpair),) )
url=base_url+params
content=urllib2.urlopen(url).read()
try: trans_dict=json.loads(content)
except AttributeError:
try: trans_dict=json.load(content)
except AttributeError: trans_dict=json.read(content)
return trans_dict['responseData']['translatedText']
Now in another class called TestingGrounds.py I want to try out both methods, but I get the following error:
from Parser import parser
print parser.fromHtml("Hello my lady!", "English", "Italian")
Traceback (most recent call last): File "C:\Users\Sergio.Tapia\Documents\NetBeansProjects\BabylonPython\src\TestingGrounds.py", line 3, in
print parser.fromHtml("Hello my lady!", "English", "Italian") TypeError: unbound method fromHtml() must be called with parser instance as first argument (got str instance instead)
| [
"You have to have an instance of the parser class, not call the method on the class itself.\nfrom Parser import parser\n\nprint parser().fromHTML(\"Hello my lady!\", \"English\", \"Italian\")\n\nor\nfrom Parser import parser\n\np = parser()\np.fromHTML(...)\n\nAlternatively, you could make fromHTML a staticmethod:\... | [
1,
0
] | [] | [] | [
"class",
"methods",
"module",
"python"
] | stackoverflow_0003997071_class_methods_module_python.txt |
Q:
How can I make contents of a txt file be used as subject for mail in linux?
I have a crontab file which executes a shell script as shown below
27 11 * * * /usr/python/bi_python/launcher/launch_script_leds.sh
The shell script does a number of things:
1)executes python script launcher.py which runs tests and outputs to log files
2)sends mail notification when tests have completed with test output as body of the message
This is the command in the .sh file:
mail me@sample.com < /usr/python/bi_python/launcher/test_output.txt
This works fine, but the subject is blank.
The subject for the email is output to a txt file, subject.txt, by launcher.py. Is there a way to make the contents of this file the subject of my mail message?
I know you can use mail -s to specify the subject, but since many tests are being run through the launcher, the subject will always vary.
Thanks in advance
A:
Try
subject=$(</path/subject.txt)
mailx -s "$subject" me@sample.com < /usr/python/bi_python/launcher/test_output.txt
A:
Well, just pass the parameter -s to the mail command, with a suitable subject.
To use the contents of a file as the subject, just read the file. In Bash,
filecontents=$(cat /my/file)
will read the contents of /my/file into variable filecontents. Then you can truncate/sanitize the text as necessary, and use it as the parameter to -s.
| How can I make contents of a txt file be used as subject for mail in linux? | I have a crontab file which executes a shell script as shown below
27 11 * * * /usr/python/bi_python/launcher/launch_script_leds.sh
The shell script does a number of things:
1)executes python script launcher.py which runs tests and outputs to log files
2)sends mail notification when tests have completed with test output as body of the message
This is the command in the .sh file:
mail me@sample.com < /usr/python/bi_python/launcher/test_output.txt
This works fine but subject is blank
The subject for the email is out put to a txt file subject.txt from launcher.py. Is there a way to make the contents of this file the subject of my mail message?
I know you can use mail -s to specify subject but since many tests are being run through the launcher the subject will always vary
Thanks in advance
| [
"Try\nsubject=$(</path/subject.txt)\nmailx -s \"$subject\" me@sample.com < /usr/python/bi_python/launcher/test_output.txt\n\n",
"Well, just pass the parameter -s to the mail command, with a suitable subject.\nTo use the contents of a file as the subject, just read the file. In Bash,\nfilecontents=$(cat /my/file)\... | [
3,
0
] | [] | [] | [
"cron",
"email",
"linux",
"python"
] | stackoverflow_0003997444_cron_email_linux_python.txt |
Q:
Python generator, non-swallowing exception in 'coroutine'
I recently came across some surprising behaviour in Python generators:
class YieldOne:
def __iter__(self):
try:
yield 1
except:
print '*Excepted Successfully*'
# raise
for i in YieldOne():
raise Exception('test exception')
Which gives the output:
*Excepted Successfully*
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
Exception: test exception
I was (pleasantly) surprised that *Excepted Successfully* got printed, as this was what I wanted, but also surprised that the Exception still got propagated up to the top level. I was expecting to have to use the (commented in this example) raise keyword to get the observed behaviour.
Can anyone explain why this functionality works as it does, and why the except in the generator doesn't swallow the exception?
Is this the only instance in Python where an except doesn't swallow an exception?
A:
Your code does not do what you think it does. You cannot raise Exceptions in a coroutine like this. What you do instead is catching the GeneratorExit exception. See what happens when you use a different Exception:
class YieldOne:
def __iter__(self):
try:
yield 1
except RuntimeError:
print "you won't see this"
except GeneratorExit:
print 'this is what you saw before'
# raise
for i in YieldOne():
raise RuntimeError
As this still gets upvotes, here is how you raise an Exception in a generator:
class YieldOne:
def __iter__(self):
try:
yield 1
except Exception as e:
print "Got a", repr(e)
yield 2
# raise
gen = iter(YieldOne())
for row in gen:
print row # we are at `yield 1`
print gen.throw(Exception) # throw there and go to `yield 2`
See docs for generator.throw.
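The GeneratorExit delivery described above can also be seen directly by closing a generator by hand (a minimal sketch; `closed` is just a flag list for illustration). This is what happens implicitly when the for loop in the question is abandoned because its body raised:

```python
closed = []

def gen():
    try:
        yield 1
    except GeneratorExit:
        closed.append(True)
        raise  # GeneratorExit should be re-raised, not swallowed

g = gen()
next(g)    # advance to `yield 1`
g.close()  # throws GeneratorExit inside the generator
print(closed)  # [True]
```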
A:
EDIT: What THC4k said.
If you really want to raise an arbitrary exception inside a generator, use the throw method:
>>> def Gen():
... try:
... yield 1
... except Exception:
... print "Excepted."
...
>>> foo = Gen()
>>> next(foo)
1
>>> foo.throw(Exception())
Excepted.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
You'll notice that you get a StopIteration at the top level. StopIteration is raised by a generator that has run out of elements; it is usually swallowed by the for loop, but here we advanced the generator manually with throw, so it propagates.
| Python generator, non-swallowing exception in 'coroutine' | I recently came across some surprising behaviour in Python generators:
class YieldOne:
def __iter__(self):
try:
yield 1
except:
print '*Excepted Successfully*'
# raise
for i in YieldOne():
raise Exception('test exception')
Which gives the output:
*Excepted Successfully*
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
Exception: test exception
I was (pleasantly) surprised that *Excepted Successfully* got printed, as this was what I wanted, but also surprised that the Exception still got propagated up to the top level. I was expecting to have to use the (commented in this example) raise keyword to get the observed behaviour.
Can anyone explain why this functionality works as it does, and why the except in the generator doesn't swallow the exception?
Is this the only instance in Python where an except doesn't swallow an exception?
| [
"Your code does not do what you think it does. You cannot raise Exceptions in a coroutine like this. What you do instead is catching the GeneratorExit exception. See what happens when you use a different Exception:\nclass YieldOne:\n def __iter__(self):\n try:\n yield 1\n except RuntimeError:\n p... | [
14,
6
] | [] | [] | [
"exception",
"generator",
"python"
] | stackoverflow_0003997496_exception_generator_python.txt |
Q:
Replacing Words in TextField In Django
In django, in TextField, how to we replace,
[vimeo 123456]
with
<iframe src="http://player.vimeo.com/video/123456" width="400" height="225" frameborder="0"></iframe>
Thank you.
A:
I don't think it's a good idea to have the HTML in the TextField. First, it would make editing a pain (you'd have to write code to translate back, which will be more difficult than forward); second, it would waste disk on storing a lot of HTML in the database; and finally, it would make it harder to fix bugs later (such as if Vimeo changed their URL format).
You have two options that I can see:
1. View Function
Do this translation in your view function. Your view function would have a line like:
context["commentText"] = process_markup(thePost.commentText)
Then, in your template file, you need to mark the field as safe since you've already filtered it:
{{ commentText|safe }}
2. Custom Filter
Do this translation in a custom filter tag, like the restructuredtext filter in django.contrib.markup. This is what sebpiq recommended, and is probably the better option.
from django.template.defaultfilters import stringfilter
import re
@stringfilter
def mymarkup(value):
return process_markup(value)
Then, in your template file, you need to call your filter:
{{ commentText|mymarkup }}
In both cases, you would need to write process_markup(value), which would look something like:
import re
_TAGS = [
# First, escape anything that might be misinterpreted. Order is important.
(r'&', r'&amp;'),
(r'<', r'&lt;'),
(r'>', r'&gt;'),
(r'"', r'&quot;'),
(r"'", r'&#39;'),
# Simple tags
(r'\[b\]', r'<b>'),
(r'\[/b\]', r'</b>'),
# Complex tags with parameters
(r'\[vimeo +(\d+) *\]', r'<iframe src="http://player.vimeo.com/video/\g<1>"'
r' width="400" height="225" frameborder="0"></iframe>'),
]
def process_markup(value):
for pattern, repl in _TAGS:
value = re.sub(pattern, repl, value)
return value
There are probably better ways to write this function, but you get the idea.
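A quick sanity check of the idea, trimmed to the escaping rules plus the vimeo rule so it runs on its own (same logic as sketched above):

```python
import re

_TAGS = [
    # Escape first, so user-supplied markup can't inject raw HTML:
    (r'&', r'&amp;'),
    (r'<', r'&lt;'),
    (r'>', r'&gt;'),
    # Then expand recognized codes into real HTML:
    (r'\[vimeo +(\d+) *\]',
     r'<iframe src="http://player.vimeo.com/video/\g<1>"'
     r' width="400" height="225" frameborder="0"></iframe>'),
]

def process_markup(value):
    for pattern, repl in _TAGS:
        value = re.sub(pattern, repl, value)
    return value

out = process_markup('<b>hi</b> [vimeo 123456]')
print(out)
```

The user's literal `<b>` tags come out escaped while the `[vimeo ...]` code becomes a working iframe, because the escaping rules run before the expansion rules.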
A:
Don't do this in your TextField. Rather in the templates. But then you have to parse the value, so I would suggest you write a simple template filter :
from django.template.defaultfilters import stringfilter
import re
@stringfilter
def textfieldtourl(value):
#parsing of your '[vimeo <id>]'
#return "http://player.vimeo.com/video/<id>"
And then in template :
<iframe src="{{ my_object.my_text_field|textfieldtourl }}" width="400" height="225" frameborder="0"></iframe>
Where my_object is the object on which your TextField is defined, my_text_field is the name of your TextField, and textfieldtourl is the name of the filter you'll define to replace a code like [vimeo 1235] by an actual url.
More infos on writing custom template fiters.
| Replacing Words in TextField In Django | In django, in TextField, how to we replace,
[vimeo 123456]
with
<iframe src="http://player.vimeo.com/video/123456" width="400" height="225" frameborder="0"></iframe>
Thank you.
| [
"I don't think it's a good idea to have the HTML in the TextField. First, it would make editing a pain (you'd have to write code to translate back, which will be more difficult than forward); second, it would waste disk on storing a lot of HTML in the database; and finally, it would make it harder to fix bugs late... | [
1,
0
] | [] | [] | [
"django",
"python",
"replace"
] | stackoverflow_0003995248_django_python_replace.txt |
Q:
Class has method that shows in intellisense, but gives an error when running it
I'm trying to set the default encoding of my console to UTF-8 so it can display Cyrillic letters and accented letters.
Here is my code:
from Parser import parser
import sys
print sys.getdefaultencoding()
sys.setdefaultencoding('utf-8')
Output:
ascii
Traceback (most recent call last):
File "C:\Users\Sergio.Tapia\Documents\NetBeansProjects\BabylonPython\src\TestingGrounds.py", line 5, in <module>
sys.setdefaultencoding('utf-8')
AttributeError: 'module' object has no attribute 'setdefaultencoding'
The weird thing is, when pressing the '.' after sys to see what method it has, setdefaultencoding is shown as a possible choice. Any suggestion on what to do?
A:
The function sys.setdefaultencoding() does exist but is removed from sys' dictionary by site.py itself.
On my machine, site.py contains the following comment:
# Remove sys.setdefaultencoding() so that users cannot change the
# encoding after initialization. The test for presence is needed when
# this module is run as a script, because this code is executed twice.
A:
From http://docs.python.org/library/sys.html
This function is only intended to be used by the site module implementation and, where needed, by sitecustomize.
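Since setdefaultencoding is deliberately hidden, the practical alternative is to encode explicitly at the output boundary instead of changing the default. A sketch (the Cyrillic string is just sample data):

```python
import sys

# The attribute is removed at startup by site.py (Python 2) and does not
# exist at all in Python 3:
print(hasattr(sys, 'setdefaultencoding'))  # False

# Encode explicitly where bytes are needed:
text = u'\u043f\u0440\u0438\u0432\u0435\u0442'  # "privet" in Cyrillic
data = text.encode('utf-8')
print(data.decode('utf-8') == text)  # True
```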
| Class has method that shows in intellisense, but gives an error when running it | I'm trying to set the default encoding of my console to UTF-8 so it can display Cyrillic letters and accented letters.
Here is my code:
from Parser import parser
import sys
print sys.getdefaultencoding()
sys.setdefaultencoding('utf-8')
Output:
ascii
Traceback (most recent call last):
File "C:\Users\Sergio.Tapia\Documents\NetBeansProjects\BabylonPython\src\TestingGrounds.py", line 5, in <module>
sys.setdefaultencoding('utf-8')
AttributeError: 'module' object has no attribute 'setdefaultencoding'
The weird thing is, when pressing the '.' after sys to see what method it has, setdefaultencoding is shown as a possible choice. Any suggestion on what to do?
| [
"The function sys.setdefaultencoding() does exist but is removed from sys' dictionary by site.py itself.\nOn my machine, site.py contains the following comment:\n# Remove sys.setdefaultencoding() so that users cannot change the\n# encoding after initialization. The test for presence is needed when\n# this module i... | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0003997430_python.txt |
Q:
Can Cython speed up array of object iteration?
I want to speed up the following code using cython:
class A(object):
cdef fun(self):
return 3
class B(object):
cdef fun(self):
return 2
def test():
cdef int x, y, i, s = 0
a = [ [A(), B()], [B(), A()]]
for i in xrange(1000):
for x in xrange(2):
for y in xrange(2):
s += a[x][y].fun()
return s
The only thing that comes to mind is something like this:
def test():
cdef int x, y, i, s = 0
types = [ [0, 1], [1, 0]]
data = [[...], [...]]
for i in xrange(1000):
for x in xrange(2):
for y in xrange(2):
if types[x,y] == 0:
s+= A(data[x,y]).fun()
else:
s+= B(data[x,y]).fun()
return s
Basically, the solution in C++ will be to have array of pointers to some base class with virtual method fun(), then you could iterate through it pretty quickly. Is there a way to do it using python/cython?
BTW: would it be faster to use numpy's 2D array with dtype=object_, instead of python lists?
A:
Looks like code like this gives about 20x speedup:
import numpy as np
cimport numpy as np
cdef class Base(object):
cdef int fun(self):
return -1
cdef class A(Base):
cdef int fun(self):
return 3
cdef class B(Base):
cdef int fun(self):
return 2
def test():
bbb = np.array([[A(), B()], [B(), A()]], dtype=np.object_)
cdef np.ndarray[dtype=object, ndim=2] a = bbb
cdef int i, x, y
cdef int s = 0
cdef Base u
for i in xrange(1000):
for x in xrange(2):
for y in xrange(2):
u = a[x,y]
s += u.fun()
return s
It even checks, that A and B are inherited from Base, probably there is way to disable it in release builds and get additional speedup
EDIT: Check could be removed using
u = <Base>a[x,y]
| Can Cython speed up array of object iteration? | I want to speed up the following code using cython:
class A(object):
cdef fun(self):
return 3
class B(object):
cdef fun(self):
return 2
def test():
cdef int x, y, i, s = 0
a = [ [A(), B()], [B(), A()]]
for i in xrange(1000):
for x in xrange(2):
for y in xrange(2):
s += a[x][y].fun()
return s
The only thing that comes to mind is something like this:
def test():
cdef int x, y, i, s = 0
types = [ [0, 1], [1, 0]]
data = [[...], [...]]
for i in xrange(1000):
for x in xrange(2):
for y in xrange(2):
if types[x,y] == 0:
s+= A(data[x,y]).fun()
else:
s+= B(data[x,y]).fun()
return s
Basically, the solution in C++ will be to have array of pointers to some base class with virtual method fun(), then you could iterate through it pretty quickly. Is there a way to do it using python/cython?
BTW: would it be faster to use numpy's 2D array with dtype=object_, instead of python lists?
| [
"Looks like code like this gives about 20x speedup:\nimport numpy as np\ncimport numpy as np\ncdef class Base(object):\n cdef int fun(self):\n return -1\n\ncdef class A(Base):\n cdef int fun(self):\n return 3\n\n\ncdef class B(Base):\n cdef int fun(self):\n return 2\n\ndef test():\n ... | [
7
] | [] | [] | [
"cython",
"python"
] | stackoverflow_0003988233_cython_python.txt |
Q:
Can someone explain this strange behavior of the hypergeometric distribution in scipy?
I am running Python 2.6.5 on Mac OS X 10.6.4 (this is not the native version, I installed it myself) with Scipy 0.8.0. If I do the following:
>>> from scipy.stats import hypergeom
>>> hypergeom.sf(5,10,2,5)
I get an IndexError. Then I do:
>>> hypergeom.sf(2,10,2,2)
-4.44....
I suspect the negative value is due to bad floating point precision. Then I do the first one again:
>>> hypergeom.sf(5,10,2,5)
0.0
Now it works! Can someone explain this? Are you seeing this behavior too?
A:
The problem seems to arise based on whether the first call to the survival function is in the range that should obviously be zero (see my comment to the previous answer). E.g., for calls to hypergeom.sf(x,M,n,N), it fails if the first call ever made to a hypergeometric function is one where x > n, where the survival function will always be zero.
You could trivially fix this temporarily by:
def new_hypergeom_sf(k, *args, **kwds):
from scipy.stats import hypergeom
(M, n, N) = args[0:3]
try:
return hypergeom.sf(k, *args, **kwds)
except Exception as inst:
if k >= n and type(inst) == IndexError:
return 0 ## or conversely 1 - hypergeom.cdf(k, *args, **kwds)
else:
raise inst
Now if you have no problem editing the /usr/share/pyshared/scipy/stats/distributions.py (or equivalent file), the fix is likely on line 3966 where right now it reads:
place(output,cond,self._sf(*goodargs))
if output.ndim == 0:
return output[()]
return output
But if you change it to:
if output.ndim == 0:
return output[()]
place(output,cond,self._sf(*goodargs))
if output.ndim == 0:
return output[()]
return output
It now works without the IndexError. Basically if the output is zero dimensional because it fails the checks, it tries to call place, fails, and doesn't generate the distribution. (This doesn't happen if a previous distribution has already been created, which is likely why this wasn't caught on earlier tests.) Note that place (defined in numpy's function_base.py) will change elements of the array (though I'm not sure if it changes the dimensionality) so it may be best to still have it leave the 0 dim check after place too. I haven't fully tested this to see if this change breaks anything else (and it applies to all discrete random variable distributions), so maybe it's best to do the first fix.
It does break it; e.g., stats.hypergeom.sf(1,10,2,5) returns zero (instead of 2/9).
This fix seems to work much better, in the same section:
class rv_discrete(rv_generic):
...
def sf(self, k, *args, **kwds):
...
if any(cond):
place(output,cond,self._sf(*goodargs))
if output.ndim == 0:
return output[()]
return output
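As a sanity check of the values discussed above, the survival function is easy to compute directly. A reference sketch using math.comb (Python 3.8+), assuming scipy's argument order M, n, N:

```python
from math import comb

def hypergeom_sf(k, M, n, N):
    """P(X > k) for a hypergeometric X: M objects total, n of them
    interesting, N drawn without replacement (scipy's argument order)."""
    total = comb(M, N)
    return sum(comb(n, x) * comb(M - n, N - x)
               for x in range(k + 1, min(n, N) + 1)) / total

print(hypergeom_sf(1, 10, 2, 5))  # 2/9, the value mentioned above
print(hypergeom_sf(5, 10, 2, 5))  # 0.0: can never draw more than n successes
```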
A:
I don't know python, but the function is defined like this:
hypergeom.sf(x,M,n,N,loc=0)
In scipy's convention, M is the total number of objects, n is the number of interesting objects among them, and N is how often you "pick one" (Sorry, German statistician).
If you had a bowl with 20 balls, 7 of those yellow (an interesting yellow), then M is 20 and n is 7.
Perhaps the function behaves undefined for the (nonsense) case when n > M?
| Can someone explain this strange behavior of the hypergeometric distribution in scipy? | I am running Python 2.6.5 on Mac OS X 10.6.4 (this is not the native version, I installed it myself) with Scipy 0.8.0. If I do the following:
>>> from scipy.stats import hypergeom
>>> hypergeom.sf(5,10,2,5)
I get an IndexError. Then I do:
>>> hypergeom.sf(2,10,2,2)
-4.44....
I suspect the negative value is due to bad floating point precision. Then I do the first one again:
>>> hypergeom.sf(5,10,2,5)
0.0
Now it works! Can someone explain this? Are you seeing this behavior too?
| [
"The problem seems to arise based if the first call to the survival function is in the range that should obviously be zero (see my comment to the previous answer). E.g., for calls to hypergeom.sf(x,M,n,N) it fails if the first call to a hypergeometric function to the function is a situation where x > n, where the ... | [
3,
0
] | [] | [] | [
"python",
"scipy"
] | stackoverflow_0003812896_python_scipy.txt |
Q:
Specific Quote Issue
Here is my problem. I have a model Project, that has a quote field in it. When a new instance of project is created I need to append the last 2 digits of the year plus a hyphen onto the start of the "quote" field. Ex. 2010 = "10-". Im just not quite sure how to start it?
As of right now I have hard coded in "10-" in as a pre-quote field, but I do not want to have to do that.
models.py
class Project(models.Model):
client = models.ForeignKey(Clients, related_name='projects')
created_by = models.ForeignKey(User, related_name='created_by')
#general information
proj_name = models.CharField(max_length=255, verbose_name='Project Name')
pre_quote = models.CharField(max_length=3,default='10-')
quote = models.IntegerField(max_length=10, verbose_name='Quote #', unique=True)
desc = models.TextField(verbose_name='Description')
starts_on = models.DateField(verbose_name='Start Date')
completed_on = models.DateField(verbose_name='Finished On')
Anyone have to do this before? Or have any suggestions?
A:
Try this:
def save(self, *args, **kwargs):
    today = datetime.date.today()
    self.quote = "%s-%s" % (str(today.year)[2:4], self.quote)
    super(Project, self).save(*args, **kwargs)
Assuming you imported datetime.
A:
Your existing quote field is set as an integer. You will need to set this as a text field. Once you do that, you can override the save() function to prepend "10-" to the field.
class Project(models.Model):
client = models.ForeignKey(Clients, related_name='projects')
created_by = models.ForeignKey(User, related_name='created_by')
proj_name = models.CharField(max_length=255, verbose_name='Project Name')
quote = models.TextField(max_length=10, verbose_name='Quote #', unique=True)
desc = models.TextField(verbose_name='Description')
starts_on = models.DateField(verbose_name='Start Date')
completed_on = models.DateField(verbose_name='Finished On')
def save(self, *args, **kwargs):
    self.quote = "10-" + self.quote
    super(Project, self).save(*args, **kwargs)
| Specific Quote Issue | Here is my problem. I have a model Project, that has a quote field in it. When a new instance of project is created I need to append the last 2 digits of the year plus a hyphen onto the start of the "quote" field. Ex. 2010 = "10-". Im just not quite sure how to start it?
As of right now I have hard coded in "10-" in as a pre-quote field, but I do not want to have to do that.
models.py
class Project(models.Model):
client = models.ForeignKey(Clients, related_name='projects')
created_by = models.ForeignKey(User, related_name='created_by')
#general information
proj_name = models.CharField(max_length=255, verbose_name='Project Name')
pre_quote = models.CharField(max_length=3,default='10-')
quote = models.IntegerField(max_length=10, verbose_name='Quote #', unique=True)
desc = models.TextField(verbose_name='Description')
starts_on = models.DateField(verbose_name='Start Date')
completed_on = models.DateField(verbose_name='Finished On')
Anyone have to do this before? Or have any suggestions?
| [
"Try this:\ndef save(self):\n today = datetime.date.today()\n self.quote = \"%s-%s\" % (str(today.year)[2:4], self.quote)\n\nAssuming you imported datetime.\n",
"Your existing quote field is set as an integer. You will need to set this as a text field. Once you do that, you can override the save() functio... | [
1,
0
] | [] | [] | [
"django",
"django_forms",
"django_models",
"django_views",
"python"
] | stackoverflow_0003997887_django_django_forms_django_models_django_views_python.txt |
Q:
modelling the google datastore/python
Hi I am trying to build an application which has models resembling something like the below ones:-(While it would be easy to merge the two models into one and use them , but that is not feasible in the actual app)
class User(db.Model):
username=db.StringProperty()
email=db.StringProperty()
class UserLikes(db.Model):
username=db.StringProperty()
food=db.StringProperty()
The objective- The user after logging in enters the food that he likes and the app in turn returns all the other users who like that food.
Now suppose a user Alice enters that she likes "Pizzas"; it gets stored in the datastore. She logs out and logs in again. At this point we query the datastore for the food that she likes and then query again for all users who like that food. These, as you see, are two datastore queries, which is not the best way. I am sure there must be a better way to do this. Can someone please help?
[Update:-Or can something like this be done that I change the second model such that usernames become a multivalued property in which all the users that like that food can be stored.. however I am a little unclear here]
[Edit:- Hi, thanks for replying, but I found both the solutions below a bit of an overkill here. I tried doing it like below. Request you to have a look at this and kindly advise. I maintained the same two tables, however changed them like below:-
class User(db.Model):
username=db.StringProperty()
email=db.StringProperty()
class UserLikes(db.Model):
username=db.ListProperty(basestring)
food=db.StringProperty()
Now when 2 users update same food they like, it gets stored like
'pizza' ----> 'Alice','Bob'
And my db query to retrieve data becomes quite easy here
query=db.Query(UserLikes).filter('username =','Alice').get()
which I can then iterate over as something like
for elem in query.username:
print elem
Now if there are two foods like below:-
'pizza' ----> 'Alice','Bob'
'bacon'----->'Alice','Fred'
I use the same query as above , and iterate over the queries and then the usernames.
I am quite new to this , to realize that this just might be wrong. Please Suggest!
A:
Beside the relation model you have, you could handle this in two other ways depending on your exact use case. You have a good idea in your update, use a ListProperty. Check out Brett Slatkin's talk on Relation Indexes for some background.
You could use a child entity (Relation Index) on user that contains a list of foods:
class UserLikes(db.Model):
food = db.StringListProperty()
Then when you are creating a UserLikes instance, you will define the user it relates to as the parent:
likes = UserLikes(parent=user)
That lets you query for other users who like a particular food nicely:
like_apples_keys = UserLikes.all(keys_only=True).filter('food =', 'apples')
user_keys = [key.parent() for key in like_apples_keys]
users_who_like_apples = db.get(user_keys)
However, what may suit your application better, would be to make the Relation a child of a food:
class WhoLikes(db.Model):
users = db.StringListProperty()
Set the key_name to the name of the food when creating the like:
food_i_like = WhoLikes(key_name='apples')
Now, to get all users who like apples:
apple_lover_key_names = WhoLikes.get_by_key_name('apples')
apple_lovers = UserModel.get_by_key_name(apple_lover_key_names.users)
To get all users who like the same stuff as a user:
same_likes = WhoLikes.all().filter('users', current_user_key_name)
like_the_same_keys = set()
for keys in same_likes:
    like_the_same_keys.update(keys.users)
same_like_users = UserModel.get_by_key_name(list(like_the_same_keys))
If you will have lots of likes, or lots users with the same likes, you will need to make some adjustments to the process. You won't be able to fetch 1,000s of users.
A:
The Food and User relation is a so-called many-to-many relationship, typically handled with a join table; in this case a db.Model that links User and Food.
Something like this:
class User(db.Model):
name = db.StringProperty()
def get_food_I_like(self):
return (entity.name for entity in self.foods)
class Food(db.Model):
name = db.StringProperty()
def get_users_who_like_me(self):
return (entity.name for entity in self.users)
class UserFood(db.Model):
user= db.ReferenceProperty(User, collection_name='foods')
food = db.ReferenceProperty(Food, collection_name='users')
For a given User's entity you could retrieve preferred food with:
userXXX.get_food_I_like()
For a given Food's entity, you could retrieve users that like that food with:
foodYYY.get_users_who_like_me()
There's also another approach to handle many to many relationship storing a list of keys inside a db.ListProperty().
class Food(db.Model):
name = db.StringProperty()
class User(db.Model):
name = db.StringProperty()
food = db.ListProperty(db.Key)
Remember that ListProperty is limited to 5,000 keys, and again, you can't add useful properties that would fit perfectly in the join table (e.g. a number of stars representing how much a User likes a Food).
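Framework aside, the second approach is just an inverted index. A plain-Python sketch of the lookups being modelled here, using the sample data from the question:

```python
likes = {}  # food -> set of usernames (an inverted index)

def add_like(user, food):
    likes.setdefault(food, set()).add(user)

add_like('Alice', 'pizza')
add_like('Bob', 'pizza')
add_like('Alice', 'bacon')
add_like('Fred', 'bacon')

# Everyone who likes pizza:
print(sorted(likes['pizza']))  # ['Alice', 'Bob']

# Everyone who shares at least one food with Alice:
shared = set().union(*(users for users in likes.values() if 'Alice' in users))
print(sorted(shared - {'Alice'}))  # ['Bob', 'Fred']
```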
| modelling the google datastore/python | Hi I am trying to build an application which has models resembling something like the below ones:-(While it would be easy to merge the two models into one and use them , but that is not feasible in the actual app)
class User(db.Model):
username=db.StringProperty()
email=db.StringProperty()
class UserLikes(db.Model):
username=db.StringProperty()
food=db.StringProperty()
The objective- The user after logging in enters the food that he likes and the app in turn returns all the other users who like that food.
Now suppose a user Alice enters that she likes "Pizzas" , it gets stored in the datastore. She logs out and logs in again.At this point we query the datastore for the food that she likes and then query again for all users who like that food. This as you see are two datastore queries which is not the best way. I am sure there would definitely be a better way to do this. Can someone please help.
[Update:-Or can something like this be done that I change the second model such that usernames become a multivalued property in which all the users that like that food can be stored.. however I am a little unclear here]
[Edit:-Hi Thanks for replying but I found both the solutions below a bit of a overkill here. I tried doing it like below.Request you to have a look at this and kindly advice. I maintained the same two tables,however changed them like below:-
class User(db.Model):
username=db.StringProperty()
email=db.StringProperty()
class UserLikes(db.Model):
username=db.ListProperty(basestring)
food=db.StringProperty()
Now when 2 users update same food they like, it gets stored like
'pizza' ----> 'Alice','Bob'
And my db query to retrieve data becomes quite easy here
query=db.Query(UserLikes).filter('username =','Alice').get()
which I can then iterate over as something like
for elem in query.username:
print elem
Now if there are two foods like below:-
'pizza' ----> 'Alice','Bob'
'bacon'----->'Alice','Fred'
I use the same query as above , and iterate over the queries and then the usernames.
I am quite new to this , to realize that this just might be wrong. Please Suggest!
| [
"Beside the relation model you have, you could handle this in two other ways depending on your exact use case. You have a good idea in your update, use a ListProperty. Check out Brett Slatkin's taslk on Relation Indexes for some background.\nYou could use a child entity (Relation Index) on user that contains a li... | [
2,
1
] | [] | [] | [
"google_app_engine",
"google_cloud_datastore",
"python"
] | stackoverflow_0003995851_google_app_engine_google_cloud_datastore_python.txt |
Q:
Python: variable is not correctly initialized?
I wrote the following code in python
self._job = None
#slider
def sliderCallback(self):
if self._job:
And I get this error message
AttributeError: 'str' object has no attribute '_job'
why ? I thought I have initialized the variable before...
Update
Same issue with Timer variable
import Tkinter as tk
import vtk
from time import *
from threading import *
from vtk.tk import *
from Visualization import *
from Histogram import *
from ListItem import *
class UI(tk.Frame):
def build(self, root):
#left column
self.leftFrame = tk.Frame(root, width=400, height=550, bg="black") #.grid(column=4, row=0)
self.leftFrame.pack(fill=tk.BOTH, expand=True, side=tk.LEFT)
#right column
self.rightFrame = tk.Frame(root, width=400, height=550, bg="black") #.grid(column=4, row=0)
self.rightFrame.pack(fill=tk.BOTH, expand=True, side=tk.RIGHT)
#self.rightBottomFrame = tk.Frame(rightFrame, width=400, height=550, bg="red") #.grid(column=4, row=0)
#visualization
self.vis = Visualization(self.rightFrame, 400, 350, tk.RIGHT)
#self.vis.updateContourValue(400)
#left column
self.middleFrame = tk.Frame(root, width=400, height=550, bg="black") #.grid(column=0, columnspan=4, row=0)
self.middleFrame.pack(fill=tk.Y, expand=True)
#isosurfaces list
def addItem(color, volume, surface):
listitem = ListItem(self.listFrame, color, volume, surface)
self.listFrame = tk.Frame(self.middleFrame, width=400, height=500, bg="black") #.grid(column=0, columnspan=4, row=0)
self.listFrame.pack(fill=tk.BOTH, expand=True, side=tk.TOP)
addItem("#243", self.vis.getVolume(), self.vis.getSurface())
#preview
self.preview = Visualization(self.middleFrame, 200, 200, tk.BOTTOM)
#self.preview.updateContourValue(1500)
#histogram
self.histFrame = Histogram(self.leftFrame, 400, 400, tk.TOP, self.preview.getData())
#slider
def updateValue(self):
self.preview.updateContourValue(float(self.slider.get() ))
print "updated value"
self.timer = Timer(5.0, updateValue)
def sliderCallback(self):
self.timer.cancel()
self.timer.start() # after 30 seconds, "hello, world" will be printed
#if self._job:
#root.after_cancel(self._job)
#print "remove it"
#self._job = root.after(500, self.updateValue)
#def updateValue(value):
#print('horizontal: {v}'.format(v=value))
self.slider = tk.Scale(self.leftFrame, from_=0, to=256, orient=tk.HORIZONTAL, command=sliderCallback) #.grid(column=0, columnspan=3, row=1)
self.slider.pack(in_=self.leftFrame, fill=tk.X)
self.slider.set(200)
#add Isosurface button
def addIso():
addItem("#243", self.vis.getVolume(), self.vis.getSurface())
self.addButton = tk.Button(self.leftFrame, text="Add", command=addIso) #.grid(column=3, row=1)
self.addButton.pack(in_=self.leftFrame, side="right", pady=2)
A:
try this:
self.slider = tk.Scale(self.leftFrame, from_=0, to=256, orient=tk.HORIZONTAL, command=self.sliderCallback)
The difference is the self: when invoked, sliderCallback has to be bound to its instance to be useful.
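The error in the question can be reproduced without Tk at all; it comes from calling the method through the class with a string where the instance should be. A minimal sketch:

```python
class Widget(object):
    def __init__(self):
        self._job = None

    def callback(self):
        return self._job

w = Widget()
print(w.callback())  # None: bound method, self is the Widget instance

# Tkinter's Scale passes the slider value (a string) as the first
# positional argument; if the callback isn't bound, that string
# becomes `self`:
try:
    Widget.callback('200')
except AttributeError as e:
    print(e)  # 'str' object has no attribute '_job', as in the question
```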
A:
If this is happening in a class: try changing self._job = None to _job = None; at that step, self is not declared, I think.
A:
Edit:
With your newly provided code, there does not seem to be a definition of self._job or _job anywhere in the class.
Based on your current information, for some reason the function is receiving a string for self instead of an instance of the class you've defined.
A:
updateValue and sliderCallback must be defined as methods of the class. You defined them as local functions of the build method. In addition, consider kitti's answer.
| Python: variable is not correctly initialized? | I wrote the following code in python
self._job = None
#slider
def sliderCallback(self):
if self._job:
And I get this error message
AttributeError: 'str' object has no attribute '_job'
why ? I thought I have initialized the variable before...
Update
Same issue with Timer variable
import Tkinter as tk
import vtk
from time import *
from threading import *
from vtk.tk import *
from Visualization import *
from Histogram import *
from ListItem import *
class UI(tk.Frame):
def build(self, root):
#left column
self.leftFrame = tk.Frame(root, width=400, height=550, bg="black") #.grid(column=4, row=0)
self.leftFrame.pack(fill=tk.BOTH, expand=True, side=tk.LEFT)
#right column
self.rightFrame = tk.Frame(root, width=400, height=550, bg="black") #.grid(column=4, row=0)
self.rightFrame.pack(fill=tk.BOTH, expand=True, side=tk.RIGHT)
#self.rightBottomFrame = tk.Frame(rightFrame, width=400, height=550, bg="red") #.grid(column=4, row=0)
#visualization
self.vis = Visualization(self.rightFrame, 400, 350, tk.RIGHT)
#self.vis.updateContourValue(400)
#left column
self.middleFrame = tk.Frame(root, width=400, height=550, bg="black") #.grid(column=0, columnspan=4, row=0)
self.middleFrame.pack(fill=tk.Y, expand=True)
#isosurfaces list
def addItem(color, volume, surface):
listitem = ListItem(self.listFrame, color, volume, surface)
self.listFrame = tk.Frame(self.middleFrame, width=400, height=500, bg="black") #.grid(column=0, columnspan=4, row=0)
self.listFrame.pack(fill=tk.BOTH, expand=True, side=tk.TOP)
addItem("#243", self.vis.getVolume(), self.vis.getSurface())
#preview
self.preview = Visualization(self.middleFrame, 200, 200, tk.BOTTOM)
#self.preview.updateContourValue(1500)
#histogram
self.histFrame = Histogram(self.leftFrame, 400, 400, tk.TOP, self.preview.getData())
#slider
def updateValue(self):
self.preview.updateContourValue(float(self.slider.get() ))
print "updated value"
self.timer = Timer(5.0, updateValue)
def sliderCallback(self):
self.timer.cancel()
self.timer.start() # after 30 seconds, "hello, world" will be printed
#if self._job:
#root.after_cancel(self._job)
#print "remove it"
#self._job = root.after(500, self.updateValue)
#def updateValue(value):
#print('horizontal: {v}'.format(v=value))
self.slider = tk.Scale(self.leftFrame, from_=0, to=256, orient=tk.HORIZONTAL, command=sliderCallback) #.grid(column=0, columnspan=3, row=1)
self.slider.pack(in_=self.leftFrame, fill=tk.X)
self.slider.set(200)
#add Isosurface button
def addIso():
addItem("#243", self.vis.getVolume(), self.vis.getSurface())
self.addButton = tk.Button(self.leftFrame, text="Add", command=addIso) #.grid(column=3, row=1)
self.addButton.pack(in_=self.leftFrame, side="right", pady=2)
| [
"try this:\n\nself.slider = tk.Scale(self.leftFrame, from_=0, to=256, orient=tk.HORIZONTAL, command=self.sliderCallback)\n\nthe difference is the self, when invoked sliderCallback has to be bound to its instance to be useful.\n",
"if this is happening in class: try change self._job = None to _job = None - at that... | [
1,
0,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0003996142_python.txt |
Q:
HTML code processing
I want to process some HTML code and remove the tags as in the example:
"<p><b>This</b> is a very interesting paragraph.</p>" results in "This is a very interesting paragraph."
I'm using Python as technology; do you know any framework I may use to remove the HTML tags?
Thanks!
A:
This question may help you: Strip HTML from strings in Python
No matter what solution you choose, I'd recommend avoiding regular expressions. They can be slow when processing large strings, they might not work due to invalid HTML, and stripping HTML with regex isn't always secure or reliable.
A:
BeautifulSoup
A:
import libxml2
text = "<p><b>This</b> is a very interesting paragraph.</p>"
root = libxml2.parseDoc(text)
print root.content
# 'This is a very interesting paragraph.'
A:
Depending on your needs, you could just use the regular expression /<(.|\n)*?>/ and replace all matches with empty strings. This works perfectly for manual cases, but if you're building this as an application feature then you'll need a more robust and secure option.
A:
you can use lxml.
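Along the same lines as the parser-based answers, here is a tag-stripping sketch that needs nothing beyond the standard library (the module is html.parser in Python 3; in the Python 2 of this question it was spelled HTMLParser):

```python
from html.parser import HTMLParser   # Python 2: from HTMLParser import HTMLParser

class TagStripper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.pieces = []

    def handle_data(self, data):     # called with the text between tags
        self.pieces.append(data)

    def text(self):
        return "".join(self.pieces)

s = TagStripper()
s.feed("<p><b>This</b> is a very interesting paragraph.</p>")
print(s.text())   # This is a very interesting paragraph.
```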
| HTML code processing | I want to process some HTML code and remove the tags as in the example:
"<p><b>This</b> is a very interesting paragraph.</p>" results in "This is a very interesting paragraph."
I'm using Python as technology; do you know any framework I may use to remove the HTML tags?
Thanks!
| [
"This question may help you: Strip HTML from strings in Python\nNo matter what solution you choose, I'd recommend avoiding regular expressions. They can be slow when processing large strings, they might not work due to invalid HTML, and stripping HTML with regex isn't always secure or reliable.\n",
"BeautifulSou... | [
5,
4,
1,
1,
1
] | [] | [] | [
"html_parsing",
"python"
] | stackoverflow_0003998165_html_parsing_python.txt |
Q:
Remove differences between two folders
can anyone please suggest a method (ruby, python or dos preferable) to remove only the different files and sub-folders between two given folders?
I need it to recurse through sub-directories and delete everything that is different.
I don't wanna have to install anything, so a script would be great.
Thanks in advance
A:
Wouldn't rsync be the better solution? It supports everything you want and does it fast.
A:
You can use Python's difflib to tell what files differ, then os.unlink them. Really, if all you need is to tell if the files differ at all, you can just compare their text with:
for file1, file2 in files:
    f1 = open(file1, 'r')
    text1 = f1.read()
    f1.close()
    f2 = open(file2, 'r')
    text2 = f2.read()
    f2.close()
    if text1 != text2:
        os.unlink(file1)
        os.unlink(file2)
You can use os.walk to get lists of files. The above code is written without new things like with, since you didn't want to install things. If you have a new Python installation, you can make it a bit nicer.
A:
In Python, you can get the filenames using os.walk. Put each full pathname into a set and use the difference method to get the files and folders that are different.
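The standard library's filecmp module can also do the directory comparison directly; a sketch with throwaway directories (note that dircmp compares one directory level at a time, and you recurse through its .subdirs attribute):

```python
import filecmp
import os
import tempfile

a = tempfile.mkdtemp()
b = tempfile.mkdtemp()
for d, names in ((a, ("same.txt", "only_a.txt")), (b, ("same.txt", "only_b.txt"))):
    for name in names:
        with open(os.path.join(d, name), "w") as f:
            f.write("identical content")

diff = filecmp.dircmp(a, b)
print(diff.left_only)    # entries only in a: ['only_a.txt']
print(diff.right_only)   # entries only in b: ['only_b.txt']
print(diff.diff_files)   # common files whose contents differ: []
```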
A:
Ruby
folder1=ARGV[0]
folder2=ARGV[1]
f1=Dir["#{folder1}/**"].inject([]){|r,f|r<<File.basename(f)}
Dir["#{folder2}/**"].each{|f2|File.unlink(f2) if not f1.include?(File.basename(f2))}
A:
This is the kind of thing I have done when I wanted to diff directories:
#!/usr/bin/env python
import os, os.path
import stat
def traverse_path(start_dir='.'):
for root, dirs, files in os.walk(start_dir, topdown=False):
for f in files:
complete_path = os.path.join(root, f)
try:
m = os.stat(complete_path)[stat.ST_MODE]
if stat.S_ISREG(m):
yield complete_path[len(start_dir):]
except OSError, err:
print 'Skipping', complete_path
except IOError, err:
print 'Skipping', complete_path
if __name__ == '__main__':
s = set(traverse_path('/home/hughdbrown'))
t = set(traverse_path('/home.backup/hughdbrown'))
for e in s - t:
print e
print '-' * 25
for e in t - s:
print e
Notice that there is a check for regular files. I seem to recall that I encountered files used as semaphores or which were written to by one process and read by another or something. It turned out to be important.
You can add code to delete files, according to whatever rules you like.
| Remove differences between two folders | can anyone please suggest a method (ruby, python or dos preferable) to remove only the different files and sub-folders between two given folders?
I need it to recurse through sub-directories and delete everything that is different.
I don't wanna have to install anything, so a script would be great.
Thanks in advance
| [
"Wouldn't rsync be the better solution? It supports everything you want and does it fast.\n",
"You can use Python's difflib to tell what files differ, then os.unlink them. Really, if all you need is to tell if the files differ at all, you can just compare their text with:\nfor file1, file2 in files:\n f1 = ope... | [
1,
0,
0,
0,
0
] | [] | [] | [
"dos",
"python",
"ruby"
] | stackoverflow_0003997445_dos_python_ruby.txt |
Q:
Python Facebook login
How can I make a Python script which checks if I have logged in the Facebook? If I haven't, it should log me in.
A:
Being "logged in" to Facebook (or any system for that matter) is generally a contract between the server and the client - and not just a "flipped bit" on the server.
As an example, if you log into Facebook on your phone - you can't then pull up Facebook on your desktop machine and be logged in.
In short - no, I don't think so.
| Python Facebook login | How can I make a Python script which checks if I have logged in the Facebook? If I haven't, it should log me in.
| [
"Being \"logged in\" to Facebook (or any system for that matter) is generally a contract between the server and the client - and not just a \"flipped bit\" on the server.\nAs an example, if you log into Facebook on you phone - you can't then pull up Facebook on your desktop machine and be logged in.\nIn short - no,... | [
1
] | [] | [] | [
"facebook",
"python"
] | stackoverflow_0003998586_facebook_python.txt |
Q:
add rights to a folder using python
I want to give anyone full access to a specific folder (+sub-folders +files in it).
I tried that code:
f = "c:\test" #... which is the folder
#vars
sidWorld = win32security.CreateWellKnownSid(win32security.WinWorldSid, None)
worldRights = win32file.FILE_ALL_ACCESS
#get DACL
fileSecDesc = win32security.GetNamedSecurityInfo( \
f, win32security.SE_FILE_OBJECT, win32security.DACL_SECURITY_INFORMATION)
fileDacl = fileSecDesc.GetSecurityDescriptorDacl()
#add rights
fileDacl.AddAccessAllowedAce( win32security.ACL_REVISION, worldRights, sidWorld )
win32security.SetNamedSecurityInfo( \
f, win32security.SE_FILE_OBJECT, win32security.DACL_SECURITY_INFORMATION, \
None, None, fileDacl, None )
Problem is, it isn't shown as full access, I think because of the missing inheritance (I also need inheritance). I cannot figure out how to solve this.
Any idea what I'm missing?
Thanks, best regards,
Florian Lagg.
A:
Got it after a break: it's so easy:
#vars
sidWorld = win32security.CreateWellKnownSid(win32security.WinWorldSid, None)
worldRights = win32file.FILE_ALL_ACCESS
flags = win32security.OBJECT_INHERIT_ACE| \
win32security.CONTAINER_INHERIT_ACE
#get DACL
fileSecDesc = win32security.GetNamedSecurityInfo( \
f, win32security.SE_FILE_OBJECT, win32security.DACL_SECURITY_INFORMATION)
fileDacl = fileSecDesc.GetSecurityDescriptorDacl()
#add rights
fileDacl.AddAccessAllowedAceEx( \
win32security.ACL_REVISION_DS, \
flags, \
worldRights, \
sidWorld)
win32security.SetNamedSecurityInfo( \
f, win32security.SE_FILE_OBJECT, win32security.DACL_SECURITY_INFORMATION, \
None, None, fileDacl, None )
Therefore: SOLVED!
Thanks anyway!
| add rights to a folder using python | I want to give anyone full access to a specific folder (+sub-folders +files in it).
I tried that code:
f = "c:\test" #... which is the folder
#vars
sidWorld = win32security.CreateWellKnownSid(win32security.WinWorldSid, None)
worldRights = win32file.FILE_ALL_ACCESS
#get DACL
fileSecDesc = win32security.GetNamedSecurityInfo( \
f, win32security.SE_FILE_OBJECT, win32security.DACL_SECURITY_INFORMATION)
fileDacl = fileSecDesc.GetSecurityDescriptorDacl()
#add rights
fileDacl.AddAccessAllowedAce( win32security.ACL_REVISION, worldRights, sidWorld )
win32security.SetNamedSecurityInfo( \
f, win32security.SE_FILE_OBJECT, win32security.DACL_SECURITY_INFORMATION, \
None, None, fileDacl, None )
Problem is, it isn't shown as full access, I think because of the missing inheritance (I also need inheritance). I cannot figure out how to solve this.
Any idea what I'm missing?
Thanks, best regards,
Florian Lagg.
| [
"Got it after a break: it's so easy:\n#vars\nsidWorld = win32security.CreateWellKnownSid(win32security.WinWorldSid, None)\nworldRights = win32file.FILE_ALL_ACCESS\nflags = win32security.OBJECT_INHERIT_ACE| \\\n win32security.CONTAINER_INHERIT_ACE\n\n#get DACL\nfileSecDesc = win32security.GetNamedSecurityInfo( \\... | [
4
] | [] | [] | [
"python",
"pywin32"
] | stackoverflow_0003996863_python_pywin32.txt |
Q:
Why my code doesn't work?
Why doesn't this work?
for i in [a, b, c]:
i.SetBitmap(wx.Bitmap(VarFiles[str(i)]))
I get:
Traceback (most recent call last):
File "<string>", line 11, in ?
File "codecc.py", line 724, in ?
app = MyApp(0) # stdio to console; nothing = stdio to its own window
File "C:\Program Files (x86)\WorldViz\Vizard30\bin\lib\site-packages\wx-2.8-msw-unicode\wx\_core.py", line 7978, in __init__
self._BootstrapApp()
File "C:\Program Files (x86)\WorldViz\Vizard30\bin\lib\site-packages\wx-2.8-msw-unicode\wx\_core.py", line 7552, in _BootstrapApp
return _core_.PyApp__BootstrapApp(*args, **kwargs)
File "codecc.py", line 719, in OnInit
frame = VFrame(parent=None)
File "codecc.py", line 374, in __init__
i.SetBitmap(wx.Bitmap(VarFiles[str(i)]))
KeyError: "<wx._core.MenuItem; proxy of <Swig Object of type 'wxMenuItem *' at 0x165aeab0> >"
Interestingly, this works:
i.SetBitmap(wx.Bitmap(VarFiles["i"]))
but this doesn't:
i.SetBitmap(wx.Bitmap(VarFiles[i]))
The last one returns a wxPython object with the same name as i, thus breaking the loop. So I need to find a way of returning the name of this object. But i.__name__ doesn't work.
A:
As the traceback says you have a KeyError. Since i is an object when you do str(i) you get "<wx._core.MenuItem; proxy of <Swig Object of type 'wxMenuItem *' at 0x165aeab0> >", such key doesn't exist in a VarFiles container.
It has nothing whatsoever to do with the for loop or the way you write your list.
A:
Break it down using a single case. Where is the error in this?
s = str(a)
v = VarFiles[s]
w = wx.Bitmap(v)
a.SetBitmap(w)
A:
This is how I """"fixed"""" my code:
list_a = [a, b, c]
list_b = ["a", "b", "c"]
[i.SetBitmap(wx.Bitmap(VarFiles[list_b[list_a.index(i)]])) for i in list_a]
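A widget-free sketch of a tidier version of that fix: pair each object with its key using zip instead of index lookups. The Item class below is a stand-in for the wx menu items, and the wx.Bitmap wrapper is omitted so the sketch runs without wxPython:

```python
VarFiles = {"a": "a.png", "b": "b.png", "c": "c.png"}

class Item(object):                 # stand-in for the wx.MenuItem objects
    def SetBitmap(self, bitmap):
        self.bitmap = bitmap

a, b, c = Item(), Item(), Item()
for obj, key in zip([a, b, c], ["a", "b", "c"]):
    obj.SetBitmap(VarFiles[key])    # real code: obj.SetBitmap(wx.Bitmap(VarFiles[key]))

print(a.bitmap, c.bitmap)   # a.png c.png
```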
| Why my code doesn't work? | Why doesn't this work?
for i in [a, b, c]:
i.SetBitmap(wx.Bitmap(VarFiles[str(i)]))
I get:
Traceback (most recent call last):
File "<string>", line 11, in ?
File "codecc.py", line 724, in ?
app = MyApp(0) # stdio to console; nothing = stdio to its own window
File "C:\Program Files (x86)\WorldViz\Vizard30\bin\lib\site-packages\wx-2.8-msw-unicode\wx\_core.py", line 7978, in __init__
self._BootstrapApp()
File "C:\Program Files (x86)\WorldViz\Vizard30\bin\lib\site-packages\wx-2.8-msw-unicode\wx\_core.py", line 7552, in _BootstrapApp
return _core_.PyApp__BootstrapApp(*args, **kwargs)
File "codecc.py", line 719, in OnInit
frame = VFrame(parent=None)
File "codecc.py", line 374, in __init__
i.SetBitmap(wx.Bitmap(VarFiles[str(i)]))
KeyError: "<wx._core.MenuItem; proxy of <Swig Object of type 'wxMenuItem *' at 0x165aeab0> >"
Interestingly, this works:
i.SetBitmap(wx.Bitmap(VarFiles["i"]))
but this doesn't:
i.SetBitmap(wx.Bitmap(VarFiles[i]))
The last one returns a wxPython object with the same name as i, thus breaking the loop. So I need to find a way of returning the name of this object. But i.__name__ doesn't work.
| [
"As the traceback says you have a KeyError. Since i is an object when you do str(i) you get \"<wx._core.MenuItem; proxy of <Swig Object of type 'wxMenuItem *' at 0x165aeab0> >\", such key doesn't exist in a VarFiles container.\nIt has nothing whatsoever to do with the for loop or the way you write your list.\n",
... | [
1,
1,
0
] | [] | [] | [
"python",
"wxpython"
] | stackoverflow_0003998561_python_wxpython.txt |
Q:
How to convert colors into strings with the same size?
I'm converting color values to string values, however when I get 0, the string is too short and I have to manually add "00" after it.
Whats the most elegant way to solve this issue in Python ?
print "#" + str(self.R) + str(self.G) + str(self.B)
if self.R is 0 then I get a too short string.
A:
Use formatting strings:
"#%02X%02X%02X" % (self.r, self.g, self.b)
will give you what you probably want. If you actually want your values to be in decimal like in your example, use "%03d" instead.
A:
Assuming you are using Python 2.6 or newer you can use str.format:
print "#{0:02}{1:02}{2:02}".format(self.R, self.G, self.B)
If you want hexadecimal (probably you do) then add an X:
print "#{0:02X}{1:02X}{2:02X}".format(self.R, self.G, self.B)
A:
Format can also resolve qualified attributes:
print('#{s.R:02x}{s.G:02x}{s.B:02x}'.format(s=self))
A:
print "#" + str(self.R).rjust(2,"0") + str(self.G).rjust(2,"0") +
str(self.B).rjust(2,"0")
will work fine. It will turn 0 into 00 and 1 into 01 etc.
| How to convert colors into strings with the same size? | I'm converting color values to string values, however when I get 0, the string is too short and I have to manually add "00" after it.
Whats the most elegant way to solve this issue in Python ?
print "#" + str(self.R) + str(self.G) + str(self.B)
if self.R is 0 then I get a too short string.
| [
"Use formatting strings:\n\"#%02X%02X%02X\" % (self.r, self.g, self.b)\nwill give you what you probably want. If you actually want your values to be in decimal like in your example, use \"%03d\" instead.\n",
"Assuming you are using Python 2.6 or newer you can use str.format:\nprint \"#{0:02}{1:02}{2:02}\".format(... | [
4,
4,
3,
2
] | [] | [] | [
"python"
] | stackoverflow_0003999145_python.txt |
Q:
Detect when column in gtk.treeview is resized
What signal can I catch to detect when a column changes size in a gtk.TreeView? I can't seem to find it in the docs.
A:
gtk.TreeViewColumns aren't widgets so they unfortunately don't have a dedicated signal for size changes. But you can register a callback function that receives "width" change notifications:
def onColWidthChange(col, width):
# Note that "width" is a GParamInt object, not an integer
...
col.connect("notify::width", onColWidthChange)
In the example, col must be a gtk.TreeViewColumn object. If you don't initialize the columns in code, you can use gtk.TreeView.get_column to get these objects.
If you only need notifications when the treeview changes its size, you can use its "size-allocate" signal instead.
| Detect when column in gtk.treeview is resized | What signal can I catch to detect when a column changes size in a gtk.TreeView? I can't seem to find it in the docs.
| [
"gtk.TreeViewColumns aren't widgets so they unfortunately don't have a dedicated signal for size changes. But you can register a callback function that receives \"width\" change notifications:\ndef onColWidthChange(col, width):\n # Note that \"width\" is a GParamInt object, not an integer\n ...\n\ncol.connect... | [
7
] | [] | [] | [
"gtk",
"gtktreeview",
"pygtk",
"python",
"signals"
] | stackoverflow_0003999146_gtk_gtktreeview_pygtk_python_signals.txt |
Q:
What does process exit status 3 mean?
I've seen the usage of exit status 3 in several python scripts that restart processes. As far as I know the convention is only about 0 and "not 0" on Unix/Linux.
Is there a convention defining other values like 3.
A:
At least in the old days, a return value of 1 generally meant a hard error and value 2 was usually reserved for problems with the command line arguments — it meant that the user had made an error, not the program. But beyond that: no, no convention; and even that slight convention was not universal. Like dashes in front of command-line arguments, which some versions of ps(1) let you omit, return codes were just convention. In general, read the docs (or the source!) to the script you're running and you then have to write error-code checking code to its specific meanings.
A:
There is no convention for non-zero values; they are commonly used to communicate the reason for termination, and it's up to each application to define the mapping between error codes and reasons. In the case you're linking to you can clearly see, a few lines above the check for exit code 3, that it is used to indicate that the code has changed.
I.e. in this case this will give the behaviour that the automatic restart is done as long as the reason to terminate was that the code changed, and nothing else.
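A sketch of the supervisor pattern such restart scripts use: the parent reruns the child only while the child exits with the agreed-upon status (3 here is pure convention between the two processes, not a system-wide meaning). The child commands below are stand-ins that simply exit with a chosen code:

```python
import subprocess
import sys

CODE_CHANGED = 3   # arbitrary value the parent and child agree on

# First run simulates "my code changed, restart me"; the rerun exits cleanly.
runs = [
    [sys.executable, "-c", "import sys; sys.exit(3)"],
    [sys.executable, "-c", "import sys; sys.exit(0)"],
]

for cmd in runs:
    rc = subprocess.call(cmd)
    if rc != CODE_CHANGED:      # any status other than 3: stop supervising
        break
    print("exit status 3: restarting")

print(rc)   # 0
```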
A:
BSD tried to standardize exit codes, but it didn't (hasn't yet?) catch on:
sysexits(3)
A:
In this case, its unclear. foret's suggestion is one I would totally do if the developer is still around.
The Advanced Bash Scripting Guide lists some common exit codes with special meanings.
| What does process exit status 3 mean? | I've seen the usage of exit status 3 in several python scripts that restart processes. As far as I know the convention is only about 0 and "not 0" on Unix/Linux.
Is there a convention defining other values like 3.
| [
"At least in the old days, a return value of 1 generally meant a hard error and value 2 was usually reserved for problems with the command line arguments — it meant that the user had made an error, not the program. But beyond that: no, no convention; and even that slight convention was not universal. Like dashes in... | [
10,
4,
0,
0
] | [] | [] | [
"linux",
"process",
"python",
"status",
"unix"
] | stackoverflow_0003996061_linux_process_python_status_unix.txt |
Q:
Raise child_exception Errno 2
I'm attempting to convert a C header into a Python library using ctypes and ctypeslib. I'm running Python 2.7, on OSX 10.6.4 (Snow Leopard).
The header-file I am converting is mcbcio32.h, located in /header/mcbcio32.h
I wish to create an xml output in the same folder, named mcbcio32.xml.
I run h2xml.py (which converts the c header into an wrapped xml file) from the ctypeslib folder with the following command:
$ python h2xml.py /header/mcbcio32.h -o mcbcio32.xml -q -c
Output:
Traceback (most recent call last):
File "h2xml.py", line 92, in <module>
sys.exit(main())
File "h2xml.py", line 86, in main
compile_to_xml(argv)
File "h2xml.py", line 79, in compile_to_xml
parser.parse(files)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ctypeslib/codegen/cparser.py", line 306, in parse
self.create_final_xml(include_files, types, None)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ctypeslib/codegen/cparser.py", line 265, in create_final_xml
self.create_xml(source, xmlfile)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ctypeslib/codegen/cparser.py", line 97, in create_xml
stdin=subprocess.PIPE)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 672, in __init__
errread, errwrite)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1201, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
From what I can tell, three main scripts are called upon, h2xml.py, cparser.py, and finally subprocess.py. All of these scripts are written by Python developers, and thus I'm guessing the error lies somewhere in how I run the issuing command.
h2xml.py Code:
"""h2xml - convert C include file(s) into an xml file by running gccxml."""
import sys, os, ConfigParser
from ctypeslib.codegen import cparser
from optparse import OptionParser
def compile_to_xml(argv):
def add_option(option, opt, value, parser):
parser.values.gccxml_options.extend((opt, value))
# Hm, should there be a way to disable the config file?
# And then, this should be done AFTER the parameters are processed.
config = ConfigParser.ConfigParser()
try:
config.read("h2xml.cfg")
except ConfigParser.ParsingError, detail:
print >> sys.stderr, detail
return 1
parser = OptionParser("usage: %prog includefile ... [options]")
parser.add_option("-q", "--quiet",
dest="quiet",
action="store_true",
default=False)
parser.add_option("-D",
type="string",
action="callback",
callback=add_option,
dest="gccxml_options",
help="macros to define",
metavar="NAME[=VALUE]",
default=[])
parser.add_option("-U",
type="string",
action="callback",
callback=add_option,
help="macros to undefine",
metavar="NAME")
parser.add_option("-I",
type="string",
action="callback",
callback=add_option,
dest="gccxml_options",
help="additional include directories",
metavar="DIRECTORY")
parser.add_option("-o",
dest="xmlfile",
help="XML output filename",
default=None)
parser.add_option("-c", "--cpp-symbols",
dest="cpp_symbols",
action="store_true",
help="try to find #define symbols - this may give compiler errors, " \
"so it's off by default.",
default=False)
parser.add_option("-k",
dest="keep_temporary_files",
action="store_true",
help="don't delete the temporary files created "\
"(useful for finding problems)",
default=False)
options, files = parser.parse_args(argv[1:])
if not files:
print "Error: no files to process"
print >> sys.stderr, __doc__
return 1
options.flags = options.gccxml_options
options.verbose = not options.quiet
parser = cparser.IncludeParser(options)
parser.parse(files)
def main(argv=None):
if argv is None:
argv = sys.argv
try:
compile_to_xml(argv)
except cparser.CompilerError, detail:
print >> sys.stderr, "CompilerError:", detail
return 1
if __name__ == "__main__":
sys.exit(main())
I have GCC-XML installed, and the ctypeslib was pasted into:
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/
(my local Python library path)
If any additional information would be useful, please let me know. Thanks in advance for any help.
A:
The problem was with the installation of gcc-xml. Make sure when installing gcc-xml to also install the CMake build tool (cmake.org). Properly running cmake on the gcc-xml folder solved all the directory problems I was experiencing.
| Raise child_exception Errno 2 | I'm attempting to convert a C header into a Python library using ctypes and ctypeslib. I'm running Python 2.7, on OSX 10.6.4 (Snow Leopard).
The header-file I am converting is mcbcio32.h, located in /header/mcbcio32.h
I wish to create an xml output in the same folder, named mcbcio32.xml.
I run h2xml.py (which converts the c header into an wrapped xml file) from the ctypeslib folder with the following command:
$ python h2xml.py /header/mcbcio32.h -o mcbcio32.xml -q -c
Output:
Traceback (most recent call last):
File "h2xml.py", line 92, in <module>
sys.exit(main())
File "h2xml.py", line 86, in main
compile_to_xml(argv)
File "h2xml.py", line 79, in compile_to_xml
parser.parse(files)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ctypeslib/codegen/cparser.py", line 306, in parse
self.create_final_xml(include_files, types, None)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ctypeslib/codegen/cparser.py", line 265, in create_final_xml
self.create_xml(source, xmlfile)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ctypeslib/codegen/cparser.py", line 97, in create_xml
stdin=subprocess.PIPE)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 672, in __init__
errread, errwrite)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1201, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
From what I can tell, three main scripts are called upon, h2xml.py, cparser.py, and finally subprocess.py. All of these scripts are written by Python developers, and thus I'm guessing the error lies somewhere in how I run the issuing command.
h2xml.py Code:
"""h2xml - convert C include file(s) into an xml file by running gccxml."""
import sys, os, ConfigParser
from ctypeslib.codegen import cparser
from optparse import OptionParser
def compile_to_xml(argv):
def add_option(option, opt, value, parser):
parser.values.gccxml_options.extend((opt, value))
# Hm, should there be a way to disable the config file?
# And then, this should be done AFTER the parameters are processed.
config = ConfigParser.ConfigParser()
try:
config.read("h2xml.cfg")
except ConfigParser.ParsingError, detail:
print >> sys.stderr, detail
return 1
parser = OptionParser("usage: %prog includefile ... [options]")
parser.add_option("-q", "--quiet",
dest="quiet",
action="store_true",
default=False)
parser.add_option("-D",
type="string",
action="callback",
callback=add_option,
dest="gccxml_options",
help="macros to define",
metavar="NAME[=VALUE]",
default=[])
parser.add_option("-U",
type="string",
action="callback",
callback=add_option,
help="macros to undefine",
metavar="NAME")
parser.add_option("-I",
type="string",
action="callback",
callback=add_option,
dest="gccxml_options",
help="additional include directories",
metavar="DIRECTORY")
parser.add_option("-o",
dest="xmlfile",
help="XML output filename",
default=None)
parser.add_option("-c", "--cpp-symbols",
dest="cpp_symbols",
action="store_true",
help="try to find #define symbols - this may give compiler errors, " \
"so it's off by default.",
default=False)
parser.add_option("-k",
dest="keep_temporary_files",
action="store_true",
help="don't delete the temporary files created "\
"(useful for finding problems)",
default=False)
options, files = parser.parse_args(argv[1:])
if not files:
print "Error: no files to process"
print >> sys.stderr, __doc__
return 1
options.flags = options.gccxml_options
options.verbose = not options.quiet
parser = cparser.IncludeParser(options)
parser.parse(files)
def main(argv=None):
if argv is None:
argv = sys.argv
try:
compile_to_xml(argv)
except cparser.CompilerError, detail:
print >> sys.stderr, "CompilerError:", detail
return 1
if __name__ == "__main__":
sys.exit(main())
I have GCC-XML installed, and the ctypeslib was pasted into:
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/
(my local Python library path)
If any additional information would be useful, please let me know. Thanks in advance for any help.
| [
"The problem was with the installation of gcc-xml. Make sure when installing gcc-xml to also install the cmake function (cmake.org). Properly using cmake on the gcc install folder solved all the directory problems I was experiencing. \n"
] | [
1
] | [] | [] | [
"ctypes",
"header_files",
"python"
] | stackoverflow_0003833627_ctypes_header_files_python.txt |
Q:
Help me understand my mod_wsgi Django config file
I was wondering why this works:
sys.path.append('/home/user/django')
sys.path.append('/home/user/django/mysite')
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
but this doesn't?
sys.path.append('/home/user/django')
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
I thought that adding the django folder would automatically make all projects/folders in it available to Python? But apparently I have to add the project as well, or it gives me the error 'settings not found'.
Notice that it doesn't say 'mysite.settings not found' which would indicate it does find my 'mysite' folder..
A:
Maybe if in your settings.py you have an import to a module that's inside the mysite directory, this import fails and that's why you get the ImportError.
A:
Have a look at http://codespatter.com/2009/04/10/how-to-add-locations-to-python-path-for-reusable-django-apps/
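A Django-free sketch of the import mechanics behind this: `import mysite.settings` needs the parent of mysite on sys.path, while a bare `import settings` (which old-style Django project code sometimes performs internally, producing a plain "settings not found" error) needs the mysite directory itself on the path. The directory layout below is a throwaway stand-in for /home/user/django:

```python
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()                    # stands in for /home/user/django
pkg = os.path.join(root, "mysite")
os.mkdir(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "settings.py"), "w") as f:
    f.write("DEBUG = True\n")

sys.path.append(root)                        # parent dir: mysite.settings importable
qualified = importlib.import_module("mysite.settings")
print(qualified.DEBUG)                       # True

sys.path.append(pkg)                         # project dir: bare "settings" importable
bare = importlib.import_module("settings")
print(bare.DEBUG)                            # True
```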
| Help me understand my mod_wsgi Django config file | I was wondering why this works:
sys.path.append('/home/user/django')
sys.path.append('/home/user/django/mysite')
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
but this doesn't?
sys.path.append('/home/user/django')
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
I thought that adding the django folder would automatically make all projects/folders in it available to Python? But apparently I have to add the project as well, or it gives me the error 'settings not found'.
Notice that it doesn't say 'mysite.settings not found' which would indicate it does find my 'mysite' folder..
| [
"Maybe if in your settings.py you have an import to a module that's inside the mysite directory, this import fails and that's why you get the ImportError.\n",
"Have a look at http://codespatter.com/2009/04/10/how-to-add-locations-to-python-path-for-reusable-django-apps/\n"
] | [
0,
0
] | [] | [] | [
"django",
"mod_wsgi",
"python"
] | stackoverflow_0003995830_django_mod_wsgi_python.txt |
Q:
Can't get argspec for Python callables?
I'm playing with Python callable. Basically you can define a python class and implement __call__ method to make the instance of this class callable. e.g.,
class AwesomeFunction(object):
def __call__(self, a, b):
return a+b
Module inspect has a function getargspec, which gives you the argument specification of a function. However, it seems I cannot use it on a callable object:
fn = AwesomeFunction()
import inspect
inspect.getargspec(fn)
Unfortunately, I got a TypeError:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.6/inspect.py", line 803, in getargspec
raise TypeError('arg is not a Python function')
TypeError: arg is not a Python function
I think it's quite unfortunate that you can't treat any callable object as function, unless I'm doing something wrong here?
A:
If you need this functionality, it is absolutely trivial to write a wrapper function that will check to see if fn has an attribute __call__ and if it does, pass its __call__ function to getargspec.
A:
If you look at the definition of getargspec in the inspect module code on svn.python.org. You will see that it calls isfunction which itself calls:
isinstance(object, types.FunctionType)
Since, your AwesomeFunction clearly is not an instance of types.FunctionType it fails.
If you want it to work you should try the following:
inspect.getargspec(fn.__call__)
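A sketch of the wrapper function the first answer suggests, written here with inspect.signature (which replaced getargspec in later Pythons and, conveniently, drops self from the bound __call__ automatically):

```python
import inspect

def params_of(obj):
    # Plain functions and methods are inspectable directly; for other
    # callables, fall back to their __call__ method.
    if inspect.isfunction(obj) or inspect.ismethod(obj):
        target = obj
    else:
        target = obj.__call__
    return list(inspect.signature(target).parameters)

class AwesomeFunction(object):
    def __call__(self, a, b):
        return a + b

print(params_of(AwesomeFunction()))   # ['a', 'b']
print(params_of(lambda x, y=1: x))    # ['x', 'y']
```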
A:
__call__ defines something that can be called by a class instance. You're not giving getargspec a valid function because you're passing a class instance to it.
The difference between __init__ and __call__ is this:
fn = AwesomeFunction() # call to __init__
fn(1, 2) # call to __call__
| Can't get argspec for Python callables? | I'm playing with Python callables. Basically, you can define a python class and implement the __call__ method to make instances of that class callable. e.g.,
class AwesomeFunction(object):
def __call__(self, a, b):
return a+b
Module inspect has a function getargspec, which gives you the argument specification of a function. However, it seems I cannot use it on a callable object:
fn = AwesomeFunction()
import inspect
inspect.getargspec(fn)
Unfortunately, I got a TypeError:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.6/inspect.py", line 803, in getargspec
raise TypeError('arg is not a Python function')
TypeError: arg is not a Python function
I think it's quite unfortunate that you can't treat any callable object as a function, unless I'm doing something wrong here?
| [
"If you need this functionality, it is absolutely trivial to write a wrapper function that will check to see if fn has an attribute __call__ and if it does, pass its __call__ function to getargspec.\n",
"If you look at the definition of getargspec in the inspect module code on svn.python.org, you will see that it... | [
10,
7,
0
] | [] | [] | [
"inspection",
"python",
"reflection"
] | stackoverflow_0003999463_inspection_python_reflection.txt |
Q:
Suggestions for writing this small piece of code more elegantly
Although it looks terrible, I am not finding a nicer/more efficient way of doing this:
ae = np.arange(0.0,1,0.05)
aee = np.arange(0.3,1.01,0.345)
aef = np.arange(0.3,1.01,0.345)
random.shuffle(ae)
random.shuffle(aee)
random.shuffle(aef)
for item_a in aee:
for item_b in ae:
for item_c in aef:
rlist.append(colorsys.hsv_to_rgb(item_b,item_a,item_c))
Ideas?
A:
If you do not want to shuffle the rlist, but the initial lists, then you can try to put the last four lines into a list comprehension:
rlist = [ colorsys.hsv_to_rgb(b, a, c) for c in aef for b in ae for a in aee ]
A:
import numpy as np
import random
import itertools
import colorsys
hue, saturation, value = np.arange(0.0,1,0.05), np.arange(0.3,1.01,0.345), np.arange(0.3,1.01,0.345)
rlist= [colorsys.hsv_to_rgb(hue, saturation, value) for hue, saturation, value in
itertools.product(random.sample(hue,len(hue)), random.sample(saturation, len(saturation)), random.sample(value, len(value)))]
print rlist
EDIT: random.sample from full population to avoid inplace separate shuffles
The version without itertools:
# without itertools
import numpy as np
import random
from pprint import pprint
import colorsys
hues, saturations, values = np.arange(0.0,1,0.05), np.arange(0.3,1.01,0.345), np.arange(0.3,1.01,0.345)
rlist= [colorsys.hsv_to_rgb(hue, saturation, value)
for hue in random.sample(hues,len(hues))
for saturation in random.sample(saturations, len(saturations))
for value in random.sample(values, len(values))]
pprint(rlist)
You can also include the definition of itertools.product from the documentation (I did that in a module called it.py on my server and used it instead of itertools):
product = None
from itertools import *
if not product:
def product(*args, **kwds):
# product('ABCD', 'xy') --> Ax Ay Bx By Cx Cy Dx Dy
# product(range(2), repeat=3) --> 000 001 010 011 100 101 110 111
pools = map(tuple, args) * kwds.get('repeat', 1)
result = [[]]
for pool in pools:
result = [x+[y] for x in result for y in pool]
for prod in result:
yield tuple(prod)
I use itertools normally as:
import itertools as it
But in the server it is replaced by
import it
A:
You don't need to shuffle each list in the first place, because you will do a Cartesian product of them anyway;
import itertools
import colorsys
hsv_iter = itertools.product(np.arange(0, 1, 0.05),
                             np.arange(0.3, 1.01, 0.345),
                             np.arange(0.3, 1.01, 0.345))
rlist = [colorsys.hsv_to_rgb(hue, lightness, saturation)
         for hue, lightness, saturation in hsv_iter]
# you can shuffle now the list if you want
random.shuffle(rlist)
A:
Stupid oneliner:
rlist = [colorsys.hsv_to_rgb(b, a, c) for c in random.sample(aef,len(aef))
for b in random.sample(ae,len(ae))
for a in random.sample(aee,len(aee))]
random.sample(x,len(x)) (new in 2.3) is pretty much equivalent to random.shuffle(x), but it returns a randomized copy of the list instead of None.
This is probably much slower than shuffling or whatever else, and you don't keep a copy of the randomized lists (if you care).
| Suggestions for writing this small piece of code more elegantly | Although it looks terrible, I am not finding a nicer/more efficient way of doing this:
ae = np.arange(0.0,1,0.05)
aee = np.arange(0.3,1.01,0.345)
aef = np.arange(0.3,1.01,0.345)
random.shuffle(ae)
random.shuffle(aee)
random.shuffle(aef)
for item_a in aee:
for item_b in ae:
for item_c in aef:
rlist.append(colorsys.hsv_to_rgb(item_b,item_a,item_c))
Ideas?
| [
"If you do not want to shuffle the rlist, but the initial lists, then you can try to put the last four lines into a list comprehension:\nrlist = [ colorsys.hsv_to_rgb(b, a, c) for c in aef for b in ae for a in aee ] \n\n",
"import numpy as np\nimport random\nimport itertools\nimport colorsys\nhue, saturation, val... | [
4,
4,
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0003971916_python.txt |
Q:
How can I write a python script equivalent of mdfind using PyObjC bindings and NSMetadataQuery?
I want to write the python equivalent of mdfind. I want to use the .Spotlight-V100 metadata and I cannot find a description for the metadata db format used, but NSMetadataQuery seems to be what I need. I'd like to do this in python using the built in Obj-C bindings, but have not been able to figure out the correct incantation to get it to work. Not sure if the problem is the asynchronous nature of the call or I'm just wiring things together incorrectly.
A simple example giving the equivalent of "mdfind " would be fine for a start.
A:
I got a very simple version working. I don't quite have the predicate correct, as the equivalent mdfind call has additional results. Also, it requires two args, the first is the base pathname to work from with the second being the search term.
Here is the code:
from Cocoa import *
import sys
query = NSMetadataQuery.alloc().init()
query.setPredicate_(NSPredicate.predicateWithFormat_("(kMDItemTextContent = \"" + sys.argv[2] + "\")"))
query.setSearchScopes_(NSArray.arrayWithObject_(sys.argv[1]))
query.startQuery()
NSRunLoop.currentRunLoop().runUntilDate_(NSDate.dateWithTimeIntervalSinceNow_(5))
query.stopQuery()
print "count: ", len(query.results())
for item in query.results():
print "item: ", item.valueForAttribute_("kMDItemPath")
The query call is asynchronous, so to be more complete, I should register a callback and have the run loop go continuously. As it is, I do a search for 5 seconds, so if we have a query that would take longer, we will get only partial results.
| How can I write a python script equivalent of mdfind using PyObjC bindings and NSMetadataQuery? | I want to write the python equivalent of mdfind. I want to use the .Spotlight-V100 metadata and I cannot find a description for the metadata db format used, but NSMetadataQuery seems to be what I need. I'd like to do this in python using the built in Obj-C bindings, but have not been able to figure out the correct incantation to get it to work. Not sure if the problem is the asynchronous nature of the call or I'm just wiring things together incorrectly.
A simple example giving the equivalent of "mdfind " would be fine for a start.
| [
"I got a very simple version working. I don't quite have the predicate correct, as the equivalent mdfind call has additional results. Also, it requires two args, the first is the base pathname to work from with the second being the search term.\nHere is the code:\nfrom Cocoa import *\n\nimport sys\n\nquery = NSMeta... | [
1
] | [] | [] | [
"macos",
"pyobjc",
"python",
"spotlight"
] | stackoverflow_0003992003_macos_pyobjc_python_spotlight.txt |
Q:
Non-critical unittest failures
I'm using Python's built-in unittest module and I want to write a few tests that are not critical.
I mean, if my program passes such tests, that's great! However, if it doesn't pass, it's not really a problem, the program will still work.
For example, my program is designed to work with a custom type "A". If it fails to work with "A", then it's broken. However, for convenience, most of it should also work with another type "B", but that's not mandatory. If it fails to work with "B", then it's not broken (because it still works with "A", which is its main purpose). Failing to work with "B" is not critical, I will just miss a "bonus feature" I could have.
Another (hypothetical) example is when writing an OCR. The algorithm should recognize most images from the tests, but it's okay if some of them fail. (and no, I'm not writing an OCR)
Is there any way to write non-critical tests in unittest (or other testing framework)?
A:
As a practical matter, I'd probably use print statements to indicate failure in that case. A more correct solution is to use warnings:
http://docs.python.org/library/warnings.html
You could, however, use the logging facility to generate a more detailed record of your test results (i.e. set your "B" class failures to write warnings to the logs).
http://docs.python.org/library/logging.html
Edit:
The way we handle this in Django is that we have some tests we expect to fail, and we have others that we skip based on the environment. Since we can generally predict whether a test SHOULD fail or pass (i.e. if we can't import a certain module, the system doesn't have it, and so the test won't work), we can skip failing tests intelligently. This means that we still run every test that will pass, and have no tests that "might" pass. Unit tests are most useful when they do things predictably, and being able to detect whether or not a test SHOULD pass before we run it makes this possible.
A:
Asserts in unit tests are binary: they will work or they will fail, there's no mid-term.
Given that, to create those "non-critical" tests you should not use assertions when you don't want the tests to fail. You should do this carefully so you don't compromise the "usefulness" of the test.
My advice to your OCR example is that you use something to record the success rate in your tests code and then create one assertion like: "assert success_rate > 8.5", and that should give the effect you desire.
A:
Thank you for the great answers. No single answer was really complete, so I'm writing here a combination of all the answers that helped me. If you like this answer, please vote up the people who were responsible for it.
Conclusions
Unit tests (or at least unit tests in the unittest module) are binary. As Guilherme Chapiewski says: they will work or they will fail, there's no mid-term.
Thus, my conclusion is that unit tests are not exactly the right tool for this job. It seems that unit tests are more concerned with "keep everything working, no failure is expected", and thus I can't (or it's not easy to) have non-binary tests.
So, unit tests don't seem to be the right tool if I'm trying to improve an algorithm or an implementation, because they can't tell me how much better one version is compared to the other (supposing both are correctly implemented, in which case both will pass all unit tests).
My final solution
My final solution is based on ryber's idea and the code shown in wcoenen's answer. I'm basically extending the default TextTestRunner and making it less verbose. Then my main code calls two test suites: the critical one using the standard TextTestRunner, and the non-critical one with my own less-verbose version.
class _TerseTextTestResult(unittest._TextTestResult):
def printErrorList(self, flavour, errors):
for test, err in errors:
#self.stream.writeln(self.separator1)
self.stream.writeln("%s: %s" % (flavour,self.getDescription(test)))
#self.stream.writeln(self.separator2)
#self.stream.writeln("%s" % err)
class TerseTextTestRunner(unittest.TextTestRunner):
def _makeResult(self):
return _TerseTextTestResult(self.stream, self.descriptions, self.verbosity)
if __name__ == '__main__':
sys.stderr.write("Running non-critical tests:\n")
non_critical_suite = unittest.TestLoader().loadTestsFromTestCase(TestSomethingNonCritical)
TerseTextTestRunner(verbosity=1).run(non_critical_suite)
sys.stderr.write("\n")
sys.stderr.write("Running CRITICAL tests:\n")
suite = unittest.TestLoader().loadTestsFromTestCase(TestEverythingImportant)
unittest.TextTestRunner(verbosity=1).run(suite)
Possible improvements
It should still be useful to know if there is any testing framework with non-binary tests, like Kathy Van Stone suggested. Probably I won't use it this simple personal project, but it might be useful on future projects.
A:
I'm not totally sure how unittest works, but most unit testing frameworks have something akin to categories. I suppose you could just categorize such tests, mark them to be ignored, and then run them only when you're interested in them. But I know from experience that ignored tests very quickly become just that: ignored tests that nobody ever runs, and are therefore a waste of the time and energy it took to write them.
My advice is for your app to do, or do not, there is no try.
A:
From unittest documentation which you link:
Instead of unittest.main(), there are
other ways to run the tests with a
finer level of control, less terse
output, and no requirement to be run
from the command line. For example,
the last two lines may be replaced
with:
suite = unittest.TestLoader().loadTestsFromTestCase(TestSequenceFunctions)
unittest.TextTestRunner(verbosity=2).run(suite)
In your case, you can create separate TestSuite instances for the criticial and non-critical tests. You could control which suite is passed to the test runner with a command line argument. Test suites can also contain other test suites so you can create big hierarchies if you want.
A:
Python 2.7 (and 3.1) added support for skipping some test methods or test cases, as well as marking some tests as expected failure.
http://docs.python.org/library/unittest.html#skipping-tests-and-expected-failures
Tests marked as expected failure won't be counted as failure on a TestResult.
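A quick sketch of those decorators in action (class and method names here are mine, just for illustration):

```python
import unittest

class NonCriticalTests(unittest.TestCase):
    @unittest.expectedFailure
    def test_bonus_feature(self):
        # Non-critical: this failure is recorded as an "expected
        # failure" and does not make the run unsuccessful.
        self.assertEqual(1 + 1, 3)

    @unittest.skip("demonstrating an unconditional skip")
    def test_not_relevant_here(self):
        self.fail("never executed")

    def test_critical_feature(self):
        # Critical behaviour still has to pass normally.
        self.assertEqual(1 + 1, 2)

suite = unittest.TestLoader().loadTestsFromTestCase(NonCriticalTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True: the expected failure doesn't count
```

If the expected-failure test ever starts passing, the run reports it as an "unexpected success", which is a useful signal that the bonus feature now works.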
A:
There are some test systems that allow warnings rather than failures, but unittest is not one of them (I don't know which ones do, offhand) unless you want to extend it (which is possible).
You can make the tests so that they log warnings rather than fail.
Another way to handle this is to separate out the tests and only run them to get the pass/fail reports and not have any build dependencies (this depends on your build setup).
A:
Take a look at Nose : http://somethingaboutorange.com/mrl/projects/nose/0.11.1/
There are plenty of command line options for selecting tests to run, and you can keep your existing unittest tests.
A:
Another possibility is to create a "B" branch (you ARE using some sort of version control, right?) and have your unit tests for "B" in there. That way, you keep your release version's unit tests clean (Look, all dots!), but still have tests for B. If you're using a modern version control system like git or mercurial (I'm partial to mercurial), branching/cloning and merging are trivial operations, so that's what I'd recommend.
However, I think you're using tests for something they're not meant to do. The real question is "How important to you is it that 'B' works?" Because your test suite should only have tests in it that you care whether they pass or fail. Tests that, if they fail, it means the code is broken. That's why I suggested only testing "B" in the "B" branch, since that would be the branch where you are developing the "B" feature.
You could test using logger or print commands, if you like. But if you don't care enough that it's broken to have it flagged in your unit tests, I'd seriously question whether you care enough to test it at all. Besides, that adds needless complexity (extra variables to set debug level, multiple testing vectors that are completely independent of each other yet operate within the same space, causing potential collisions and errors, etc, etc). Unless you're developing a "Hello, World!" app, I suspect your problem set is complicated enough without adding additional, unnecessary complications.
| Non-critical unittest failures | I'm using Python's built-in unittest module and I want to write a few tests that are not critical.
I mean, if my program passes such tests, that's great! However, if it doesn't pass, it's not really a problem, the program will still work.
For example, my program is designed to work with a custom type "A". If it fails to work with "A", then it's broken. However, for convenience, most of it should also work with another type "B", but that's not mandatory. If it fails to work with "B", then it's not broken (because it still works with "A", which is its main purpose). Failing to work with "B" is not critical, I will just miss a "bonus feature" I could have.
Another (hypothetical) example is when writing an OCR. The algorithm should recognize most images from the tests, but it's okay if some of them fails. (and no, I'm not writing an OCR)
Is there any way to write non-critical tests in unittest (or other testing framework)?
| [
"As a practical matter, I'd probably use print statements to indicate failure in that case. A more correct solution is to use warnings:\nhttp://docs.python.org/library/warnings.html\nYou could, however, use the logging facility to generate a more detailed record of your test results (i.e. set your \"B\" class failu... | [
8,
4,
4,
3,
3,
3,
1,
0,
0
] | [
"You could write your tests so that they count the success rate. \nWith OCR you could throw 1000 images at the code and require that 95% are successful. \nIf your program must work with type A then if this fails the test fails. If it's not required to work with B, what is the value of doing such a test?\n"
] | [
-1
] | [
"python",
"python_unittest"
] | stackoverflow_0001406552_python_python_unittest.txt |
Q:
What's more resource intensive? PHP or Python?
My current web host allows for up to 25 processes running at once. From what I can figure, Python scripts take up a spot in processes, but PHP doesn't?
I get a 500 error if more than 25 processes are running at once (unlikely, but still a hassle), so I was wondering if it would be easier on the server if I were to port my site over to PHP?
Thanks!
A:
It depends on how you have PHP/Python set up. If you have, say, Apache loading PHP via mod_php, then it doesn't actually spawn a new process. Likewise, if you were using, say, Tornado to handle web requests, then the webserver itself is already running the Python process, and thus there's no additional Python processes required.
Basically... don't change languages just to alter the number of processes you have running. Instead, figure out what methods your current language has to reduce the process count.
A:
You are using HostGator. Switch hosts. Their shared server offerings should be used by very low traffic, brochure sites as they cram 100's of vhosts onto each server.
If you can't switch, ensure you're setup to use mod_php (not suPHP or cgi) or Python equivalent. Otherwise, new processes will be spawned on each request and you'll be serving up blank pages in no time.
| What's more resource intensive? PHP or Python? | My current web host allows for up to 25 processes running at once. From what I can figure, Python scripts take up a spot in processes, but PHP doesn't?
I get a 500 error if more than 25 processes are running at once (unlikely, but still a hassle), so I was wondering if it would be easier on the server if I were to port my site over to PHP?
Thanks!
| [
"It depends on how you have PHP/Python set up. If you have, say, Apache loading PHP via mod_php, then it doesn't actually spawn a new process. Likewise, if you were using, say, Tornado to handle web requests, then the webserver itself is already running the Python process, and thus there's no additional Python proc... | [
2,
2
] | [] | [] | [
"php",
"python",
"resources"
] | stackoverflow_0003999679_php_python_resources.txt |
Q:
python style: inline function that needs no inlining?
I'm writing gtk code. I often have short callbacks that don't need to be closures, as they are passed all the parameters they need. For example, I have this in a loop when creating some gtk.TreeViewColumns:
def widthChanged(MAINCOL, SPEC, SUBCOL, expandable):
if expandable: return
w = MAINCOL.get_width()
SUBCOL.set_fixed_width(w)
cl.connect("notify::width", widthChanged, pnlcl, expand)
This is probably inefficient, since the function is being created on every iteration of the loop (side-question: is it actually, or is it optimized?). However, I feel like if I moved all these one-liners to the top level, the code would be more confusing. Any opinions?
A:
Go with whatever style is most readable. Don't worry about speed unless your code profiling tools have told you that the area is a hotspot.
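On the side question: yes, a def inside a loop builds a fresh function object on every pass, although cheaply - the bytecode is compiled only once. A quick illustrative check (names here are my own):

```python
def build_callbacks():
    funcs = []
    for _ in range(3):
        def width_changed(col):      # re-created on each iteration
            return col
        funcs.append(width_changed)
    return funcs

callbacks = build_callbacks()
# Three distinct function objects...
print(len({id(f) for f in callbacks}))           # 3
# ...all sharing one compiled code object:
print(len({id(f.__code__) for f in callbacks}))  # 1
```

So the per-iteration cost is just allocating a small function wrapper, not recompiling the body.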
| python style: inline function that needs no inlining? | I'm writing gtk code. I often have short callbacks that don't need to be closures, as they are passed all the parameters they need. For example, I have this in a loop when creating some gtk.TreeViewColumns:
def widthChanged(MAINCOL, SPEC, SUBCOL, expandable):
if expandable: return
w = MAINCOL.get_width()
SUBCOL.set_fixed_width(w)
cl.connect("notify::width", widthChanged, pnlcl, expand)
This is probably inefficient, since the function is being created on every iteration of the loop (side-question: is it actually, or is it optimized?). However, I feel like if I moved all these one-liners to the top level, the code would be more confusing. Any opinions?
| [
"Go with whatever style is most readable. Don't worry about speed unless your code profiling tools have told you that the area is a hotspot.\n"
] | [
4
] | [] | [] | [
"closures",
"coding_style",
"performance",
"python"
] | stackoverflow_0003999989_closures_coding_style_performance_python.txt |
Q:
Profile python program that forks itself as a daemon
Is it possible to run cProfile on a multi-threaded python program that forks itself into a daemon process? I know you can make it work with multiple threads, but I haven't seen anything on profiling a daemon.
A:
Well you can always profile it for a single process or single thread & optimize. After that, make it multi-threaded. Am I missing something here?
| Profile python program that forks itself as a daemon | Is it possible to run cProfile on a multi-threaded python program that forks itself into a daemon process? I know you can make it work with multiple threads, but I haven't seen anything on profiling a daemon.
| [
"Well you can always profile it for a single process or single thread & optimize. After that, make it multi-threaded. Am I missing something here?\n"
] | [
0
] | [] | [] | [
"cprofile",
"daemon",
"profiling",
"python"
] | stackoverflow_0003999938_cprofile_daemon_profiling_python.txt |
Q:
Suitable kind of database to track a high volume of changes
I am trying to implement a python script which writes and reads to a database to track changes within a 3d game (Minecraft). These changes are made by various clients and can be represented by player name, coordinates (x, y, z), and a description. I am storing a high volume of changes and would like to know what would be an easy and preferably fast way to store and retrieve them. What kinds of databases would be suited to this job?
A:
Any kind. A NoSQL option like MongoDB might be especially interesting.
A:
PostgreSQL has a cube module that supports simple storage, indexing and spatial operations on 3D points and cubes.
| Suitable kind of database to track a high volume of changes | I am trying to implement a python script which writes and reads to a database to track changes within a 3d game (Minecraft). These changes are made by various clients and can be represented by player name, coordinates (x, y, z), and a description. I am storing a high volume of changes and would like to know what would be an easy and preferably fast way to store and retrieve them. What kinds of databases would be suited to this job?
| [
"Any kind. A NoSQL option like MongoDB might be especially interesting.\n",
"PostgreSQL has a cube module that supports simple storage, indexing and spatial operations on 3D points and cubes.\n"
] | [
0,
0
] | [] | [] | [
"change_tracking",
"database",
"python"
] | stackoverflow_0004000072_change_tracking_database_python.txt |
Q:
Python list help
simple Python question:
Example list: A = [1,2,3,4,5]
I need to generate another list B which is a shallow copy of list A such that B is a new list containing the same elements in the same order (so that I can substitute one of B's elements w/o affecting A). How can I do this?
clarification: I want to do something like
def some_func(A)
B = {what do I do here to get a copy of A's elements?}
B[0] = some_other_func(B[0])
yet_another_func(B)
based on all your answers + the Python docs, a better way to do what I want is the following:
def some_func(A)
B = [some_other_func(A[0])] + A[1:]
yet_another_func(B)
thanks for pointing me in the right direction!
A:
For your case (a flat list where you only replace top-level elements), a shallow copy is all you need; a deep copy only matters once the elements themselves are mutable.
Slicing gives you that shallow copy. For nested data there's a deepcopy function in the copy module.
B = copy.deepcopy(A)
Optionally, B = A[:] will do. But keep deepcopy in mind for the future. More complex (nested) data types can benefit from it.
Added info about copying:
An alias (no copy at all):
b = [1,2]
a = b
b[0] = 11
print a // [1,11]
A (shallow) copy via slicing:
b = [1,2]
a = b[:]
b[0] = 11
print a // [1,2]
But, furthermore:
>>> a = [[1,2]]
>>> b = a[:]
>>> b
[[1, 2]]
>>> a
[[1, 2]]
>>> a[0][0] = 11
>>> a
[[11, 2]]
>>> b
[[11, 2]]
>>>
So, the elements themselves are shallow copies in this case.
A:
Here are 3 ways to make a copy of list A:
Use slice notation:
copy_of_A = A[:]
Use the list constructor:
copy_of_A = list(A)
Use the copy module:
from copy import copy
copy_of_A = copy(A)
As you requested, these copies are all shallow copies. To learn about the difference between shallow and deep copies, read the documentation of the copy module.
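To illustrate what that documentation describes (a toy example of mine, not from the answer), compare copy.copy and copy.deepcopy on a nested list:

```python
import copy

a = [[1, 2], [3, 4]]
shallow = copy.copy(a)      # new outer list, same inner lists
deep = copy.deepcopy(a)     # new outer list and new inner lists

a[0][0] = 99
print(shallow[0][0])  # 99 -- the inner list is shared with a
print(deep[0][0])     # 1  -- fully independent
```

For a flat list of immutables, as in the question, this distinction never surfaces and the shallow copy behaves exactly as wanted.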
A:
B=A[:] suffices:
In [22]: A=[1,2]
In [23]: B=A[:]
In [24]: B[0]=100
In [25]: A
Out[25]: [1, 2]
In [26]: B
Out[26]: [100, 2]
A[:] uses slice notation to get the slice with all the elements of A.
Since slices of Python lists always return new lists, you get a copy of A.
Note that the elements inside B are identical to the elements inside A.
If the elements are mutable, mutating them through B will affect A.
A:
Like this:
B = A[:]
A:
You can perform that copy in the following way:
B = A[:]
A:
import copy
A=[1,2,3,4,5]
B=copy.copy(A)
B[0]=9999
print B[0]
print A[0]
import copy and use copy.copy() for copying.
see this for reference.
| Python list help | simple Python question:
Example list: A = [1,2,3,4,5]
I need to generate another list B which is a shallow copy of list A such that B is a new list containing the same elements in the same order (so that I can substitute one of B's elements w/o affecting A). How can I do this?
clarification: I want to do something like
def some_func(A)
B = {what do I do here to get a copy of A's elements?}
B[0] = some_other_func(B[0])
yet_another_func(B)
based on all your answers + the Python docs, a better way to do what I want is the following:
def some_func(A)
B = [some_other_func(A[0])] + A[1:]
yet_another_func(B)
thanks for pointing me in the right direction!
| [
"That would be a deep copy, not a shallow one.\nLists copy shallow by default. That's why there's a deepcopy command in the copy module.\nB = copy.deepcopy(A)\nOptionally, B = A[:] will do. But keep deepcopy in mind for future. More complex data types can benefit from it.\n\nAdded Info about copy:\nA shallow copy:\... | [
8,
5,
3,
3,
3,
2
] | [] | [] | [
"list",
"python"
] | stackoverflow_0004000345_list_python.txt |
Q:
Most efficient way to remove duplicates from Python list while preserving order and removing the oldest element
I've seen a bunch of solutions on the site to remove duplicates while preserving the oldest element. I'm interested in the opposite: removing duplicates while preserving the newest element, for example:
list = ['1234','2345','3456','1234']
list.append('1234')
>>> ['1234','2345','3456','1234','1234']
list = unique(list)
>>> ['2345','3456','1234']
How would something like this work?
Thanks.
A:
Requires the items (or keys) to be hashable, works in-place on list-likes:
def inplace_unique_latest(L, key=None):
if key is None:
def key(x):
return x
seen = set()
n = iter(xrange(len(L) - 1, -2, -1))
for x in xrange(len(L) - 1, -1, -1):
item = L[x]
k = key(item)
if k not in seen:
seen.add(k)
L[next(n)] = item
L[:next(n) + 1] = []
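For comparison, a non-in-place sketch of the same idea (my own illustration, not part of the answer above): walk the list backwards so the newest occurrence of each value is the one kept, then restore the original order:

```python
def unique_keep_newest(items):
    # Scan from the end: the first time we see a value is its
    # newest occurrence, so keep that one and drop earlier ones.
    seen = set()
    kept = []
    for item in reversed(items):
        if item not in seen:
            seen.add(item)
            kept.append(item)
    kept.reverse()  # restore original relative order
    return kept

print(unique_keep_newest(['1234', '2345', '3456', '1234', '1234']))
# ['2345', '3456', '1234']
```

This trades the in-place behaviour for simplicity; both versions are O(n) assuming hashable items.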
| Most efficient way to remove duplicates from Python list while preserving order and removing the oldest element | I've seen a bunch of solutions on the site to remove duplicates while preserving the oldest element. I'm interested in the opposite: removing duplicates while preserving the newest element, for example:
list = ['1234','2345','3456','1234']
list.append('1234')
>>> ['1234','2345','3456','1234','1234']
list = unique(list)
>>> ['2345','3456','1234']
How would something like this work?
Thanks.
| [
"Requires the items (or keys) to be hashable, works in-place on list-likes:\ndef inplace_unique_latest(L, key=None):\n if key is None:\n def key(x):\n return x\n seen = set()\n n = iter(xrange(len(L) - 1, -2, -1))\n for x in xrange(len(L) - 1, -1, -1):\n item = L[x]\n k = key(item)\n if k not i... | [
3
] | [] | [] | [
"list",
"python"
] | stackoverflow_0004000621_list_python.txt |
Q:
Linux: Pipe into Python (ncurses) script, stdin and termios
Apparently this is almost a duplicate of "Bad pipe filedescriptor when reading from stdin in python - Stack Overflow"; however, I believe this case is slightly more complicated (and it is not Windows specific, as the conclusion of that thread was).
I'm currently trying to experiment with a simple script in Python: I'd like to supply input to the script - either through command line arguments; or by 'pipe'-ing a string to this script - and have the script show this input string using a curses terminal interface.
The full script, here called testcurses.py, is given below. The problem is that whenever I try the actual piping, that seems to mess up stdin, and the curses window never shows. Here is a terminal output:
## CASE 1: THROUGH COMMAND LINE ARGUMENT (arg being stdin):
##
$ ./testcurses.py -
['-'] 1
stdout/stdin (obj): <open file '<stdout>', mode 'w' at 0xb77dc078> <open file '<stdin>', mode 'r' at 0xb77dc020>
stdout/stdin (fn): 1 0
env(TERM): xterm xterm
stdin_termios_attr [27906, 5, 1215, 35387, 15, 15, ['\x03', ... '\x00']]
stdout_termios_attr [27906, 5, 1215, 35387, 15, 15, ['\x03', ... '\x00']]
opening -
obj <open file '<stdin>', mode 'r' at 0xb77dc020>
TYPING blabla HERE
wr TYPING blabla HERE
at end
before curses TYPING blabla HERE
#
# AT THIS POINT:
# in this case, curses window is shown, with the text 'TYPING blabla HERE'
# ################
## CASE 2: THROUGH PIPE
##
## NOTE I get the same output, even if I try syntax as in SO1057638, like:
## python -c "print 'TYPING blabla HERE'" | python testcurses.py -
##
$ echo "TYPING blabla HERE" | ./testcurses.py -
['-'] 1
stdout/stdin (obj): <open file '<stdout>', mode 'w' at 0xb774a078> <open file '<stdin>', mode 'r' at 0xb774a020>
stdout/stdin (fn): 1 0
env(TERM): xterm xterm
stdin_termios_attr <class 'termios.error'>::(22, 'Invalid argument')
stdout_termios_attr [27906, 5, 1215, 35387, 15, 15, ['\x03', '\x1c', '\x7f', '\x15', '\x04', '\x00', '\x01', '\xff', '\x11', '\x13', '\x1a', '\xff', '\x12', '\x0f', '\x17', '\x16', '\xff', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00']]
opening -
obj <open file '<stdin>', mode 'r' at 0xb774a020>
wr TYPING blabla HERE
at end
before curses TYPING blabla HERE
#
# AT THIS POINT:
# script simply exits, nothing is shown
# ################
As far as I can see, the issue is: - whenever we pipe strings into the Python script, the Python script loses the reference to the terminal as stdin, and notices that the replaced stdin is not a termios structure anymore - and since stdin is no longer a terminal, curses.initscr() exits immediately without rendering anything.
So, my question in brief: can I somehow arrange for the syntax echo "blabla" | ./testcurses.py - to end up showing the piped string in curses? More specifically: is it possible to retrieve a reference to the calling terminal's stdin from a Python script, even if this script is being "piped" to?
Thanks in advance for any pointers,
Cheers!
PS: the testcurses.py script:
#!/usr/bin/env python
# http://www.tuxradar.com/content/code-project-build-ncurses-ui-python
# http://diveintopython.net/scripts_and_streams/stdin_stdout_stderr.html
# http://bytes.com/topic/python/answers/42283-curses-disable-readline-replace-stdin
#
# NOTE: press 'q' to exit curses - Ctrl-C will screw up yer terminal
# ./testcurses.py "blabla" # works fine (curseswin shows)
# ./testcurses.py - # works fine, (type, enter, curseswins shows):
# echo "blabla" | ./testcurses.py "sdsd" # fails to raise curses window
#
# NOTE: when without pipe: termios.tcgetattr(sys.__stdin__.fileno()): [27906, 5, 1215, 35387, 15, 15, ['\x03',
# NOTE: when with pipe | : termios.tcgetattr(sys.__stdin__.fileno()): termios.error: (22, 'Invalid argument')
import curses
import sys
import os
import atexit
import termios
def openAnything(source):
"""URI, filename, or string --> stream
http://diveintopython.net/xml_processing/index.html#kgp.divein
This function lets you define parsers that take any input source
(URL, pathname to local or network file, or actual data as a string)
and deal with it in a uniform manner. Returned object is guaranteed
to have all the basic stdio read methods (read, readline, readlines).
Just .close() the object when you're done with it.
"""
if hasattr(source, "read"):
return source
if source == '-':
import sys
return sys.stdin
# try to open with urllib (if source is http, ftp, or file URL)
import urllib
try:
return urllib.urlopen(source)
except (IOError, OSError):
pass
# try to open with native open function (if source is pathname)
try:
return open(source)
except (IOError, OSError):
pass
# treat source as string
import StringIO
return StringIO.StringIO(str(source))
def main(argv):
print argv, len(argv)
print "stdout/stdin (obj):", sys.__stdout__, sys.__stdin__
print "stdout/stdin (fn):", sys.__stdout__.fileno(), sys.__stdin__.fileno()
print "env(TERM):", os.environ.get('TERM'), os.environ.get("TERM", "unknown")
stdin_term_attr = 0
stdout_term_attr = 0
try:
stdin_term_attr = termios.tcgetattr(sys.__stdin__.fileno())
except:
stdin_term_attr = "%s::%s" % (sys.exc_info()[0], sys.exc_info()[1])
try:
stdout_term_attr = termios.tcgetattr(sys.__stdout__.fileno())
except:
stdout_term_attr = `sys.exc_info()[0]` + "::" + `sys.exc_info()[1]`
print "stdin_termios_attr", stdin_term_attr
print "stdout_termios_attr", stdout_term_attr
fname = ""
if len(argv):
fname = argv[0]
writetxt = "Python curses in action!"
if fname != "":
print "opening", fname
fobj = openAnything(fname)
print "obj", fobj
writetxt = fobj.readline(100) # max 100 chars read
print "wr", writetxt
fobj.close()
print "at end"
sys.stderr.write("before ")
print "curses", writetxt
try:
myscreen = curses.initscr()
#~ atexit.register(curses.endwin)
except:
print "Unexpected error:", sys.exc_info()[0]
sys.stderr.write("after initscr") # this won't show, even if curseswin runs fine
myscreen.border(0)
myscreen.addstr(12, 25, writetxt)
myscreen.refresh()
myscreen.getch()
#~ curses.endwin()
atexit.register(curses.endwin)
sys.stderr.write("after end") # this won't show, even if curseswin runs fine
# run the main function - with arguments passed to script:
if __name__ == "__main__":
main(sys.argv[1:])
sys.stderr.write("after main1") # these won't show either,
sys.stderr.write("after main2") # (.. even if curseswin runs fine ..)
A:
The problem is that whenever I try the actual piping, that seems to mess up stdin, and the curses window never shows.
[...snip...]
As far as I can see, the issue is: - whenever we pipe strings into the Python script, the Python script loses the reference to the terminal as stdin, and notices that the replaced stdin is not a termios structure anymore - and since stdin is no longer a terminal, curses.initscr() exits immediately without rendering anything.
Actually, the curses window does show, but since there is no more input on your brave new stdin, myscreen.getch() returns immediately. So it has nothing to do with curses testing whether stdin is a terminal.
So if you want to use myscreen.getch() and other curses input functions, you'll have to reopen your terminal. On Linux and *nix systems there is usually a device called /dev/tty that refers to the current terminal. So you can do something like:
f=open("/dev/tty")
os.dup2(f.fileno(), 0)
before your call to myscreen.getch().
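The fd juggling this answer describes can be sketched end-to-end. Since /dev/tty needs a real controlling terminal, the stand-in below uses a pipe for the "piped stdin" and a plain file in place of /dev/tty — the os.dup2() calls are the same ones you would use for real (file names here are made up for the demo):

```python
import os
import tempfile

# 1. Simulate `echo "piped line" | script`: a pipe dup2()'d onto fd 0.
r, w = os.pipe()
os.write(w, b"piped line\n")
os.close(w)
os.dup2(r, 0)
os.close(r)
piped = os.read(0, 100)           # the script consumes its piped input

# 2. "Reopen the terminal": with a real tty this would be open("/dev/tty").
fake = os.path.join(tempfile.mkdtemp(), "fake_tty.txt")
with open(fake, "w") as f:
    f.write("keyboard input\n")
tty = open(fake)
os.dup2(tty.fileno(), 0)          # fd 0 now points at the (fake) terminal
fresh = os.read(0, 100)           # curses' getch() would read from here

print(piped.decode().strip())
print(fresh.decode().strip())
```

After the second dup2(), anything that reads file descriptor 0 — including curses input functions — sees the reopened terminal rather than the exhausted pipe.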
A:
This can't be done without getting the parent process involved. Fortunately, there's a way to get bash involved using I/O redirection:
$ (echo "foo" | ./pipe.py) 3<&0
That will pipe foo to pipe.py in a subshell with stdin duplicated into file descriptor 3. Now all we need to do is use that extra help from our parent process in the python script (since we'll inherit fd 3):
#!/usr/bin/env python
import sys, os
import curses
output = sys.stdin.readline(100)
# We're finished with stdin. Duplicate inherited fd 3,
# which contains a duplicate of the parent process' stdin,
# into our stdin, at the OS level (assigning os.fdopen(3)
# to sys.stdin or sys.__stdin__ does not work).
os.dup2(3, 0)
# Now curses can initialize.
screen = curses.initscr()
screen.border(0)
screen.addstr(12, 25, output)
screen.refresh()
screen.getch()
curses.endwin()
Finally, you can work around the ugly syntax on the command line by setting up the extra file descriptor once in your interactive shell:
$ exec 3<&0 # duplicate the terminal's stdin into fd 3 (inherited by later pipelines)
$ echo "foo" | ./pipe.py # works
$ echo "bar" | ./pipe.py # still works
That solves your problem, if you have bash.
| Linux: Pipe into Python (ncurses) script, stdin and termios | Apparently this is almost a duplicate of "Bad pipe filedescriptor when reading from stdin in python - Stack Overflow"; however, I believe this case is slightly more complicated (and it is not Windows specific, as the conclusion of that thread was).
I'm currently trying to experiment with a simple script in Python: I'd like to supply input to the script - either through command line arguments; or by 'pipe'-ing a string to this script - and have the script show this input string using a curses terminal interface.
The full script, here called testcurses.py, is given below. The problem is that whenever I try the actual piping, that seems to mess up stdin, and the curses window never shows. Here is a terminal output:
## CASE 1: THROUGH COMMAND LINE ARGUMENT (arg being stdin):
##
$ ./testcurses.py -
['-'] 1
stdout/stdin (obj): <open file '<stdout>', mode 'w' at 0xb77dc078> <open file '<stdin>', mode 'r' at 0xb77dc020>
stdout/stdin (fn): 1 0
env(TERM): xterm xterm
stdin_termios_attr [27906, 5, 1215, 35387, 15, 15, ['\x03', ... '\x00']]
stdout_termios_attr [27906, 5, 1215, 35387, 15, 15, ['\x03', ... '\x00']]
opening -
obj <open file '<stdin>', mode 'r' at 0xb77dc020>
TYPING blabla HERE
wr TYPING blabla HERE
at end
before curses TYPING blabla HERE
#
# AT THIS POINT:
# in this case, curses window is shown, with the text 'TYPING blabla HERE'
# ################
## CASE 2: THROUGH PIPE
##
## NOTE I get the same output, even if I try syntax as in SO1057638, like:
## python -c "print 'TYPING blabla HERE'" | python testcurses.py -
##
$ echo "TYPING blabla HERE" | ./testcurses.py -
['-'] 1
stdout/stdin (obj): <open file '<stdout>', mode 'w' at 0xb774a078> <open file '<stdin>', mode 'r' at 0xb774a020>
stdout/stdin (fn): 1 0
env(TERM): xterm xterm
stdin_termios_attr <class 'termios.error'>::(22, 'Invalid argument')
stdout_termios_attr [27906, 5, 1215, 35387, 15, 15, ['\x03', '\x1c', '\x7f', '\x15', '\x04', '\x00', '\x01', '\xff', '\x11', '\x13', '\x1a', '\xff', '\x12', '\x0f', '\x17', '\x16', '\xff', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00']]
opening -
obj <open file '<stdin>', mode 'r' at 0xb774a020>
wr TYPING blabla HERE
at end
before curses TYPING blabla HERE
#
# AT THIS POINT:
# script simply exits, nothing is shown
# ################
As far as I can see, the issue is: - whenever we pipe strings into the Python script, the Python script loses the reference to the terminal as stdin, and notices that the replaced stdin is not a termios structure anymore - and since stdin is no longer a terminal, curses.initscr() exits immediately without rendering anything.
So, my question in brief: can I somehow arrange for the syntax echo "blabla" | ./testcurses.py - to end up showing the piped string in curses? More specifically: is it possible to retrieve a reference to the calling terminal's stdin from a Python script, even if this script is being "piped" to?
Thanks in advance for any pointers,
Cheers!
PS: the testcurses.py script:
#!/usr/bin/env python
# http://www.tuxradar.com/content/code-project-build-ncurses-ui-python
# http://diveintopython.net/scripts_and_streams/stdin_stdout_stderr.html
# http://bytes.com/topic/python/answers/42283-curses-disable-readline-replace-stdin
#
# NOTE: press 'q' to exit curses - Ctrl-C will screw up yer terminal
# ./testcurses.py "blabla" # works fine (curseswin shows)
# ./testcurses.py - # works fine, (type, enter, curseswins shows):
# echo "blabla" | ./testcurses.py "sdsd" # fails to raise curses window
#
# NOTE: when without pipe: termios.tcgetattr(sys.__stdin__.fileno()): [27906, 5, 1215, 35387, 15, 15, ['\x03',
# NOTE: when with pipe | : termios.tcgetattr(sys.__stdin__.fileno()): termios.error: (22, 'Invalid argument')
import curses
import sys
import os
import atexit
import termios
def openAnything(source):
"""URI, filename, or string --> stream
http://diveintopython.net/xml_processing/index.html#kgp.divein
This function lets you define parsers that take any input source
(URL, pathname to local or network file, or actual data as a string)
and deal with it in a uniform manner. Returned object is guaranteed
to have all the basic stdio read methods (read, readline, readlines).
Just .close() the object when you're done with it.
"""
if hasattr(source, "read"):
return source
if source == '-':
import sys
return sys.stdin
# try to open with urllib (if source is http, ftp, or file URL)
import urllib
try:
return urllib.urlopen(source)
except (IOError, OSError):
pass
# try to open with native open function (if source is pathname)
try:
return open(source)
except (IOError, OSError):
pass
# treat source as string
import StringIO
return StringIO.StringIO(str(source))
def main(argv):
print argv, len(argv)
print "stdout/stdin (obj):", sys.__stdout__, sys.__stdin__
print "stdout/stdin (fn):", sys.__stdout__.fileno(), sys.__stdin__.fileno()
print "env(TERM):", os.environ.get('TERM'), os.environ.get("TERM", "unknown")
stdin_term_attr = 0
stdout_term_attr = 0
try:
stdin_term_attr = termios.tcgetattr(sys.__stdin__.fileno())
except:
stdin_term_attr = "%s::%s" % (sys.exc_info()[0], sys.exc_info()[1])
try:
stdout_term_attr = termios.tcgetattr(sys.__stdout__.fileno())
except:
stdout_term_attr = `sys.exc_info()[0]` + "::" + `sys.exc_info()[1]`
print "stdin_termios_attr", stdin_term_attr
print "stdout_termios_attr", stdout_term_attr
fname = ""
if len(argv):
fname = argv[0]
writetxt = "Python curses in action!"
if fname != "":
print "opening", fname
fobj = openAnything(fname)
print "obj", fobj
writetxt = fobj.readline(100) # max 100 chars read
print "wr", writetxt
fobj.close()
print "at end"
sys.stderr.write("before ")
print "curses", writetxt
try:
myscreen = curses.initscr()
#~ atexit.register(curses.endwin)
except:
print "Unexpected error:", sys.exc_info()[0]
sys.stderr.write("after initscr") # this won't show, even if curseswin runs fine
myscreen.border(0)
myscreen.addstr(12, 25, writetxt)
myscreen.refresh()
myscreen.getch()
#~ curses.endwin()
atexit.register(curses.endwin)
sys.stderr.write("after end") # this won't show, even if curseswin runs fine
# run the main function - with arguments passed to script:
if __name__ == "__main__":
main(sys.argv[1:])
sys.stderr.write("after main1") # these won't show either,
sys.stderr.write("after main2") # (.. even if curseswin runs fine ..)
| [
" The problem is that whenever I try the actual piping, that seems to mess up stdin, and the curses window never shows. \n\n[...snip...]\n\n\nAs far as I can see, the issue is: - whenever we pipe strings into the Python script, the Python script loses the reference to the terminal as stdin, and notices that the rep... | [
12,
1
] | [] | [] | [
"linux",
"ncurses",
"pipe",
"python",
"termios"
] | stackoverflow_0003999114_linux_ncurses_pipe_python_termios.txt |
Q:
Sum one row of a NumPy array
I'd like to sum one particular row of a large NumPy array. I know the function array.max() will give the maximum across the whole array, and array.max(1) will give me the maximum across each of the rows as an array. However, I'd like to get the maximum in a certain row (for example, row 7, or row 29). I have a large array, so getting the maximum for all rows will give me a significant time penalty.
A:
You can easily access a row of a two-dimensional array using the indexing operator. The row itself is an array, a view of a part of the original array, and exposes all array methods, including sum() and max(). Therefore you can easily get the maximum per row like this:
x = arr[7].max() # Maximum in row 7
y = arr[29].sum() # Sum of the values in row 29
Just for completeness, you can do the same for columns:
z = arr[:, 5].sum() # Sum up all values in column 5.
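As a runnable check of the indexing above (the small array here is just a toy stand-in for the asker's large one):

```python
import numpy as np

# A small stand-in for the large array: 30 rows x 10 columns, values 0..299.
arr = np.arange(300).reshape(30, 10)

row7_max = arr[7].max()      # only row 7 is scanned, not the whole array
row29_sum = arr[29].sum()    # likewise, only row 29
col5_sum = arr[:, 5].sum()   # and only column 5

print(row7_max)    # 79   (row 7 holds 70..79)
print(row29_sum)   # 2945 (290 + ... + 299)
print(col5_sum)    # 4500
```

Because arr[7] is a view rather than a copy, indexing first and reducing second costs no extra memory.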
| Sum one row of a NumPy array | I'd like to sum one particular row of a large NumPy array. I know the function array.max() will give the maximum across the whole array, and array.max(1) will give me the maximum across each of the rows as an array. However, I'd like to get the maximum in a certain row (for example, row 7, or row 29). I have a large array, so getting the maximum for all rows will give me a significant time penalty.
| [
"You can easily access a row of a two-dimensional array using the indexing operator. The row itself is an array, a view of a part of the original array, and exposes all array methods, including sum() and max(). Therefore you can easily get the maximum per row like this:\nx = arr[7].max() # Maximum in row 7\ny =... | [
23
] | [] | [] | [
"arrays",
"numpy",
"performance",
"python"
] | stackoverflow_0004001067_arrays_numpy_performance_python.txt |
Q:
How to get a gtkDialog's default response to trigger of the space bar as well
I have a messageDialog set up so that its default response is gtk.RESPONSE_OK so the okay button is clicked when the user hits enter even if the okay button does not have focus. I would like to also have the space bar trigger the default_response. What is the best way to do this?
This is with python 2.4 in a linux environment. Unfortunately I don't have permission to upgrade python.
A:
Connect to the key-press-event signal on the message dialog:
def on_dialog_key_press(dialog, event):
if event.string == ' ':
dialog.response(gtk.RESPONSE_OK)
return True
return False
dialog = gtk.MessageDialog(message_format='Some message', buttons=gtk.BUTTONS_OK_CANCEL)
dialog.add_events(gtk.gdk.KEY_PRESS_MASK)
dialog.connect('key-press-event', on_dialog_key_press)
dialog.run()
Bear in mind, though, that changing users' expectations of the user interface is generally considered Not Cool.
A:
I'm a total noob at pygtk, but I could not get @ptomato's example + "hello world" boilerplate to work unless I responded to space and return plus added a call to dialog.destroy(). Take it for what it is worth.
#!/usr/bin/env python
# example helloworld.py
import pygtk
pygtk.require('2.0')
import gtk
def md_event(dialog, event):
if event.keyval in (gtk.keysyms.Return, gtk.keysyms.space):
dialog.response(gtk.RESPONSE_OK)
dialog.destroy()
return True
elif event.keyval == gtk.keysyms.Escape:
dialog.response(gtk.RESPONSE_CANCEL)
dialog.destroy()
return True
return False
class HelloWorld:
# This is a callback function. The data arguments are ignored
# in this example. More on callbacks below.
def hello(self, widget, data=None):
print "Hello World"
# Another callback
def destroy(self, widget, data=None):
gtk.main_quit()
def create_message_dialog(self, x, y):
md = gtk.MessageDialog(buttons=gtk.BUTTONS_OK_CANCEL, message_format="wawawawaaaaa")
md.add_events(gtk.gdk.KEY_PRESS_MASK)
md.connect("key-press-event", md_event)
result = md.run()
print result
def __init__(self):
# create a new window
self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
# Here we connect the "destroy" event to a signal handler.
# This event occurs when we call gtk_widget_destroy() on the window,
# or if we return FALSE in the "delete_event" callback.
self.window.connect("destroy", self.destroy)
# Sets the border width of the window.
self.window.set_border_width(10)
self.button2 = gtk.Button("Message Dialog")
self.button2.connect("clicked", self.create_message_dialog, None)
self.window.add(self.button2)
self.button2.show()
# and the window
self.window.show()
def main(self):
# All PyGTK applications must have a gtk.main(). Control ends here
# and waits for an event to occur (like a key press or mouse event).
gtk.main()
def run_hello():
hello = HelloWorld()
hello.main()
# If the program is run directly or passed as an argument to the python
# interpreter then create a HelloWorld instance and show it
if __name__ == "__main__":
run_hello()
| How to get a gtkDialog's default response to trigger of the space bar as well | I have a messageDialog set up so that its default response is gtk.RESPONSE_OK so the okay button is clicked when the user hits enter even if the okay button does not have focus. I would like to also have the space bar trigger the default_response. What is the best way to do this?
This is with python 2.4 in a linux environment. Unfortunately I don't have permission to upgrade python.
| [
"Connect to the key-press-event signal on the message dialog:\ndef on_dialog_key_press(dialog, event):\n if event.string == ' ':\n dialog.response(gtk.RESPONSE_OK)\n return True\n return False\n\ndialog = gtk.MessageDialog(message_format='Some message', buttons=gtk.BUTTONS_OK_CANCEL)\ndialog.add... | [
2,
0
] | [] | [] | [
"dialog",
"gtk",
"key",
"key_bindings",
"python"
] | stackoverflow_0004000097_dialog_gtk_key_key_bindings_python.txt |
Q:
what's the max range for colors in this format: #AABBCC?
How can I convert colors from (N, N, N) format to #AABBCC (and #AAABBBCCC) ?
thanks
A:
#FFFFFF, so simple
each character spans 0..F, that is 0..15, so two characters span 0..(16*16 - 1) -> 0..255
To convert between formats just think about:
#AABBCC are three values AA BB CC. Every single value represents a channel (red, green, blue) that can span from 0 to 255 or from 0 to FF or from 0.0 to 1.0
if you have for example #123456 you can do
12 -> 1*16 + 2 = .. (result in range 0-255)
34 -> 3*16 + 4 = ..
56 -> 5*16 + 6 = ..
in general, a two-digit hex number composed of XY can be converted to a decimal value by multiplying X by 16 and adding Y, taking care to convert digits above 9 (A, B, C, D, E, F) to their decimal counterparts (10, 11, 12, 13, 14, 15). So for example AC would be A*16 + C = 10*16 + 12.
(To be really precise, an n-digit hex number is converted by multiplying the i-th digit from the right by 16^i and adding all of them together)
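The digit-by-digit rule above can be checked against Python's own base-16 parsing (a quick sketch, not part of the original answer):

```python
DIGITS = "0123456789ABCDEF"

def hex_pair(pair):
    """Convert a two-digit hex string the long way: X*16 + Y."""
    x, y = DIGITS.index(pair[0]), DIGITS.index(pair[1])
    return x * 16 + y

# The three channels of #123456, plus the AC example from the text.
for pair in ("12", "34", "56", "AC"):
    assert hex_pair(pair) == int(pair, 16)   # agrees with built-in parsing
    print(pair, "->", hex_pair(pair))        # 18, 52, 86, 172
```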
A:
From 00 to FF. It is hexadecimal for 0 to 255.
A:
RRGGBB RRGGBB
#000000 - #FFFFFF
Black - White
RR = 00 - FF or 0 - 255
GG = 00 - FF or 0 - 255
BB = 00 - FF or 0 - 255
A:
Those are hexadecimal representations of 8-bit numbers for the Red, Green and Blue channels. So 0 to 255 for each channel. FF (hex) is equal to 255 decimal.
A:
As others have said, 00-FF.
Here's an overview of HTML colors in hex notation:
http://www.w3schools.com/Html/html_colors.asp
You can find out how to convert from hex to decimal here:
http://en.wikipedia.org/wiki/Hexadecimal
And, for Python:
Converting hex color to RGB and vice-versa
Or search for "convert hex decimal"
A:
This is hexadecimal (base 16) notation, where each digit goes from 0 to 15 (F).
The range of 0 to FF in hexadecimal is 0 to 255 in decimal.
If you want to convert from one to another, there are plenty of sites that will do that for you - like this one.
A:
Using Python? Try this:
c = (0., 1., 0.)
rgb = '#%02X%02X%02X' % (int(c[0] * 255), int(c[1] * 255), int(c[2] * 255))
A:
The min and max values for colors in the #AABBCC format is #000000...#FFFFFF, or 0...16777215 in decimal. Each individual color component ranges from #00..#FF, which is 0..255 in decimal and requires 8-bits or 1 byte of storage. For #AAABBBCCC the range of components is #000-#FFF or 0..4095 each and they require 12-bits or 1½ bytes of storage.
Not sure what the range of values is for N in (N, N, N), but if it's 0..1 then these two functions will convert from it to either 8-bit component #AABBCC or 12-bit component #AAABBBCCC color values (without rounding). Note that the output of each function is a string with the value shown after each print statement below. ITOH8 and ITOH12 are constant lookup tables used by the corresponding function.
ITOH8 = [('%02X' % i) for i in range(0x100)]
rgbconv8 = lambda c: ''.join( ['#'] + [ITOH8[int(v*0xFF)] for v in c] )
print rgbconv8((0., 1., 0.)) #00FF00
print rgbconv8((.2, .6, .75)) #3399BF
ITOH12 = [('%03X' % i) for i in range(0x1000)]
rgbconv12 = lambda c: ''.join( ['#'] + [ITOH12[int(v*0xFFF)] for v in c] )
print rgbconv12((0., 1., 0.)) #000FFF000
print rgbconv12((.2, .6, .75)) #333999BFF
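The same conversions can be rewritten as plain functions that also run on Python 3 (a sketch; the truncating behaviour matches the lookup-table version above):

```python
def rgb_to_hex8(c):
    """(r, g, b) floats in 0..1 -> '#RRGGBB' with 8-bit channels (truncating)."""
    return '#' + ''.join('%02X' % int(v * 0xFF) for v in c)

def rgb_to_hex12(c):
    """(r, g, b) floats in 0..1 -> '#RRRGGGBBB' with 12-bit channels."""
    return '#' + ''.join('%03X' % int(v * 0xFFF) for v in c)

print(rgb_to_hex8((0., 1., 0.)))     # #00FF00
print(rgb_to_hex8((.2, .6, .75)))    # #3399BF
print(rgb_to_hex12((0., 1., 0.)))    # #000FFF000
print(rgb_to_hex12((.2, .6, .75)))   # #333999BFF
```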
| what's the max range for colors in this format: #AABBCC? | How can I convert colors from (N, N, N) format to #AABBCC (and #AAABBBCCC) ?
thanks
| [
"#FFFFFF, so simple\nevery single char has 0..F range. That is 0..15. So two chars is 0..(16*16-1) -> 0-255\nTo convert between formats just think about:\n#AABBCC are three values AA BB CC. Every single value represents a channel (red, green, blue) that can span from 0 to 255 or from 0 to FF or from 0.0 to 1.0\nif ... | [
2,
1,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"colors",
"python"
] | stackoverflow_0004000137_colors_python.txt |
Q:
can't 'import time' in python, get 'AttributeError: struct_time' How to solve?
Running python on Snow Leopard, and I can't import the 'time' module. Works in ipython. Don't have any .pythonrc files being loaded. Scripts that 'import time' using the same interpreter run fine. Have no idea how to troubleshoot this. Anyone have an idea?
[wiggles@bananas ~]$ python2.6
Python 2.6.6 (r266:84292, Sep 1 2010, 14:27:13)
[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "time.py", line 4, in <module>
t = now.strftime("%d-%m-%Y-%H-%M")
AttributeError: struct_time
>>>
[wiggles@bananas ~]$ ipython-2.6
Python 2.6.6 (r266:84292, Sep 1 2010, 14:27:13)
Type "copyright", "credits" or "license" for more information.
IPython 0.10 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object'. ?object also works, ?? prints more.
In [1]: import time
In [2]:
A:
Look for a file called time.py. It looks like Python is importing that, instead of the one from the standard library:
File "time.py", line 4, in <module>
The solution is to rename the file something other than "time.py".
By the way, you can find the path to the offending file by opening a Python REPL and typing.
In [1]: import time
In [2]: time.__file__
or
In [3]: time # This shows the path as part of the repr
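The shadowing mistake is easy to reproduce with a throwaway module name. (On modern CPython, time is compiled into the interpreter and can no longer be shadowed like this, so this demo uses the pure-Python stdlib module string instead; all file names below are made up for illustration.)

```python
import os
import subprocess
import sys
import tempfile

d = tempfile.mkdtemp()
# A local string.py that shadows the standard library module...
with open(os.path.join(d, "string.py"), "w") as f:
    f.write("shadow = True\n")
# ...and a script in the same directory that imports it.
with open(os.path.join(d, "main.py"), "w") as f:
    f.write("import string\nprint(string.__file__)\n")

# The script's own directory sits at the front of sys.path, so the local
# string.py wins over the standard library copy.
out = subprocess.check_output([sys.executable, os.path.join(d, "main.py")])
print(out.decode().strip())   # .../string.py inside the temp directory
```

Renaming the offending file (and deleting any stale .pyc left beside it) restores the standard-library import.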
| can't 'import time' in python, get 'AttributeError: struct_time' How to solve? | Running python on Snow Leopard, and I can't import the 'time' module. Works in ipython. Don't have any .pythonrc files being loaded. Scripts that 'import time' using the same interpreter run fine. Have no idea how to troubleshoot this. Anyone have an idea?
[wiggles@bananas ~]$ python2.6
Python 2.6.6 (r266:84292, Sep 1 2010, 14:27:13)
[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "time.py", line 4, in <module>
t = now.strftime("%d-%m-%Y-%H-%M")
AttributeError: struct_time
>>>
[wiggles@bananas ~]$ ipython-2.6
Python 2.6.6 (r266:84292, Sep 1 2010, 14:27:13)
Type "copyright", "credits" or "license" for more information.
IPython 0.10 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object'. ?object also works, ?? prints more.
In [1]: import time
In [2]:
| [
"Look for a file called time.py. It looks like Python is importing that, instead of the one from the standard library:\n File \"time.py\", line 4, in <module>\n\nThe solution is to rename the file something other than \"time.py\".\nBy the way, you can find the path to the offending file by opening a Python REPL an... | [
8
] | [] | [] | [
"import",
"python",
"time"
] | stackoverflow_0004001211_import_python_time.txt |