| qid (int64, 46k to 74.7M) | question (string, 54 to 37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 29 to 22k chars) | response_k (string, 26 to 13.4k chars) | __index_level_0__ (int64, 0 to 17.8k) |
|---|---|---|---|---|---|---|
48,490,272
|
I'm trying to launch Safari with Selenium in python with all my sessions logged in (e.g. gmail) so I don't have to login manually.
The easy solution would be to launch safari with the default user profile, but I can't find documentation on how to do this.
```
from selenium import webdriver
driver = webdriver.Safari()
url = 'https://www.gmail.com/'
driver.get(url)
```
Just for reference, the code below is for Chrome. What is the Safari equivalent?
```
options = webdriver.ChromeOptions()
options.add_argument("user-data-dir=/Users/alexiseggermont/Library/Application Support/Google/Chrome/Default/") # Path to your Chrome profile
driver = webdriver.Chrome(chrome_options=options)
driver.get(url)
```
|
2018/01/28
|
[
"https://Stackoverflow.com/questions/48490272",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2827060/"
] |
In the "[Creating Builds](https://dojotoolkit.org/documentation/tutorials/1.10/build/)" tutorial, it says:
>
> You might be asking yourself "if we build everything we need into a
> layer, why do we worry about the rest of the modules?" If you were to
> only keep the layer files and not have the rest of the modules
> available, you would lose that as an option to keep your application
> working without having to do a whole build again to access those
> modules.
>
>
>
That said, I agree with you, it would be nice to be able to build a minimal distribution containing only the layers - because even though the browser may only download the dojo/dojo.js layer, it's annoying to have to distribute the big 100MB directory.
However, even if the build script only copied the layer files, the layers may need various resource files which are not declared in the AMD dependency graph (e.g. images or fonts).
In my dojo projects, I've usually resorted to manually specifying and copying the required files to a "minimal build" directory at the end of my build script. As long as it's a small application, this is usually manageable. It is certainly a bit annoying and error-prone though, so if anyone knows of a better way to do what you're asking, I'd love to hear about it.
```bash
node ../../dojo/dojo.js load=build --profile "$PROFILE" --releaseDir "$DISTDIR" $@
# ...
FILES=(
index.html
myapp/resources/myapp.css
myapp/resources/logo.svg
dojo/dojo.js
dojo/resources/blank.gif
dijit/themes/claro/form/images/buttonArrows.png
)
for file in ${FILES[*]}; do
mkdir -p $MINIMAL_DIST_DIR/`dirname $file`
cp $DISTDIR/myapp/$file $MINIMAL_DIST_DIR/$file
done
```
(The file myapp.css `@imports` dojo.css etc, so all the CSS is built into that single file.)
|
I don't know if this is useful, but I have a situation where I am creating a layer which loads from a completely different location to the core dojo app.
This means that I actually don't need `dojo`, `dijit` and `dojox` to be in my build. I was having the issue of all files being bundled into my location whether I wanted them or not.
What I have opted for is to change the destination of these files to a folder I can just ignore which is outside of my application folder by using `destLocation`.
So I have this in my build script
```
packages: [
    {
        name: "dojo",
        location: "./dtk/dojo",
        destLocation: '../directory/outside/of/codebase/dojo'
    },
    {
        name: "dijit",
        location: "./dtk/dijit",
        destLocation: '../directory/outside/of/codebase/dijit'
    },
    {
        name: "dojox",
        location: "./dtk/dojox",
        destLocation: '../directory/outside/of/codebase/dojox'
    },
    {
        name: "applayer",
        location: "./location/of/my/release/folder/",
        destLocation: './'
    }
],
```
It's not perfect, but it at least keeps the packages I don't need out of my directory. I think the idea of bundling all files into the directory is for the case where you run a require outside of your core layers.
If you did a hotfix in the client, for example, and you require a dojo module which isn't already in `require.cache`, then this request would fail. If you know that won't happen then you don't need the packages (hopefully).
| 14,280
|
57,978,333
|
While training a job on a SageMaker instance using H2O AutoML, the message "This H2OFrame is empty" came up after running the code. What should I do to fix the problem?
```
/opt/ml/input/config/hyperparameters.json
All Parameters:
{'nfolds': '5', 'training': "{'classification': 'true', 'target': 'y'}", 'max_runtime_secs': '3600'}
/opt/ml/input/config/resourceconfig.json
All Resources:
{'current_host': 'algo-1', 'hosts': ['algo-1'], 'network_interface_name': 'eth0'}
Waiting until DNS resolves: 1
10.0.182.83
Starting up H2O-3
Creating Connection to H2O-3
Attempt 0: H2O-3 not running yet...
Connecting to H2O server at http://127.0.0.1:54321... successful.
-------------------------- ----------------------------------------
-------------------------- ----------------------------------------
Beginning Model Training
Parse progress: |█████████████████████████████████████████████████████████| 100%
Classification - If you want to do a regression instead, set "classification":"false" in "training" params, inhyperparamters.json
Converting specified columns to categorical values:
[]
AutoML progress: |████████████████████████████████████████████████████████| 100%
This H2OFrame is empty.
Exception during training: Argument `model` should be a ModelBase, got NoneType None
Traceback (most recent call last):
File "/opt/program/train", line 138, in _train_model
h2o.save_model(aml.leader, path=model_path)
File "/root/.local/lib/python3.7/site-packages/h2o/h2o.py", line 969, in save_model
assert_is_type(model, ModelBase)
File "/root/.local/lib/python3.7/site-packages/h2o/utils/typechecks.py", line 457, in assert_is_type
skip_frames=skip_frames)
h2o.exceptions.H2OTypeError: Argument `model` should be a ModelBase, got NoneType None
H2O session _sid_8aba closed.
```
I'm wondering if it's a problem because of the max\_runtime\_secs, my data has around 500 rows and 250000 columns.
|
2019/09/17
|
[
"https://Stackoverflow.com/questions/57978333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8836963/"
] |
Thanks @Marcel Mendes Reis for following up on your solution in the comments. I will repost it here for others to easily find:
*I realized the issue was due to the max\_runtime. When I trained the model with more time I didn't have the problem.*
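For anyone hitting the same thing, here is a minimal sketch (using the standard h2o Python API; the file path and target column are placeholders) of giving AutoML a larger time budget so that at least one model finishes and `aml.leader` is not `None`:
```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()
train = h2o.import_file("train.csv")               # placeholder training file
aml = H2OAutoML(max_runtime_secs=7200, nfolds=5)   # more generous runtime budget
aml.train(y="y", training_frame=train)
if aml.leader is not None:                         # a leader exists once a model finished
    h2o.save_model(aml.leader, path="/opt/ml/model")
```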
|
Doing some tests, I realized that the problem was because of the max\_runtime; I believe I didn't allow the model enough time to train.
| 14,281
|
72,122,475
|
I have a custom field in the employee module in Odoo to display the age.
That field is calculated from the birthday field:
```
for record in self:
    today = datetime.date.today()
    record['x_studio_age_2'] = today.year - record['birthday'].year - ((today.month, today.day) < (record['birthday'].month, record['birthday'].day))
```
The age field works, but I get an error when I try to import a CSV:
```
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/odoo/tools/safe_eval.py", line 330, in safe_eval
return unsafe_eval(c, globals_dict, locals_dict)
File "", line 3, in <module>
AttributeError: 'bool' object has no attribute 'year'
```
So, I have to remove the code, but now I have to update the age of all employees.
Is anything wrong with the code?
|
2022/05/05
|
[
"https://Stackoverflow.com/questions/72122475",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7259851/"
] |
I think you should do something like this.
`False` means the value is not set.
```py
today = datetime.date.today()
for record in self:
    if record['birthday']:
        record['x_studio_age_2'] = today.year - record['birthday'].year - ((today.month, today.day) < (record['birthday'].month, record['birthday'].day))
    else:
        record['x_studio_age_2'] = False
```
|
You have to check the birthday first, because if it's not set it will return a false value as a boolean.
| 14,282
|
64,583,022
|
I imported a csv file with the variable "HEIGHT" which has 10 values.
```
HEIGHT
62
58
72
63
66
62
63
62
62
67
```
I want to use numpy and numpy only to count the number of times the value "62" does not occur. The answer should be 6.
```
import numpy
import csv
with open('measurements.csv', 'r') as f:
    rows = f.readline()
    rows = f.split(',')
    rows = numpy.array([rows[2:4]])
print(rows)
```
I'm a beginner Python learner practicing numpy, so I am not quite sure how to approach this problem.
|
2020/10/28
|
[
"https://Stackoverflow.com/questions/64583022",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14458035/"
] |
Using numpy you can do:
```
data = np.array([62, 58, 72, 63, 66, 62, 63, 62, 62, 67])
(data != 62).sum()
```
That is, `data != 62` will make a numpy Boolean array, and `sum` will add these up, with `True` as `1`, giving the total count.
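Equivalently, `numpy.count_nonzero` can be used for the count; a quick sketch:
```python
import numpy as np

data = np.array([62, 58, 72, 63, 66, 62, 63, 62, 62, 67])
print(np.count_nonzero(data != 62))  # 6
```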
|
If you want to use *numpy and numpy only*,
Load the file using numpy:
```
dataset = np.loadtxt('measurements.csv', delimiter=',')
```
Seems like the height variable is in the 3rd column (index *2*). When you use `loadtxt`, you'll get a 2D array that looks like a table. You need the column with index 2, and you can then use @tom10's solution:
```
(dataset[:, 2] != 62).sum()
```
And you have a complete numpy workflow.
**Note:** Read docs to understand functions used better.
* [numpy.loadtxt](https://numpy.org/doc/stable/reference/generated/numpy.loadtxt.html)
* [Comparisons in numpy arrays](https://www.python-course.eu/numpy_masking.php) (Tutorial - opinionated!)
* [Official docs on indexing](https://numpy.org/doc/stable/user/basics.indexing.html)
| 14,283
|
2,896,179
|
Can anyone help me out in fitting a gamma distribution in Python? Well, I've got some data: X and Y coordinates, and I want to find the gamma parameters that fit this distribution... In the [Scipy doc](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma), it turns out that a fit method actually exists, but I don't know how to use it. First, in which format must the argument "data" be, and how can I provide the second argument (the parameters), since that's what I'm looking for?
|
2010/05/24
|
[
"https://Stackoverflow.com/questions/2896179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348838/"
] |
Generate some gamma data:
```
import scipy.stats as stats
alpha = 5
loc = 100.5
beta = 22
data = stats.gamma.rvs(alpha, loc=loc, scale=beta, size=10000)
print(data)
# [ 202.36035683 297.23906376 249.53831795 ..., 271.85204096 180.75026301
# 364.60240242]
```
Here we fit the data to the gamma distribution:
```
fit_alpha, fit_loc, fit_beta=stats.gamma.fit(data)
print(fit_alpha, fit_loc, fit_beta)
# (5.0833692504230008, 100.08697963283467, 21.739518937816108)
print(alpha, loc, beta)
# (5, 100.5, 22)
```
|
If you want a long example including a discussion about estimating or fixing the support of the distribution, then you can find it in <https://github.com/scipy/scipy/issues/1359> and the linked mailing list message.
Preliminary support to fix parameters, such as location, during fit has been added to the trunk version of scipy.
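As a rough illustration (assuming a SciPy version where the `floc`/`fscale` keywords are supported), the location can be held fixed during the fit:
```python
import scipy.stats as stats

data = stats.gamma.rvs(5, loc=0, scale=22, size=10000)  # synthetic sample
# Fit only the shape and scale, keeping the location fixed at 0
fit_alpha, fit_loc, fit_beta = stats.gamma.fit(data, floc=0)
print(fit_alpha, fit_loc, fit_beta)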
| 14,284
|
59,273,273
|
I am attempting to open a serial connection to a usb device using PySerial, and with the following code I am getting the following error:
```
import serial
ser = serial.Serial('/dev/tty.usbserial-EN270425')
```
```
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/serial/serialposix.py", line 265, in open
self.fd = os.open(self.portstr, os.O_RDWR | os.O_NOCTTY | os.O_NONBLOCK)
OSError: [Errno 16] Resource busy: '/dev/tty.usbserial-EN270425'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
ser = serial.Serial('/dev/tty.usbserial-EN270425')
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/serial/serialutil.py", line 240, in __init__
self.open()
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/serial/serialposix.py", line 268, in open
raise SerialException(msg.errno, "could not open port {}: {}".format(self._port, msg))
serial.serialutil.SerialException: [Errno 16] could not open port /dev/tty.usbserial-EN270425: [Errno 16] Resource busy: '/dev/tty.usbserial-EN270425'
```
I have checked to see if there was a process using the resource via
`lsof | grep "/dev/tty.usbserial-EN270425"` and got no return value.
I was able to connect to the port on a different machine, the only difference being operating system and python version. The machine that CAN connect is running Mac OS Mojave and Python 3.6, the machine that CANNOT connect is running Mac OS Catalina and Python 3.8. Does anyone have any idea on where I can move forward from here?
|
2019/12/10
|
[
"https://Stackoverflow.com/questions/59273273",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11313162/"
] |
This is an interesting problem, for which I believe there is no standard function yet. This is not a huge problem because the hash itself contains an identifier telling us which hash algorithm was used. The important thing to note here is that `PASSWORD_DEFAULT` is a **constant**. Constants do not change.
To figure out which algorithm is used when using the default constant (which was and still is bcrypt), you need to generate some dummy hash and look at the beginning of it. We can even use a nice helper function [`password_get_info()`](https://www.php.net/manual/en/function.password-get-info.php)
```
$hashInfo = password_get_info(password_hash('pass', PASSWORD_DEFAULT, [ 'cost' => 4 ] ));
echo $hashInfo['algo']; // should return either 1 or 2y
if($hashInfo['algo'] === PASSWORD_BCRYPT) {
// will be true for PHP <= 7.4
}
```
|
***Edit***
As of PHP 7.4.3 you can continue using `PASSWORD_DEFAULT === PASSWORD_BCRYPT`
*<https://3v4l.org/nN4Qi>*
---
You don't actually have to use `password_hash` twice. A better and faster way is to provide an already hashed value with `Bcrypt` and check it against `PASSWORD_DEFAULT` with
[password\_needs\_rehash](https://www.php.net/manual/en/function.password-needs-rehash.php) function to see if the default algo has changed or not.
>
> bcrypt algorithm is the default as of PHP 5.5.0
>
>
>
So for example:
```
$hash = '$2y$10$ra4VedcLU8bv3jR0AlpEau3AZevkQz4Utm7F8EqUNE0Jqx0s772NG'; // Bcrypt hash
// if it doesn't need rehash then the default algo is absolutely Bcrypt
if (! password_needs_rehash($hash, PASSWORD_DEFAULT)) {
// do some clean up
}
```
>
> **Note**: make sure that the hash value($hash) has the same cost provided in `password_needs_rehash`'s third parameter, otherwise it will consider the hash outdated and need rehash since the cost has changed.
>
>
>
| 14,294
|
48,238,171
|
I'm getting string data into my Python code. Sometimes the data comes in with an extra "and" or "or", for example
```
Tom and Mark and
```
In this case I need to remove the last "and", and the final outcome will look like
```
Tom and Mark
```
But when the data comes like this
```
Harry and John
```
then I will keep the data as it is, without removing the "and".
Can you suggest how to do that?
|
2018/01/13
|
[
"https://Stackoverflow.com/questions/48238171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9184713/"
] |
```
[2, 4] [4, -3, -4, 4]
```
Sorted: `[2, 4] [-4, -3, 4, 4]`
```
qs are negative, p is positive:
2 + 4 + 2 + 3 = sum(4, 3) + 2*2
qs are larger than p:
4 - 2 + 4 - 2 = sum(4, 4) - 2*2
qs are negative, p is positive:
4 + 4 + 4 + 3 = sum(4, 3) + 2*4
qs are equal to p:
4 - 4 + 4 - 4 = 0
qs are smaller than p (not in our example):
5 - 3 + 5 - 2 = -sum(3, 2) + 2*5
qs and p are both negative (not in our example):
p is larger:
|-5 - (-7)| + |-5 - (-6)| = 7 - 5 + 6 - 5 = sum(7, 6) - 2*5
q is larger:
|-5 - (-3)| + |-5 - (-2)| = 5 - 3 + 5 - 2 = -sum(3, 2) + 2*5
p is negative, qs are positive (not in our example):
|7 - (-5)| + |6 - (-5)| = 7 + 5 + 6 + 5 = sum(7, 6) + 2*5
```
There are a few cases here, all of which can reuse already computed sums, as well as multiplication, if the arrays are ascending.
Sort and compute a prefix-sum array for `Q`, also recording the index where `q`s turn to positive if there is one. For each `p` find the start and end section of each case described above in either `O(log n)` or `O(1)` time. Add to the total by using the prefix-sum and a multiple of `p`. Complexity: `O(n log n)` time, `O(n)` space.
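A hedged Python sketch of that idea (my own illustration, not the answerer's code; it assumes plain lists of integers):
```python
from bisect import bisect_left
from itertools import accumulate

def sad_all_pairs(P, Q):
    # Sum of |p - q| over all pairs, in O((len(P) + len(Q)) * log(len(Q)))
    Q = sorted(Q)
    prefix = [0] + list(accumulate(Q))            # prefix[i] == sum(Q[:i])
    total = 0
    for p in P:
        i = bisect_left(Q, p)                     # Q[:i] < p <= Q[i:]
        total += p * i - prefix[i]                # pairs where p is the larger value
        total += (prefix[-1] - prefix[i]) - p * (len(Q) - i)  # pairs where q >= p
    return total

# sad_all_pairs([2, 4], [4, -3, -4, 4]) == 30, matching a brute-force check
```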
|
It may or may not be more efficient to add or subtract the (unmodified) difference depending on sign instead of ("unconditionally") "adding `abs()`".
I'd expect a contemporary compiler, even JIT, to detect the equivalence, though.
```
! Sum of Absolute Differences between every pair of elements of two arrays;
INTEGER PROCEDURE SAD2(a, b);
INTEGER ARRAY a, b;
BEGIN
INTEGER sum, i, j; ! by the book, declare j locally just like diff;
sum := 0;
FOR i := LOWERBOUND(a, 1) STEP 1 UNTIL UPPERBOUND(a, 1) DO
FOR j := LOWERBOUND(b, 1) STEP 1 UNTIL UPPERBOUND(b, 1) DO BEGIN
INTEGER diff;
diff := a(i) - b(j);
sum := if diff < 0 then sum - diff
else sum + diff;
END;
SAD2 := sum;
END SAD2;
```
For a sub-quadratic algorithm, see [the other answer](https://stackoverflow.com/a/48239655/3789665).
This may well be code for what that answer intended, not following PEP 8 to the dot:
```python
''' Given sequences A and B, SAD2 computes the sum of absolute differences
    for every element b from B subtracted from every element a from A.
    The usual SAD sums up absolute differences of pairs with like index, only.
'''
from bisect import bisect_right

class state(list):
    ''' Hold state for one sequence: sorted elements & processing state. '''
    def __init__(self, a):
        self.extend(sorted(a))
        self.total = 0
        ''' sum(self[:self.todo]) '''
        self.todo = 0
        ''' next index to do/#elements done '''

    def __str__(self):
        return list.__str__(self) + str(self.todo) + ', ' + str(self.total)

def SAD2(a, b):
    ''' return Sum of Absolute Differences of all pairs (a[x], b[y]). '''
    nPairs = len(a) * len(b)
    if nPairs < 2:
        return abs(a[0] - b[0]) if 0 < nPairs else None
    a = state(a)
    b = state(b)
    sad = 0
    while True:
        key = a[a.todo]
        identical = bisect_right(a, key, a.todo)
        local = 0
        # iterate 'til not lower
        # going to need i: no takewhile(lambda x: x < key, b[todo:])
        i = b.todo
        while i < len(b):
            val = b[i]
            if key <= val:
                break
            local += val
            i += 1
        # update SAD
        # account for elements in a[:a.todo] paired with b[b.todo:i]
        sad += local*a.todo - a.total*(i - b.todo)
        b.todo = i
        n_key = identical - a.todo
        local += b.total
        b.total = local
        # account for elements in a[a.todo:identical] paired with b[:i]
        sad += (key*i - local)*n_key
        if len(b) <= b.todo:
            rest = len(a) - identical
            if 0 < rest:
                sad += sum(a[identical:])*len(b) - b.total*rest
            return sad
        a.todo = identical
        a.total += key * n_key
        a, b = b, a
```
| 14,295
|
12,797,274
|
I've just installed Python 2.7 on windows along with IPython.
I'm used to running IPython from within Emacs on Linux, e.g.
```
M-x shell
```
Then type '`ipython`' at the prompt.
This works fine under Linux, but under Windows it hangs after printing the IPython banner text, i.e. it looks like it's working, but then you never get an IPython prompt.
I can load IPython (and Python) under Windows no problem from a standard cmd terminal, just not from within Emacs.
Anyone else experienced or hopefully solved this issue?
I get the same problem when trying to start plain old Python as well.
|
2012/10/09
|
[
"https://Stackoverflow.com/questions/12797274",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1041868/"
] |
In Windows it won't work as easily, and it's very annoying. First, make sure you have installed pyreadline.
Next, I have got it working with a bat file that is in my system path containing:
```
[ipython.bat]
@python.exe -i C:\devel\Python\2.7-bin\Scripts\ipython.py --pylab %*
```
Next get python-mode.el and delete ipython.el. The latter is deprecated and python-mode includes its functionality.
Change the path of the ipython.py file accordingly and save it. Now in your Emacs do the following in your init.el file.
```
(custom-set-variables
'(py-shell-name "ipython.bat"))
```
Alternatively, to achieve the last step, do C-h v <RET> py-shell-name and customize it and change it to ipython.bat or the full path of your ipython.bat if the scripts directory is not in your system path. Save for future sessions.
That should get your IPython shell working. There is one more caveat if you want multiple interactive matplotlib figures without hanging your IPython console. The only way I could get around this issue was to use IPython 0.10 instead of the current version.
|
I wish I had seen this post a while ago. My experiences with running Python and IPython from within Emacs are the following (using python(x,y)-2.7.3.0 on Windows 7). I tried a number of options:
Using ipython.el:
* It still works, provided that you change ipython-command. I do not recommend this, since you lose some functionality from ipython.el by changing ipython-command.
* You can show figures, but the prompt is frozen until you close the figure.
* Bare Python doesn't work.
Changing py-shell-name (as above):
* My figures always hang when I add the --pylab option.
* I always get an IPython terminal.
Introducing ipython.bat containing
```
@C:\\Python27\\python.exe -i -u C:\\Python27\\Scripts\\ipython-script.py %*
```
and changing the path such that ipython.bat is found before ipython.exe:
* IPython works, but still only one thread (you have to close the figure to return to the shell).
* Python also works, as do all the remaining functions from python-mode.el.
I still have to figure out how to return to the shell after opening a figure (within Emacs). I got it to work using Python 2.6 and an older version of IPython.
| 14,296
|
44,123,641
|
I am using the python libraries from the Assistant SDK for speech recognition via gRPC. I have the speech recognized and returned as a string calling the method `resp.result.spoken_request_text` from `\googlesamples\assistant\__main__.py` and I have the answer as an audio stream from the assistant API with the method `resp.audio_out.audio_data` also from `\googlesamples\assistant\__main__.py`
I would like to know if it is possible to have the answer from the service as a string as well (hoping it is available in the service definition or that it could be included), and how I could access/request the answer as string.
Thanks in advance.
|
2017/05/22
|
[
"https://Stackoverflow.com/questions/44123641",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3415195/"
] |
Currently (Assistant SDK Developer Preview 1), there is no direct way to do this. You can probably feed the audio stream into a Speech-to-Text system, but that really starts getting silly.
Speaking to the engineers on this subject while at Google I/O, they indicated that there are some technical complications on their end to doing this, but they understand the use cases. They need to see questions like this to know that people want the feature.
Hopefully it will make it into an upcoming Developer Preview.
|
Update: for
>
> google.assistant.embedded.v1alpha2
>
>
>
the assistant SDK includes the field `supplemental_display_text`
>
> which is meant to extract the assistant response as text which aids
> the user's understanding
>
>
>
or to be displayed on screens. Still making the text available to the developer. [Google Assistant documentation](https://developers.google.com/assistant/sdk/guides/service/integrate#text-response)
| 14,298
|
36,911,421
|
If a class contains two constructors that take in different types of arguments as shown here:
```
public class Planet {
public double xxPos; //its current x position
public double yyPos; //its current y position
public double xxVel; //its current velocity in the x direction
public double yyVel; //its current velocity in the y direction
public double mass; //its mass
public String imgFileName; //The name of an image in the images directory that depicts the planet
// constructor is like __init__ from python, this sets up the object when called like: Planet(arguments)
public Planet(double xP, double yP, double xV, double yV, double m, String img) {
xxPos = xP;
yyPos = yP;
xxVel = xV;
yyVel = yV;
mass = m;
imgFileName = img;
}
// second constructor
// how come testplanetconstructor knows to use this second one?
// does it know based on the argument type its being passed?
public Planet(Planet p) {
xxPos = p.xxPos;
yyPos = p.yyPos;
xxVel = p.xxVel;
yyVel = p.yyVel;
mass = p.mass;
imgFileName = p.imgFileName;
}
}
```
My primary question is:
1) How does another class with a main that calls this class determine which constructor to use?
If that is the case, what would happen if you have two constructors with:
2) the same type and number of arguments?
3) the same type but different number of arguments?
I realize that follow up questions are something that you should probably never do (aka something messy). I am just curious.
|
2016/04/28
|
[
"https://Stackoverflow.com/questions/36911421",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5249753/"
] |
>
> 1) How does another class with a main that calls this class determine
> which constructor to use?
>
>
>
The compiler follows the same process as for overloaded methods in static binding, by checking the unique method signature. To learn more about method signatures, [see this](https://docs.oracle.com/javase/tutorial/java/javaOO/methods.html).
```
public static void main(String args[]) {
double d1 = 0;
double d2 = 0;
double d3 = 0;
double d4 = 0;
double d5 = 0;
String img = "";
Planet p = new Planet(d1, d2, d3, d4, d5, img);// constructor with valid argument set
}
```
>
> 2) the same type and number of arguments?
>
>
>
It is actually not possible to write two methods/constructors with the same signature in a single class, e.g. the following code never compiles:
```
Planet(int i) { // compilation error: duplicate constructor Planet(int)
}

Planet(int j) { // compilation error: duplicate constructor Planet(int)
}
```
>
> 3) the same type but different number of arguments?
>
>
>
This is possible, just like creating / calling methods with different signatures,
e.g.
```
Planet p1 = new Planet(d1, d2, d3, d4, d5, img);
Planet p2 = new Planet(p1);
```
|
### 1) How does another class with a main that calls this class determine which constructor to use?
Classes don't determine anything, the programmer does, and he does it by passing the appropriate parameters. For example, if you have 2 constructors `public Test(int i)` and `public Test()`, when you call `new Test(5)` it will call the first, and if you do `new Test()` it will call the second.
### What if..? 2) the same type and number of arguments?
You can't. It will not compile.
### What if..? 3) the same type but different number of arguments?
The same as in 1).
| 14,299
|
27,849,023
|
**HTML**
--------
This is a form that accepts a user's input (url):
```
<form method="post" action="/" accept-charset="utf-8">
<input type="search" name="url" placeholder="Enter a url" />
<button>Go</button>
</form>
```
**PHP (Laravel)**
-----------------
This controller stores the value of the user's input (url) into a variable for use in the python script.
```
$url = Input::get('url');
$name = shell_exec('path/to/python ' . base_path() . '/test.py ' . $url);
return $name;
```
**Python**
----------
The script takes what the user input, grabs the text after the last forward slash, and displays it.
```
import sys

url = sys.argv[1]
name = url.split('/')[-1]
print(name)
```
Going back to the PHP, it returns the value of what was executed in the Python script. If a user inputs the url: `http://example.com/file.png` it will successfully return the string `file.png`
Everything about this works, but I'm realizing that I'm limiting myself, since the `shell_exec` command is only going to paste out the string that is returned from the python script. What do I need to do if I want to have multiple variables returned?
**Python in question**
----------------------
```
url = sys.argv[1]
name = url.split('/')[-1]
test = "pass this text too!"
# ? what goes here ?
```
So the HTML page should now return `file.png` and `pass this text too!`
**Am I going to need to return an array or json in the python and then extract the array/json in PHP?**
*NOTE: I am aware of security flaws/injections in these examples, these are stripped down versions of my actual code.*
|
2015/01/08
|
[
"https://Stackoverflow.com/questions/27849023",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2097870/"
] |
You can return an array, but be sure that the elements don't contain the delimiter,
like `print(','.join([name, test]))`.
Or you can encode to JSON like `json.dumps([name, test])`, then parse the JSON in PHP. The second one is better, of course.
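For instance, on the Python side (a sketch; the variable names follow the question, and the PHP side would then `json_decode` the string returned by `shell_exec`):
```python
import json
import sys

url = sys.argv[1]
name = url.split('/')[-1]
test = "pass this text too!"
# Print one JSON document so PHP can decode everything in a single step
print(json.dumps({"name": name, "test": test}))
```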
|
In this case, your Python script has to return a formatted response that the PHP script can parse properly to determine the variables and their values.
For example, your Python code could return something like this:
```
file_name=file.png;file_extension=png;creation_date=1/1/2015;
```
After that, in your PHP code you do the necessary parsing:
```
$varValues = explode(";", $PythonResult);
foreach($varValues as $vv) {
$temp = explode("=", $vv);
print 'Parameter : ' . $temp[0] . ' value : ' . $temp[1] . '<br />';
}
```
| 14,301
|
62,015,339
|
```
x = Flatten()(vgg.output)
variable = function (variable)
```
I can't find this type of expression in Python. Can anyone help me understand the above expression?
Thanks in advance.
|
2020/05/26
|
[
"https://Stackoverflow.com/questions/62015339",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5952135/"
] |
The `Flatten()` function here returns another function, which takes `vgg.output` as an argument. This happens because everything in Python is a first-class object, so you can return a function as the return value of a function. This will be clear with an example:
Let's say we have a function `square` which returns the square of a number:
```
def square(number):
    return number**2
```
So calling square on a number would give:
```
>>> square(3)
9
```
Now, let us define another function that returns the `square` function, and only the function:
```
def return_func():
    return square
```
Calling `return_func` we will get the `square` function back:
```
>>> some_func = return_func()
>>> some_func
<function __main__.square(number)>
>>> some_func == square == return_func()
True
```
So calling `square(number)` should be equivalent to `return_func()(number)`, i.e.
```
>>> return_func()(3)
9
```
So in your example, `Flatten()` is equivalent to `return_func()` and `number` is equivalent to `vgg.output`.
|
`x = Flatten()(vgg.output)` calls the function returned by `Flatten()` and assigns the returned data to `x`.
`v = function` makes `v` an alias for `function`; you can then do `v()` to call it.
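A tiny sketch of that aliasing idea:
```python
def greet():
    return "hello"

v = greet      # v is just another name for the same function object
print(v())     # hello
```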
| 14,302
|
72,531,611
|
I have Miniconda3 on a Linux system (Ubuntu 22.04). The environment has Python 3.10 as well as a functioning (in Python) installation of PyTorch (installed following official instructions).
I would like to setup a CMake project that uses PyTorch C++ API. The reason is not important and also I am aware that it's beta (the official documentation states that), so instability and major changes are not excluded.
Currently I have this very minimal `CMakeLists.txt`:
```
cmake_minimum_required(VERSION 3.19) # or whatever version you use
project(PyTorch_Cpp_HelloWorld CXX)
set(PYTORCH_ROOT "/home/$ENV{USER}/miniconda/envs/ML/lib/python3.10/site-packages/torch")
list(APPEND CMAKE_PREFIX_PATH "${PYTORCH_ROOT}/share/cmake/Torch/")
find_package(Torch REQUIRED CONFIG)
...
# Add executable
# Link against PyTorch library
```
When I try to configure the project I'm getting error:
>
> CMake Error at CMakeLists.txt:21 (message): message called with
> incorrect number of arguments
>
>
> -- Could NOT find Protobuf (missing: Protobuf\_LIBRARIES Protobuf\_INCLUDE\_DIR)
> -- Found Threads: TRUE CMake Warning at /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Caffe2/public/protobuf.cmake:88
> (message): Protobuf cannot be found. Depending on whether you are
> building Caffe2 or a Caffe2 dependent library, the next warning /
> error will give you more info. Call Stack (most recent call first):
> /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:56
> (include)
>
> /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68
> (find\_package) CMakeLists.txt:23 (find\_package)
>
>
> CMake Error at
> /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:58
> (message): Your installed Caffe2 version uses protobuf but the
> protobuf library cannot be found. Did you accidentally remove it,
> or have you set the right CMAKE\_PREFIX\_PATH? If you do not have
> protobuf, you will need to install protobuf and set the library path
> accordingly. Call Stack (most recent call first):
>
> /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68
> (find\_package) CMakeLists.txt:23 (find\_package)
>
>
>
I installed `libprotobuf` (again via `conda`) but, while I can find the library files, I can't find any `*ProtobufConfig.cmake` or anything remotely related to protobuf and its CMake setup.
Before I go fighting windmills, I would like to ask here what the proper setup would be. I am guessing building from source is always an option; however, this will pose a huge overhead on the people I collaborate with.
|
2022/06/07
|
[
"https://Stackoverflow.com/questions/72531611",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1559401/"
] |
```
import os
import pandas as pd

base_dir = '/path/to/dir'

# Get all files in the directory
data_list = []
for file in os.listdir(base_dir):
    # If file is a json, construct its full path and open it, append all json data to list
    if file.endswith('json'):
        json_path = os.path.join(base_dir, file)
        json_data = pd.read_json(json_path, lines=True)
        data_list.append(json_data)
print(data_list)
```
|
You probably need to build a list of DataFrames. You may not be able to process every file in the given directory so try this:
```
import pandas as pd
from glob import glob
from os.path import join

BASEDIR = 'Datasets'
dataframes = []
for file in glob(join(BASEDIR, '*.json')):
    try:
        dataframes.append(pd.read_json(file))
    except ValueError:
        print(f'Unable to process {file}')
print(f'Successfully constructed {len(dataframes)} dataframes')
```
| 14,304
|
38,690,035
|
I have a Python script that creates a Lambda script in AWS along with all the policies and triggers. I use python boto3 library for that. I create the zip file for the lambda as on-the-fly rather than uploading a static zip file from the hard drive. I use this simple code from below to create my zip file. It creates the zip file without any problems and my python code uploads this zip file as a lambda script and I can view my lambda script in the AWS without any problems. But when I run my lambda script it gives me the module not found error even though I can clearly see that both the module name and the file name does exist and is view-able.
**Unable to import module 'xxxx': No module named xxxx**
In the file system I double click that zip file that was created by this code and see that the content is created and everything looks normal.
If I bypass zipping on the fly and create the zip statically using WinZip and let the rest of the Python & boto3 script upload this file then it works just fine.
```
def CreateLambdaZip(self, fileName, fileContent):
    with zipfile.ZipFile('Lambda/' + fileName + '.zip', 'w') as myzipc:
        myzipc.writestr(fileName + '.py', fileContent)
    myzipc.close()
```
It kinda looks like for the zip file I'm skipping some special headers that are needed by AWS Lambda. Is there such a thing? Because in the file system the zip file created by the Python code and the one created by WinZip are exactly the same. So I know there's nothing wrong with the lambda script.
**Update**: I'm uploading the zip file using the below code that reads the zip file which was created using the above snippet.
```
with open('Lambda/' + fileName + '.zip', 'rb') as zipFile:
    func = boto3.client("Lambda").create_function(
        FunctionName=lambdaFunction,
        Runtime='python2.7',
        Role=role['Role']['Arn'],
        Handler=fileName + "." + functionName,
        Description=description,
        Timeout=10,
        MemorySize=256,
        Publish=True,
        Code={'ZipFile': zipFile.read()},
    )
```
When I use zipFile.read() I get 2 different headers for the same content when I zip it using WinZip and when I zip it using Python's module.
Zip file that's created programmatically using Python
```
b'PK\x03\x04\x14\x00\x00\x00\x00\x00\xe4~\x01IO\x96J=Z\x07\x00\x00Z\x07\x00\x00\x19\x00\x00\x00schedule-ec2-snapshots.pyimport json\nimport boto3\nimport time\nfrom datetime import date, timedelta\n\nprint(\'Loading scheduled EC2 backup actions\')\n\ndef create_snapshots(event, context):\n """\n Lambda function that executes daily snapshots for the instances that
```
and zipfile created by WinZip
```
b'PK\x03\x04\x14\x00\x02\x00\x08\x004X\xfcH\x88\x1f\xce\xb5&\x03\x00\x00b\x07\x00\x00\x19\x00\x00\x00schedule-ec2-snapshots.py\x8dU]k\xdb@\x10|7\xf4?,\nA\x12qL\xda\x06B\r~I\x93Bh\x9b\x87&\xf4E\x15\xe1\xac[\xdb\xd7HwBw2\t\xc1\xff\xbd{+\xeb\xcb.\xb4\n\xc4\xba\xdb\xd1\xec\xce\xdc\xae\xa4\x8a\xd2T\x0e~[\xa3\'\xaa\xb9_\x1ag>\xb6\x0b\xa7\n\x9c\xac*S\x80\x14\x0e\xfd\n\xf6\x11\xbf\x9er\\b\xee\xc4dRVJ\xbb(\xfcf\x84Tz\r6\xdb\xa0\xacs\x94p\xfb\xf9\x03,E\xf6\\\x97
```
|
2016/08/01
|
[
"https://Stackoverflow.com/questions/38690035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4163842/"
] |
With the info above I was able to start the in-memory solution. The deployment of that zip file worked but I could not use the resulting function. Got error:
```
Unable to import module '<function-name>': No module named <function-name>
```
I got it to work by specifying the file permissions.
I then use the in-mem-zip to create an AWS lambda function.
Setup:
**file\_map** is a dictionary of full\_path->file\_bytes.
**files** is a list of full\_paths
```
def create_lambda_function(function_name, desc, role, handler, file_map, files):
    zip_contents = create_in_mem_zip_archive(file_map, files)
    result = lambda_code.create_function(
        FunctionName=function_name,
        Runtime="python2.7",
        Description=desc,
        Role=role,
        Handler=handler,
        Code={'ZipFile': zip_contents},
    )
    return result

def create_in_mem_zip_archive(file_map, files):
    buf = io.BytesIO()
    logger.info("Building zip file: " + str(files))
    with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zfh:
        for file_name in files:
            file_blob = file_map.get(file_name)
            if file_blob is None:
                logger.error("Missing file {} from files".format(file_name))
                continue
            try:
                info = zipfile.ZipInfo(file_name)
                info.date_time = time.localtime()
                info.compress_type = zipfile.ZIP_DEFLATED
                info.external_attr = 0777 << 16L  # give full access
                # info.external_attr = 0644 << 16L  # -r-wr--r--
                # info.external_attr = 0755 << 16L  # -rwxr-xr-x
                zfh.writestr(info, file_blob)
            except Exception as ex:
                logger.info("Error reading file: " + file_name + ", error: " + ex.message)
    buf.seek(0)
    return buf.read()
```
|
I have experienced exactly the same problem you have. My solution is: do NOT use an on-the-fly zip file. Create a real zip file and add real files to it, and it just works. You can do that even in the Lambda environment: by creating a file path like "/tmp/yourfile.txt" you can create a temporary real file while the Lambda executes.
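A minimal sketch of that approach (the function and path names are only illustrative): write the archive to a real file under `/tmp`, then read the bytes back for `create_function`.
```python
import zipfile

def create_lambda_zip(file_name, file_content):
    # Write a real zip file on disk (e.g. under /tmp inside Lambda) ...
    zip_path = '/tmp/' + file_name + '.zip'
    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(file_name + '.py', file_content)
    # ... then return its bytes for the Code={'ZipFile': ...} parameter
    with open(zip_path, 'rb') as fh:
        return fh.read()
```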
| 14,306
|
67,542,736
|
I'm trying to make lists of companies from long strings.
The company names tend to be randomly dispersed through the strings, but they always have a comma and a space before the names ', ', and they always end in Inc, LLC, Corporation, or Corp.
In addition, there is always a company listed at the very beginning of the string. It goes something like:
```
Companies = 'Apples Inc, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, Bananas LLC,
Carrots Corp, xxxx.'
```
I've been trying to use regex to crack this nut, but I am too inexperienced with python.
My closest attempt went like this:
```
r = re.compile(r' .*? Inc | .*? LLC | .*? Corporation | .*? Corp',
flags = re.I | re.X)
r.findall(Companies)
```
But my output is always some variation of
```
['Apples Inc', ', xxxxxxxxxxxxxxxxxxx, Bananas LLC', ', Carrots Corp']
```
When I need it to be like
```
['Apples Inc', 'Bananas LLC', 'Carrots Corp']
```
I am vexed and I humbly ask for assistance.
**EDIT**
I have figured out a method to find the company name if it includes a comma, like Apples, Inc.
Before I run any analysis on the long string, I will have the program look if any commas exist 2 spaces before the Inc., and then delete them.
Then I will run the program to list out the company names.
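For what it's worth, here is a hedged sketch of that approach (the pattern and sample string are illustrative and assume no commas inside the company names):
```python
import re

companies = 'Apples Inc, xxxxxxxxx, Bananas LLC, Carrots Corp, xxxx.'
pattern = r'(?:^|, )([^,]+? (?:Inc|LLC|Corporation|Corp))\b'
print(re.findall(pattern, companies, flags=re.I))
# ['Apples Inc', 'Bananas LLC', 'Carrots Corp']
```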
|
2021/05/15
|
[
"https://Stackoverflow.com/questions/67542736",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15930763/"
] |
In C, the `for` loop is a "check before body" operation; you want the "check after body" variant, a `do while` loop, something like:
```c
int xs[] = {1,2,3,4,5};
{
int i = 0;
do {
foo(xs[i]);
} while (xs[i++] != 4);
}
```
You'll notice I've enclosed the entire chunk in its own scope (the outermost `{}` braces). This is just to limit the existence of `i` to make it conform more with the `for` loop behaviour.
In terms of a complete program showing this, the following code:
```c
#include <stdio.h>
void foo(int x) {
printf("%d\n", x);
}
int main(void) {
int xs[] = {1,2,3,4,5};
{
int i = 0;
do {
foo(xs[i]);
} while (xs[i++] != 4);
}
return 0;
}
```
outputs:
```none
1
2
3
4
```
---
As an aside, like you, I'm *also* not that keen of the two other solutions you've seen.
For the first solution, that won't actually work in this case since the lifetime of `i` is limited to the `for` loop itself (the `int` in the `for` statement initialisation section makes this so).
That means `i` will not have the value you expect after the loop. Either there will *be* no `i` (a compile-time error) or there *will* be an `i` which was hidden within the `for` loop and therefore unlikely to have the value you expect, leading to insidious bugs.
For the second, I will sometimes break loops within the body but generally only at the start of the body so that the control logic is still visible in a single area. I tend to do that if the `for` condition would be otherwise very lengthy but there are other ways to do this.
|
Try processing the loop as long as *the previous* element (if available) is not `4`:
```
int xs[] = {1,2,3,4,5};
for (int i = 0; i == 0 || xs[i - 1] != 4; i++) {
foo(xs[i]);
}
```
| 14,307
|
29,205,052
|
I am creating a project in django-oscar with the help of the <http://django-oscar.readthedocs.org/en/latest/internals/getting_started.html> tutorial,
and I installed every package they mention in the docs. After I run my project I get "A server error occurred. Please contact the administrator." in the UI, and the error thrown in the terminal is:
```
File "/usr/lib/python2.7/wsgiref/handlers.py", line 85, in run
self.result = application(self.environ, self.start_response)
File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/django/contrib/staticfiles/handlers.py", line 64, in __call__
return self.application(environ, start_response)
File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 168, in __call__
self.load_middleware()
File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 44, in load_middleware
mw_class = import_string(middleware_path)
File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/django/utils/module_loading.py", line 26, in import_string
module = import_module(module_path)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/oscar/apps/basket/middleware.py", line 8, in <module>
Applicator = get_class('offer.utils', 'Applicator')
File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/oscar/core/loading.py", line 67, in get_class
return get_classes(module_label, [classname])[0]
File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/oscar/core/loading.py", line 124, in get_classes
oscar_module = _import_module(oscar_module_label, classnames)
File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/oscar/core/loading.py", line 159, in _import_module
return __import__(module_label, fromlist=classnames)
File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/oscar/apps/offer/utils.py", line 10, in <module>
ConditionalOffer = get_model('offer', 'ConditionalOffer')
File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/oscar/core/loading.py", line 321, in get_model
return apps.get_model(app_label, model_name)
File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/django/apps/registry.py", line 202, in get_model
return self.get_app_config(app_label).get_model(model_name.lower())
File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/django/apps/registry.py", line 150, in get_app_config
raise LookupError("No installed app with label '%s'." % app_label)
LookupError: No installed app with label 'offer'.
[23/Mar/2015 07:22:20] "GET / HTTP/1.1" 500 59
```
my settings.py is:
```
"""
Django settings for frobshop project.
"""
from oscar.defaults import *
from oscar import OSCAR_MAIN_TEMPLATE_DIR
import os
def root(x):
    return os.path.join(os.path.abspath(os.path.dirname(__file__)), '..', x)
SECRET_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = ['*']
# Application definition
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.flatpages',
'compressor',
)
MIDDLEWARE_CLASSES = (
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'oscar.apps.basket.middleware.BasketMiddleware',
'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware'
)
AUTHENTICATION_BACKENDS = (
'oscar.apps.customer.auth_backends.EmailBackend',
'django.contrib.auth.backends.ModelBackend',
)
ROOT_URLCONF = 'frobshop.urls'
WSGI_APPLICATION = 'frobshop.wsgi.application'
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': root( 'db.sqlite3'),
'ATOMIC_REQUESTS': True,
}
}
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
SITE_ID = 1
TEMPLATE_CONTEXT_PROCESSORS = (
"django.contrib.auth.context_processors.auth",
"django.core.context_processors.request",
"django.core.context_processors.debug",
"django.core.context_processors.i18n",
"django.core.context_processors.media",
"django.core.context_processors.static",
"django.core.context_processors.tz",
"django.contrib.messages.context_processors.messages",
'oscar.apps.search.context_processors.search_form',
'oscar.apps.promotions.context_processors.promotions',
'oscar.apps.checkout.context_processors.checkout',
'oscar.apps.customer.notifications.context_processors.notifications',
'oscar.core.context_processors.metadata',
)
MEDIA_ROOT = root('media')
MEDIA_URL = '/media/'
STATIC_ROOT = root('staticstorage')
STATIC_URL = '/static/'
STATICFILES_DIRS = (
root('static'),
)
TEMPLATE_DIRS = [
root('templates'),
OSCAR_MAIN_TEMPLATE_DIR
]
#
# HAYSTACK_CONNECTIONS = {
# 'default': {
# 'ENGINE': 'haystack.backends.solr_backend.SolrEngine',
# 'URL': 'http://127.0.0.1:8983/solr',
# 'INCLUDE_SPELLING': True,
# },
# }
HAYSTACK_CONNECTIONS = {
'default': {
'ENGINE': 'haystack.backends.simple_backend.SimpleEngine',
},
}
```
Thanks in advance, please help me.
|
2015/03/23
|
[
"https://Stackoverflow.com/questions/29205052",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3632705/"
] |
Try this
```
from oscar import get_core_apps
INSTALLED_APPS = [
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.flatpages',
'compressor',
] + get_core_apps()
```
`get_core_apps` will include
```
OSCAR_CORE_APPS = [
'oscar',
'oscar.apps.analytics',
'oscar.apps.checkout',
'oscar.apps.address',
'oscar.apps.shipping',
'oscar.apps.catalogue',
'oscar.apps.catalogue.reviews',
'oscar.apps.partner',
'oscar.apps.basket',
'oscar.apps.payment',
'oscar.apps.offer',
'oscar.apps.order',
'oscar.apps.customer',
'oscar.apps.promotions',
'oscar.apps.search',
'oscar.apps.voucher',
'oscar.apps.wishlists',
'oscar.apps.dashboard',
'oscar.apps.dashboard.reports',
'oscar.apps.dashboard.users',
'oscar.apps.dashboard.orders',
'oscar.apps.dashboard.promotions',
'oscar.apps.dashboard.catalogue',
'oscar.apps.dashboard.offers',
'oscar.apps.dashboard.partners',
'oscar.apps.dashboard.pages',
'oscar.apps.dashboard.ranges',
'oscar.apps.dashboard.reviews',
'oscar.apps.dashboard.vouchers',
'oscar.apps.dashboard.communications',
# 3rd-party apps that oscar depends on
'haystack',
'treebeard',
'sorl.thumbnail',
'django_tables2',
]
```
|
You don't import Oscar's core apps. You need to import them and add them to INSTALLED\_APPS.
You can do this by importing `from oscar import get_core_apps`.
`get_core_apps` is a function that returns a list of the default and required Oscar core apps,
so you need to concatenate it:
```
INSTALLED_APPS = [
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.flatpages',
'compressor',
] + get_core_apps()
```
**Note:** If your INSTALLED\_APPS is a tuple (not a list) you will get an error, because `get_core_apps()` returns a list. So you need to change your INSTALLED\_APPS to a list.
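For example (a sketch; it assumes the `get_core_apps` helper that this answer's Oscar version provides):
```python
from oscar import get_core_apps

INSTALLED_APPS = (          # originally a tuple
    'django.contrib.auth',
    'django.contrib.contenttypes',
)
# Convert the tuple to a list before concatenating with get_core_apps()
INSTALLED_APPS = list(INSTALLED_APPS) + get_core_apps()
```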
| 14,309
|
24,331,551
|
I was wondering how I would be able to sort a whole array by the values in one of its columns.
I have :
```
array([5,2,8,2,4])
```
and:
```
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
```
I want to append the first array to the second one like this:
```
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[5, 2, 8, 2, 4]])
```
And then sort the array by the appended row to get either this:
```
array([[1, 3, 4, 0, 2],
[6, 8, 9, 5, 7],
[11, 13, 14, 10, 12],
[16, 18, 19, 15, 17],
[21, 23, 24, 20, 22],
[2, 2, 4, 5, 8]])
```
or this:
```
array([[ 2, 1, 3, 4, 0],
[ 7, 6, 8, 9, 5],
[12, 11, 13, 14, 10],
[17, 16, 18, 19, 15],
[22, 21, 23, 24, 20],
[ 8, 5, 4, 2, 2]])
```
And then remove the appended column to get:
```
array([[1, 3, 4, 0, 2],
[6, 8, 9, 5, 7],
[11, 13, 14, 10, 12],
[16, 18, 19, 15, 17],
[21, 23, 24, 20, 22]])
```
or:
```
array([[ 2, 1, 3, 4, 0],
[ 7, 6, 8, 9, 5],
[12, 11, 13, 14, 10],
[17, 16, 18, 19, 15],
[22, 21, 23, 24, 20]])
```
Is there code to carry out this procedure? I am very new to Python. Thanks a lot!
|
2014/06/20
|
[
"https://Stackoverflow.com/questions/24331551",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3712008/"
] |
You can use [numpy.argsort](http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html) to get a list with the sorted indices of your array. Using that you can then rearrange the columns of the matrix.
```
import numpy as np
c = np.array([5,2,8,2,4])
a = np.array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
i = np.argsort(c)
a = a[:,i]
```
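If the descending order shown in the question is wanted instead, the reversed index array can be applied to the original, unsorted array; a quick, self-contained sketch:
```python
import numpy as np

c = np.array([5, 2, 8, 2, 4])
a = np.array([[ 0,  1,  2,  3,  4],
              [ 5,  6,  7,  8,  9],
              [10, 11, 12, 13, 14],
              [15, 16, 17, 18, 19],
              [20, 21, 22, 23, 24]])

i_desc = np.argsort(c)[::-1]   # indices that sort the key row in descending order
print(a[:, i_desc])
```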
|
You don't need numpy to do this (although if you are using numpy, you can just use the `.transpose()` method of the array class).
What this essentially does is transpose your array so that it's `array[column][row]`, and then take each column and pair it with the sort keys you provided in a list of tuples (the `zip(sortKeys, a)` bit). Then it sorts this list of tuples. By default, sort will order tuples based on their first value, then their second value, then third, etc. Now you have the columns in order.
Then `aNew = [...]` just extracts the columns and creates your new array, still in the `array[column][row]` format, and then transposes it again.
```
a = [[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]]
#transpose a
a = zip(*a)
sortKeys = [5,2,8,2,4]
b = zip(sortKeys, a)
aNew = [row[1] for row in sorted(b)]
#transpose a back
aNew = zip(*aNew)
print aNew
```
| 14,310
|
54,648,040
|
I would like to get the ELK version through the REST API or by parsing HTML.
I searched the API documentation without finding anything.
Re-edit:
In Python ... I haven't found anything better than
```
re.findall(r"version":"(\d\.\d\.\d)"", requests.get(my_elk).content.decode())[0]
```
|
2019/02/12
|
[
"https://Stackoverflow.com/questions/54648040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3671796/"
] |
Elasticsearch gives JSON, not HTML. So, you could use `jq`
```
$ curl -s localhost:9200 | jq '.version.number'
6.6.0
```
In Python, please don't use `re` module... Use `json` module and actually parse that content
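A minimal sketch with the `requests` library (the host and port are assumptions):
```python
import requests

info = requests.get("http://localhost:9200").json()
print(info["version"]["number"])   # e.g. "6.6.0"
```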
|
There's no HTML, but if you call `GET /` in Kibana's Console or `curl -XGET http://localhost:9200/`, the return will be:
```
{
"name" : "instance-0000000039",
"cluster_name" : "c2edd39f6fa24b0d8e5c34e8d1d19849",
"cluster_uuid" : "VBkvp8OmTCaVuVvMioS3SA",
"version" : {
"number" : "6.6.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "a9861f4",
"build_date" : "2019-01-24T11:27:09.439740Z",
"build_snapshot" : false,
"lucene_version" : "7.6.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
```
So all you need to do is get `version.number` from the JSON response.
| 14,311
|
62,388,691
|
I am new to the Python language. I want to read the data from a text file (multiple lines in the text file), then use the data that was read from the text file with the dictionary function.
```
def readCmd():
    f = open('cmd.txt', "r")
    line = f.readline()
    for line in f:
        print(line)
        time.sleep(1)
        return str(line)
    f.close()

def zero():
    print("Hi 0")

def one():
    print("Test 1")

def two():
    print("end 2")

def num_to_func_to_str(argument):
    switcher = {
        "Hi": zero,
        "test": one,
        "end": two,
    }
    print(switcher.get(argument, "Please enter only 'Hi', 'test' and 'end'"))

def main():
    #readCmd()
    while 1:
        time.sleep(0.5)
        num_to_func_to_str(readCmd())

if __name__ == "__main__":
    main()
```
The above code is the code that I tried. It showed just the second line and did not go to the dictionary (switcher) condition; the code skipped straight to `print(switcher.get(argument,"Please enter only 'Hi', 'test' and 'end'"))`.
The data in text file as below.
```
start
Hi
test
Hi
test
Hi
test
Hi
test
Hi
test
Hi
test
end
```
Can anyone suggest how to solve this?
Thanks!
|
2020/06/15
|
[
"https://Stackoverflow.com/questions/62388691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11240783/"
] |
You need to parse the input field value to an integer:
```js
parseInt($('#corporateA').val(), 10) // 89 (integer)
```
Also, as a tip, avoid prefixing the dollar-sign to non-jQuery objects.
### Proper usage
```js
let $corporateA = $('#corporateA'); // Storing a jQuery DOM object
let valA = parseInt($corporateA.val(), 10);
```
---
As a jQuery plugin
------------------
You could also write your own jQuery plugin, to make the call convenient and succinct.
```js
(function($) {
$.fn.intVal = function(radix) {
return parseInt(this.val(), radix || 10);
};
})(jQuery);
let corporateA = $('#corporate-a').intVal();
let corporateB = $('#corporate-b').intVal();
let corporateE = $('#corporate-e').intVal();
```
---
Working demo
------------
HTML conventions dictate that element IDs and classes should be kebab-case or snake-case, rather than lower camel-case.
```js
$('.my-cor-opt').change(function(e) {
let corporateA = parseInt($('#corporate-a').val(), 10);
let corporateB = parseInt($('#corporate-b').val(), 10);
let corporateE = parseInt($('#corporate-e').val(), 10);
let corElevators = Math.ceil(corporateE * (corporateA + corporateB) / 1000);
let corColumns = Math.ceil((corporateA + corporateB) / 20);
let elevatorsPerColumnsCor = Math.ceil(corElevators / corColumns);
let corTotalElevators = elevatorsPerColumnsCor * corColumns;
alert(corTotalElevators); // 25
});
$('.my-cor-opt').trigger('change');
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<input id="corporate-a" value="89" />
<input id="corporate-b" value="6" />
<input id="corporate-e" value="240" />
<select class="my-cor-opt"></select>
```
|
You need to parse the values of the input fields, which are of type string into numbers.
There are two main strategies to achieve that.
1. Prefix every input field with a unary plus operator
2. Parse with the `parseInt` function
If you fail to parse the string into numbers, the binary `plus` operator will do string concatenation instead of addition
```
'1' + '2' + '3' // '123'
1 + 2 + 3 // 6
+'1' + +'2' + +'3' // 6
parseInt('1', 10) + parseInt('2', 10) + parseInt('3', 10) // 6
```
See here for more details: [parseInt vs unary plus, when to use which?](https://stackoverflow.com/questions/17106681/parseint-vs-unary-plus-when-to-use-which)
Unary plus operator
===================
```js
"use strict";
console.clear();
$(".myCorOpt").change(function () {
var $corporateA = +$("#corporateA").val();
var $corporateB = +$("#corporateB").val();
var $corporateE = +$("#corporateE").val();
var $corElevators = Math.ceil(
($corporateE * ($corporateA + $corporateB)) / 1000
);
var $corColumns = Math.ceil(($corporateA + $corporateB) / 20);
var $elevatorsPerColumnsCor = Math.ceil($corElevators / $corColumns);
var $corTotalElevators = $elevatorsPerColumnsCor * $corColumns;
$('#corElevators').val($corElevators);
$('#corColumns').val($corColumns);
$('#elevatorsPerColumnsCor').val($elevatorsPerColumnsCor);
$('#corTotalElevators').val($corTotalElevators);
});
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<form class="myCorOpt">
<table>
<tr><td>Corporate A</td><td><input id="corporateA" placeholder="corporateA" value="89"></td></tr>
<tr><td>Corporate B</td><td><input id="corporateB" placeholder="corporateB" value="6"></td></tr>
<tr><td>Corporate E</td><td><input id="corporateE" placeholder="corporateE"></td></tr>
<tr><td colspan="2"><hr></td></tr>
<tr><td>Elevators</td><td><input id="corElevators"></td></tr>
<tr><td>Columns</td><td><input id="corColumns"></td></tr>
<tr><td>Elevators per Columns</td><td><input id="elevatorsPerColumnsCor"></td></tr>
<tr><td>Total Elevators</td><td><input id="corTotalElevators"></td></tr>
</form>
```
`parseInt`
==========
```js
"use strict";
console.clear();
$(".myCorOpt").change(function () {
var $corporateA = parseInt($("#corporateA").val(), 10);
var $corporateB = parseInt($("#corporateB").val(), 10);
var $corporateE = parseInt($("#corporateE").val(), 10);
var $corElevators = Math.ceil(
($corporateE * ($corporateA + $corporateB)) / 1000
);
var $corColumns = Math.ceil(($corporateA + $corporateB) / 20);
var $elevatorsPerColumnsCor = Math.ceil($corElevators / $corColumns);
var $corTotalElevators = $elevatorsPerColumnsCor * $corColumns;
$('#corElevators').val($corElevators);
$('#corColumns').val($corColumns);
$('#elevatorsPerColumnsCor').val($elevatorsPerColumnsCor);
$('#corTotalElevators').val($corTotalElevators);
});
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<form class="myCorOpt">
<table>
<tr><td>Corporate A</td><td><input id="corporateA" placeholder="corporateA" value="89"></td></tr>
<tr><td>Corporate B</td><td><input id="corporateB" placeholder="corporateB" value="6"></td></tr>
<tr><td>Corporate E</td><td><input id="corporateE" placeholder="corporateE"></td></tr>
<tr><td colspan="2"><hr></td></tr>
<tr><td>Elevators</td><td><input id="corElevators"></td></tr>
<tr><td>Columns</td><td><input id="corColumns"></td></tr>
<tr><td>Elevators per Columns</td><td><input id="elevatorsPerColumnsCor"></td></tr>
<tr><td>Total Elevators</td><td><input id="corTotalElevators"></td></tr>
</form>
```
| 14,313
|
63,881,599
|
I am trying to use the [kubernetes-client](https://github.com/kubernetes-client) in python in order to create an horizontal pod autoscaler in kubernetes. To do so i make use of the [create\_namespaced\_horizontal\_pod\_autoscaler](https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/AutoscalingV2beta2Api.md#create_namespaced_horizontal_pod_autoscaler)() function that belongs to the [AutoscalingV2beta2Api](https://github.com/kubernetes-client/python/tree/master/kubernetes).
My code is the following:
```
my_metrics = []
my_metrics.append(client.V2beta2MetricSpec(type='Pods', pods= client.V2beta2PodsMetricSource(metric=client.V2beta2MetricIdentifier(name='cpu'), target=client.V2beta2MetricTarget(average_utilization='50',type='Utilization'))))
body = client.V2beta2HorizontalPodAutoscaler(
api_version='autoscaling/v2beta2',
kind='HorizontalPodAutoscaler',
metadata=client.V1ObjectMeta(name='php-apache'),
spec= client.V2beta2HorizontalPodAutoscalerSpec(
max_replicas=10,
min_replicas=1,
metrics = my_metrics,
scale_target_ref = client.V2beta2CrossVersionObjectReference(kind='Object',name='php-apache')
))
v2 = client.AutoscalingV2beta2Api()
ret = v2.create_namespaced_horizontal_pod_autoscaler(namespace='default', body=body, pretty=True)
```
And the response i get:
```
HTTP response body: {
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "HorizontalPodAutoscaler in version \"v2beta2\" cannot be handled as a HorizontalPodAutoscaler: v2beta2.HorizontalPodAutoscaler.Spec: v2beta2.HorizontalPodAutoscalerSpec.Metrics: []v2beta2.MetricSpec: v2beta2.MetricSpec.Pods: v2beta2.PodsMetricSource.Target: v2beta2.MetricTarget.AverageUtilization: readUint32: unexpected character: \ufffd, error found in #10 byte of ...|zation\": \"50\", \"type|..., bigger context ...|\"name\": \"cpu\"}, \"target\": {\"averageUtilization\": \"50\", \"type\": \"Utilization\"}}, \"type\": \"Pods\"}], \"m|...",
"reason": "BadRequest",
"code": 400
}
```
I just want to use the kubernetes-client to call the equivalent of:
```
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
```
Any working example would be more than welcome.
|
2020/09/14
|
[
"https://Stackoverflow.com/questions/63881599",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1488184/"
] |
The example below permits the creation of a memory-based HPA via the latest version of the [k8s python client](https://github.com/kubernetes-client/python) (v12.0.1).
```
my_metrics = []
my_metrics.append(client.V2beta2MetricSpec(type='Resource', resource= client.V2beta2ResourceMetricSource(name='memory',target=client.V2beta2MetricTarget(average_utilization= 30,type='Utilization'))))
my_conditions = []
my_conditions.append(client.V2beta2HorizontalPodAutoscalerCondition(status = "True", type = 'AbleToScale'))
status = client.V2beta2HorizontalPodAutoscalerStatus(conditions = my_conditions, current_replicas = 1, desired_replicas = 1)
body = client.V2beta2HorizontalPodAutoscaler(
api_version='autoscaling/v2beta2',
kind='HorizontalPodAutoscaler',
metadata=client.V1ObjectMeta(name=self.app_name),
spec= client.V2beta2HorizontalPodAutoscalerSpec(
max_replicas=self.MAX_PODS,
min_replicas=self.MIN_PODS,
metrics = my_metrics,
scale_target_ref = client.V2beta2CrossVersionObjectReference(kind = 'Deployment', name = self.app_name, api_version = 'apps/v1'),
),
status = status)
v2 = client.AutoscalingV2beta2Api()
ret = v2.create_namespaced_horizontal_pod_autoscaler(namespace='default', body=body, pretty=True)
```
|
The following error:
>
>
> ```
> v2beta2.MetricTarget.AverageUtilization: readUint32: unexpected character: \ufffd, error found in #10 byte of ..
>
> ```
>
>
means that it's expecting a `Uint32` and you are passing a `string`:
```
target=client.V2beta2MetricTarget(average_utilization='50',type='Utilization')
HERE ------^^^^
```
Change it to an integer value and it should work.
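For example, a minimal sketch of the corrected metric target (same `client` import as in the question; only the type of `average_utilization` changes):
```
# average_utilization must be an integer, not a string
target = client.V2beta2MetricTarget(average_utilization=50, type='Utilization')
```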
---
The same information you can find in [python client documentation](https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V2beta2MetricTarget.md) (notice the type column).
| 14,314
|
12,439,762
|
Given this (among more...):
```
compile_coffee() {
echo "Compile COFFEESCRIPT files..."
i=0
for folder in ${COFFEE_FOLDER[*]}
do
for file in $folder/*.coffee
do
file_name=$(echo "$file" | awk -F "/" '{print $NF}' | awk -F "." '{print $1}')
file_destination_path=${COFFEE_DESTINATION_FOLDER[${i}]}
file_destination="$file_destination_path/$file_name.js"
if [ -f $file_path ]; then
echo "+ $file -> $file_destination"
$COFFEE_CMD $COFFEE_PARAMS $file > $file_destination #FAIL
#$COFFEE_CMD $COFFEE_PARAMS $file > testfile
fi
done
i=$i+1
done
echo "done!"
compress_javascript
}
```
And just to clarify, everything except the #FAIL line works flawlessly (if I'm doing something wrong just tell me). The problem I have is:
* the line executes and does what it has to do, but doesn't write the file that I put in "file_destination".
* if I delete a folder in that route (it's relative to this script, see below), bash throws an error saying that the folder does not exist.
* If I make the folder again, no errors, but no file either.
* If I change the $file_destination to "testfile", it creates the file with the correct contents.
* The $file_destination path is OK (as you can see, my script echoes it).
* if I echo the entire line, copy the exact command with params and execute it in a shell in the same directory the script is in, it works.
I don't know what is wrong with this, been wondering for two hours...
Script output (real paths):
```
(alpha)[pyron@vps herobrine]$ ./deploy.sh compile && ls -l database/static/js/
===============================
=== Compile ===
Compile COFFEESCRIPT files...
+ ./database/static/coffee/test.coffee -> ./database/static/js/test.js
done!
Linking static files to django staticfiles folder... done!
total 0
```
Complete command:
```
coffee --compile --print ./database/static/coffee/test.coffee > ./database/static/js/test.js
```
What am I missing?
**EDIT** I've made some progress on this.
In the shell, if I deactivate the Python virtualenv the script works, but if I call deactivate from the script it says command not found.
|
2012/09/15
|
[
"https://Stackoverflow.com/questions/12439762",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/697150/"
] |
This will give you 0.85 and 1.00 from your specified string, stored in `$values[1]` and `$values[2]` respectively.
```
$values = array();
preg_match('/Chop Suey<\/a><\/td><td align="right">([\d]+\.[\d]+)<\/td><td align="right">([\d]+\.[\d]+)<\/td>/', 'Chop Suey</a></td><td align="right">0.85</td><td align="right">1.00</td>', $values);
```
|
You could also be more dynamic with it. Instead of statically looking for "chop suey", why not look for the values themselves?
Here is a (very basic) sample of that:
```
preg_match_all("/\d+\.\d+/", $content, $output);
```
(The above match would give you all the decimals you need, in the correct order.)
`$output[0]` is the array you can loop over; for the exact numbers above, you'd use `$output[0][0]` and `$output[0][1]`,
as seen in the regex example [here](http://lumadis.be/regex/test_regex.php?id=1301)
| 14,315
|
37,219,045
|
I have a Python script I run using Cygwin and I'd like to create a clickable icon on the Windows desktop that could run this script without opening Cygwin and entering the commands by hand. How can I do this?
|
2016/05/13
|
[
"https://Stackoverflow.com/questions/37219045",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
This comes straight from Python Docs (<https://docs.python.org/3.3/using/windows.html>):
**3.3.5. Executing scripts without the Python launcher**
Without the Python launcher installed, Python scripts (files with the extension .py) will be executed by python.exe by default. This executable opens a terminal, which stays open even if the program uses a GUI. If you do not want this to happen, use the extension .pyw which will cause the script to be executed by pythonw.exe by default (both executables are located in the top-level of your Python installation directory). This suppresses the terminal window on startup.
You can also make all .py scripts execute with pythonw.exe, setting this through the usual facilities, for example (might require administrative rights):
Launch a command prompt.
Associate the correct file group with .py scripts:
```
assoc .py=Python.File
```
Redirect all Python files to the new executable:
```
ftype Python.File=C:\Path\to\pythonw.exe "%1" %*
```
|
The solution that worked like a charm for me:
From <https://www.tutorialexample.com/convert-python-script-to-exe-using-auto-py-to-exe-library-python-tutorial/>
`pip install auto-py-to-exe`
The GUI is available just by typing:
`auto-py-to-exe`
Then, I used this command to generate the desired output:
`pyinstaller --noconfirm --onedir --windowed --icon "path/favicon.ico" "path/your_python_script"`
Now I have my script as an executable on the taskbar.
| 14,316
|
53,780,882
|
I have multiple times as a string:
```
"2018-12-14 11:20:16","2018-12-14 11:14:01","2018-12-14 11:01:58","2018-12-14 10:54:21"
```
I want to calculate the average time difference between all these times. The above example would be:
```
2018-12-14 11:20:16 - 2018-12-14 11:14:01 = 6 minutes 15 seconds
2018-12-14 11:14:01 - 2018-12-14 11:01:58 = 12 minutes 3 seconds
2018-12-14 11:01:58 - 2018-12-14 10:54:21 = 7 minutes 37 seconds
```
(375s+723s+457s)/3 = 518.3 seconds = **8 minutes 38.33 seconds** (If my hand math is correct.)
I have all the datestrings in an array. I assume it's easy to convert it to a timestamp, loop over it to calculate the differences and then the average?
I just began to learn python and this is the first program that I want to do for myself.
|
2018/12/14
|
[
"https://Stackoverflow.com/questions/53780882",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1323427/"
] |
You have a bunch of problems. I'm not going to give you solutions to all of these. Just hints to get you started.
I assume you know...
* ... how to convert a time string to a `datetime.datetime` object,
* ... how to calculate the difference between two, i.e., a `datetime.timedelta`,
* ... how to compute a mean value (`datetime.timedelta` supports both addition and division by `int` or `float`).
So the only complicated step in this task will be, how to calculate the differences between adjacent items in a list. I'm going to show you this.
When you have a list of items, you can *slice* and *zip* to produce an iterator to iterate the pairs of adjacent elements in the list.
For example, try this (untested code):
```
l = range(10)
for i,j in zip(l, l[1:]):
print(i, j)
```
Then build a list to contain the differences, e.g., using a *list comprehension*:
```
deltas = [i-j for i,j in zip(l, l[1:])]
```
Now [calculate your mean value](https://stackoverflow.com/q/3617170/1025391) and you're done.
For reference:
* <https://docs.python.org/3/library/datetime.html#datetime-objects>
* <https://docs.python.org/3/library/datetime.html#timedelta-objects>
* [Understanding Python's slice notation](https://stackoverflow.com/questions/509211/understanding-pythons-slice-notation)
* [Iterate a list as pair (current, next) in Python](https://stackoverflow.com/q/5434891/1025391)
* <https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions>
* [Average timedelta in list](https://stackoverflow.com/q/3617170/1025391)
|
```
from datetime import datetime
import numpy as np
ts_list = ["2018-12-14 11:20:16","2018-12-14 11:14:01","2018-12-14 11:01:58","2018-12-14 10:54:21"]
dif_list = []
for i in range(len(ts_list)-1):
dif_list.append((datetime.strptime(ts_list[i], '%Y-%m-%d %H:%M:%S')-datetime.strptime(ts_list[i+1], '%Y-%m-%d %H:%M:%S' )).total_seconds())
avg = np.mean(dif_list)
print(avg)
```
result is 518.3333333333334 seconds.
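If you prefer to avoid the numpy dependency, a minimal sketch using only the standard library (assuming the same `ts_list` as above) could look like this:
```
from datetime import datetime
from statistics import mean

ts_list = ["2018-12-14 11:20:16", "2018-12-14 11:14:01",
           "2018-12-14 11:01:58", "2018-12-14 10:54:21"]
times = [datetime.strptime(t, '%Y-%m-%d %H:%M:%S') for t in ts_list]

# differences between adjacent timestamps, in seconds
diffs = [(a - b).total_seconds() for a, b in zip(times, times[1:])]
print(mean(diffs))  # approximately 518.33 seconds
```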
| 14,326
|
44,367,508
|
I have a for loop which runs a Python script ~100 times on 100 different input folders. The python script is most efficient on 2 cores, and I have 50 cores available. So I'd like to use GNU parallel to run the script on 25 folders at a time.
Here's my for loop (works fine, but is sequential of course), the python script takes a bunch of input variables including the `-p 2` which runs it on two cores:
```
for folder in $(find /home/rob/PartitionFinder/ -maxdepth 2 -type d); do
python script.py --raxml --quick --no-ml-tree $folder --force -p 2
done
```
and here's my attempt to parallelise it, which does not work:
```
folders=$(find /home/rob/PartitionFinder/ -maxdepth 2 -type d)
echo $folders | parallel -P 25 python script.py --raxml --quick --no-ml-tree {} --force -p 2
```
The issue I'm hitting (perhaps it's just the first of many though) is that my `folders` variable is not a list, so it's really just passing a long string of 100 folders as the `{}` to the script.
All hints gratefully received.
|
2017/06/05
|
[
"https://Stackoverflow.com/questions/44367508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1084470/"
] |
Replace `echo $folders | parallel ...` with `echo "$folders" | parallel ...`.
Without the double quotes, the shell performs word splitting on `$folders` and passes the folder names as separate arguments to `echo`, which causes them all to be printed on one line. `parallel` provides each line as an argument to the job.
To avoid such quoting issues altogether, it is always a good idea to pipe `find` to `parallel` directly, and use the null character as the delimiter:
```
find ... -print0 | parallel -0 ...
```
This will work even when encountering file names that contain multiple spaces or a newline character.
|
you can pipe find directly to parallel:
```
find /home/rob/PartitionFinder/ -maxdepth 2 -type d | parallel -P 25 python script.py --raxml --quick --no-ml-tree {} --force -p 2
```
If you want to keep the string in `$folders`, you can pipe the echo to xargs.
```
echo $folders | xargs -n 1 | parallel -P 25 python script.py --raxml --quick --no-ml-tree {} --force -p 2
```
| 14,327
|
12,002,051
|
Recently, I've found that the iPad can run Python with a special Python interpreter. But editing code on the iPad is a terrible nightmare. So how can I push Python code which has been edited completely on a PC onto the iPad and run it?
|
2012/08/17
|
[
"https://Stackoverflow.com/questions/12002051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1508702/"
] |
If you are using Python for IOS, the following should work, although I haven't yet tried it myself.
Email the program to your own e-mail account as text. Then read the e-mail message on your iPad in any one of several e-mail applications. Cut and paste the text from the e-mail message into the python editor.
Don't cut and paste the code into the interpreter. Then you can't save it, at least not in the current version of Python for IOS. Instead, click on the second icon on the bottom (I think that's the icon, my iPad is at home and I'm not home now), to open the editor. You can save files from the editor using the menu button on the upper right; there's a "save" menu item that allows saving the code to a file on the iPad.
I'll be trying this tonight. Sorry for posting this before trying it, but I'm not sure I'll return to this question later. It 'should' work. (Famous last words!)
|
I use Python 2.7 for IOS and download source python files through iFunBox into /var/mobile/Applications/Python 2.7 for IOS/Documents/User Scripts. Nevertheless I can't recommend this application as it's quite buggy and very slow when editing code.
| 14,330
|
44,952,354
|
I would like to use segments of the same string in a formatted string.
```python
input_string = 'abcdefhijk'
result_string = "A's name is abcd-defh; he does hijk"
```
The intuitive solution is
```python
"A's name is {0[0:4]}-{0{[3:7]}; he does {0[6:10]}".format(input_string)
```
And this obviously doesn't work.
What works is the round-about way
```python
"A's name is {0}-{1}; he does {2}".format(input_string[0:4], input_string[3:7], input_string[6:10])
```
Is there any easy way to specify the index range in the formatting string?
|
2017/07/06
|
[
"https://Stackoverflow.com/questions/44952354",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5112950/"
] |
In Python 3.6, you can make a similar construct to your "intuitive solution" with [f-strings](https://www.python.org/dev/peps/pep-0498/):
```
>>> input_string = 'abcdefhijk'
>>> f"A's name is {input_string[0:4]}-{input_string[3:7]}; he does {input_string[6:10]}"
"A's name is abcd-defh; he does hijk"
```
|
Have you tried the below?
```
result_string = "A's name is %s-%s; he does %s" %(input_string[:4], input_string[3:7], input_string[6:10])
```
| 14,340
|
65,745,683
|
I have successfully installed python 3.9.1 with Numpy and Matplotlib on a new Mac mini with Apple Silicon. However, I cannot install SciPy : I get compilation errors when using
```
python3 -m pip install scipy
```
I also tried installing everything from brew, and `import scipy` works, but using it gives a seg fault. I have installed ARM versions of lapack and openblas, but this does not fix the problem.
Has anyone succeeded? (I am interested in running it natively, not through Rosetta).
|
2021/01/16
|
[
"https://Stackoverflow.com/questions/65745683",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8206216/"
] |
This one worked for me after wasting hours:
```
pip install --pre -i https://pypi.anaconda.org/scipy-wheels-nightly/simple scipy
```
|
The following worked for me.
I'm currently using `Python 3.10.8`, installed using `brew`.
And currently, when installing `numpy==1.23.4`, `setuptools < 60.0.0` is required.
I'm using `$(brew --prefix)/bin/python3 -m pip` to explicitly call the `pip` from `python 3.10` installed by `brew`.
Here are the versions I've just installed.
```
# python 3.10.8
# pip 22.3
# setuptools 59.8.0
# wheel 0.37.1
# numpy 1.23.4
# scipy 1.9.3
# pandas 1.5.1
# scikit-learn 1.1.3
# seaborn 0.12.1
# statsmodels 0.13.2
# gcc 12.2.0
# openblas 0.3.21
# gfortran 12
# pybind11 2.10.0
# Cython 0.29.32
# pythran 0.12.0
```
Here are the steps I followed:
```
# setuptools < 60.0.0 is required for numpy==1.23.4 in Python 3.10.8
$(brew --prefix)/bin/python3 -m pip install --upgrade pip==22.3 setuptools==59.8.0 wheel==0.37.1
# uninstall numpy and pythran first
$(brew --prefix)/bin/python3 -m pip uninstall -y numpy pythran
# uninstall scipy
$(brew --prefix)/bin/python3 -m pip uninstall -y scipy
# install prerequisites (with brew)
brew install gcc
brew install openblas
brew install gfortran
# set environment variables for compilers to find openblas
export LDFLAGS="-L/opt/homebrew/opt/openblas/lib"
export CPPFLAGS="-I/opt/homebrew/opt/openblas/include"
# install the prerequisites (with pip)
$(brew --prefix)/bin/python3 -m pip install pybind11
$(brew --prefix)/bin/python3 -m pip install Cython
# install numpy
$(brew --prefix)/bin/python3 -m pip install --no-binary :all: numpy
# install pythran after installing numpy, before installing scipy
$(brew --prefix)/bin/python3 -m pip install pythran
# install scipy
export OPENBLAS="$(brew --prefix)/opt/openblas/lib/"
$(brew --prefix)/bin/python3 -m pip install scipy
# install pandas
$(brew --prefix)/bin/python3 -m pip install pandas
# install scikit-learn
$(brew --prefix)/bin/python3 -m pip install scikit-learn
# install seaborn
$(brew --prefix)/bin/python3 -m pip install seaborn
# install statsmodels
$(brew --prefix)/bin/python3 -m pip install statsmodels
```
| 14,341
|
65,560,546
|
I have a multi index dataframe like this:
```
PID  Fid  x  y
A    1    2  3
     2    6  1
     3    4  6
B    1    3  5
     2    2  4
     3    5  7
```
I would like to delete the rows with the highest x-value per patient (PID). I need to get a new dataframe with the remaining rows and all columns to continue my analysis on these data, for example the mean of the remaining y-values.
The dataframe should look like this:
```
PID  Fid  x  y
A    1    2  3
     3    4  6
B    1    3  5
     2    2  4
```
I used the code from [Python Multiindex Dataframe remove maximum](https://stackoverflow.com/questions/49669129/python-multiindex-dataframe-remove-maximum)
```
idx = (df.reset_index('Fid')
.groupby('PID')['x']
.max()
.reset_index()
.values.tolist())
df_s = df.loc[df.index.difference(idx)]
```
I can get idx, but I cannot remove those rows from the dataframe. It says `TypeError: unhashable type: 'list'`.
What am I doing wrong?
|
2021/01/04
|
[
"https://Stackoverflow.com/questions/65560546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14564287/"
] |
You can try this:
```
idx = df.groupby(level=0)['x'].idxmax()
df[~df.index.isin(idx)]
x y
PID Fid
A 1 2 3
3 4 6
B 1 3 5
2 2 4
```
Or
You can use `pd.Index.difference` here.
```
df.loc[df.index.difference(df['x'].groupby(level=0).idxmax())]  # use level=0 if the index level is unnamed
# or equivalently: df.loc[df.index.difference(df['x'].groupby('PID').idxmax())]
x y
PID Fid
A 1 2 3
3 4 6
B 1 3 5
2 2 4
```
|
Use [`GroupBy.transform`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html) for repeat max values per groups, compare by [`Series.ne`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.ne.html) for not equal and filter in [`boolean indexing`](http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing):
```
df_s = df[df.groupby('PID')['x'].transform('max').ne(df['x'])]
print (df_s)
x y
PID Fid
A 1 2 3
3 4 6
B 1 3 5
2 2 4
```
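As a follow-up to the analysis mentioned in the question (the mean of the remaining y-values), a quick sketch using the filtered frame `df_s` from either answer:
```
print(df_s['y'].mean())                    # overall mean of the remaining y-values
print(df_s.groupby(level=0)['y'].mean())   # per-PID mean
```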
| 14,351
|
45,619,217
|
I created my own user model by sub-classing AbstractBaseUser as recommended by the docs. The goal here was to use a new field called mob_phone as the identifying field for registration and log-in.
It works like a charm - for the first user. It sets the username field as nothing - blank. But when I register a second user I get a "UNIQUE constraint failed: user_account_customuser.username".
I basically want to do away with the username field altogether. How can I achieve this?
I basically need to either find a way of making the username field not unique or remove it altogether.
models.py
```
from django.db import models
from django.contrib.auth.models import AbstractUser, BaseUserManager
class MyUserManager(BaseUserManager):
def create_user(self, mob_phone, email, password=None):
"""
Creates and saves a User with the given mobile number and password.
"""
if not mob_phone:
raise ValueError('Users must mobile phone number')
user = self.model(
mob_phone=mob_phone,
email=email
)
user.set_password(password)
user.save(using=self._db)
return user
def create_superuser(self, mob_phone, email, password):
"""
Creates and saves a superuser with the given email, date of
birth and password.
"""
user = self.create_user(
mob_phone=mob_phone,
email=email,
password=password
)
user.is_admin = True
user.save(using=self._db)
return user
class CustomUser(AbstractUser):
mob_phone = models.CharField(blank=False, max_length=10, unique=True)
is_admin = models.BooleanField(default=False)
objects = MyUserManager()
# override username field as indentifier field
USERNAME_FIELD = 'mob_phone'
EMAIL_FIELD = 'email'
def get_full_name(self):
return self.mob_phone
def get_short_name(self):
return self.mob_phone
def __str__(self): # __unicode__ on Python 2
return self.mob_phone
def has_perm(self, perm, obj=None):
"Does the user have a specific permission?"
# Simplest possible answer: Yes, always
return True
def has_module_perms(self, app_label):
"Does the user have permissions to view the app `app_label`?"
# Simplest possible answer: Yes, always
return True
@property
def is_staff(self):
"Is the user a member of staff?"
# Simplest possible answer: All admins are staff
return self.is_admin
```
Stacktrace:
>
> Traceback (most recent call last):
> File "manage.py", line 22, in
> execute_from_command_line(sys.argv)
> File "/home/dean/.local/lib/python3.5/site-packages/django/core/management/__init__.py", line 363, in execute_from_command_line
> utility.execute()
> File "/home/dean/.local/lib/python3.5/site-packages/django/core/management/__init__.py", line 355, in execute
> self.fetch_command(subcommand).run_from_argv(self.argv)
> File "/home/dean/.local/lib/python3.5/site-packages/django/core/management/base.py", line 283, in run_from_argv
> self.execute(*args, **cmd_options)
> File "/home/dean/.local/lib/python3.5/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 63, in execute
> return super(Command, self).execute(*args, **options)
> File "/home/dean/.local/lib/python3.5/site-packages/django/core/management/base.py", line 330, in execute
> output = self.handle(*args, **options)
> File "/home/dean/.local/lib/python3.5/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 183, in handle
> self.UserModel._default_manager.db_manager(database).create_superuser(**user_data)
> File "/home/dean/Development/UrbanFox/UrbanFox/user_account/models.py", line 43, in create_superuser
> password=password
> File "/home/dean/Development/UrbanFox/UrbanFox/user_account/models.py", line 32, in create_user
> user.save(using=self._db)
> File "/home/dean/.local/lib/python3.5/site-packages/django/contrib/auth/base_user.py", line 80, in save
> super(AbstractBaseUser, self).save(*args, **kwargs)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/models/base.py", line 807, in save
> force_update=force_update, update_fields=update_fields)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/models/base.py", line 837, in save_base
> updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/models/base.py", line 923, in _save_table
> result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/models/base.py", line 962, in _do_insert
> using=using, raw=raw)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/models/manager.py", line 85, in manager_method
> return getattr(self.get_queryset(), name)(*args, **kwargs)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/models/query.py", line 1076, in _insert
> return query.get_compiler(using=using).execute_sql(return_id)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 1107, in execute_sql
> cursor.execute(sql, params)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/backends/utils.py", line 80, in execute
> return super(CursorDebugWrapper, self).execute(sql, params)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/backends/utils.py", line 65, in execute
> return self.cursor.execute(sql, params)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/utils.py", line 94, in __exit__
> six.reraise(dj_exc_type, dj_exc_value, traceback)
> File "/home/dean/.local/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
> raise value.with_traceback(tb)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/backends/utils.py", line 65, in execute
> return self.cursor.execute(sql, params)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/backends/sqlite3/base.py", line 328, in execute
> return Database.Cursor.execute(self, query, params)
> django.db.utils.IntegrityError: UNIQUE constraint failed: user_account_customuser.username
>
>
>
|
2017/08/10
|
[
"https://Stackoverflow.com/questions/45619217",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5644300/"
] |
Okay I'm an idiot. Literally seconds after posting this the obvious solution occurred to me:
```
username = models.CharField(max_length=40, unique=False, default='')
```
Just override the username field and make it non unique.
Rubber duck theory in action....
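For context, a sketch of where that override sits on the custom user model from the question:
```
class CustomUser(AbstractUser):
    # Override the inherited username field so it is no longer unique
    username = models.CharField(max_length=40, unique=False, default='')
    mob_phone = models.CharField(blank=False, max_length=10, unique=True)
    # ... rest of the model unchanged
```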
|
It might be because you have already entered some data in the database which contradicts the constraint. So try deleting that data, or the whole database, and then run the command again.
| 14,352
|
607,931
|
I've deployed trac using apache/mod_wsgi (no SSL) (preferable, since the
problem I'm facing with CGI is performance), and it works fine WITHOUT SVN
integration. But I actually need SVN, so when I configure the repository
path (i.e. repository_dir = c:/projects/svn/my_project) I can't even get my
project TRAC to even open any of its pages.
Mozilla Firefox shows a white page and MS-IE shows a 'The page cannot
be displayed' error, as if the server has 'timed out'.
I've tried with mod_python (3.3.1) and the exact same problem happens. It
works fine with CGI though.
I've also tried disabling SVN authentication, thinking it might be an
authentication conflict (I'm using Apache Basic Auth).
**Environment:**
* Win 2000 Server SP 4;
* Apache 2.2.10;
* Python 2.5.2;
* mod\_wsgi revision 1018 2.3, py25\_apache22;
* Trac 0.12dev;
* Subversion 1.5.3.
**Configuration files:**
* Apache httpd.conf excerpt:
>
>
> ```
> WSGIScriptAlias /trac "c:/projects/apache/trac.wsgi"
>
> <Directory c:/projects/apache>
> WSGIApplicationGroup %{GLOBAL}
> Order deny,allow
> Allow from all
> </Directory>
>
> ```
>
>
* trac.wsgi:
>
>
> ```
> import sys
> sys.stdout = sys.stderr
>
> import os
> os.environ['TRAC_ENV_PARENT_DIR'] = 'c:/projects/trac'
> os.environ['PYTHON_EGG_CACHE'] = 'c:/projects/eggs'
>
> import trac.web.main
>
> application = trac.web.main.dispatch_request
>
> ```
>
>
* trac.ini excerpt:
>
>
> ```
> repository_type = svn
> repository_dir = c:/projects/svn/my_project
>
> ```
>
>
Any ideas???
|
2009/03/03
|
[
"https://Stackoverflow.com/questions/607931",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/73377/"
] |
Set a breakpoint on the line that reads NewFaqDropDownCategory.DataBind() and one in your event handler (NewFaqDropDownCategory_SelectedIndexChanged).
I suspect the databind is being called right before your NewFaqDropDownCategory_SelectedIndexChanged event fires, causing your selected value to change.
If so, you either need to make sure you only databind when you aren't in the middle of your autopostback, or, instead of using NewFaqDropDownCategory.SelectedIndex on the first line of your event handler, you can cast the sender parameter to a DropDownList and use its selected value.
|
I think there is a bug in your LINQ query for the second drop down box
```
Dim faqs = (From f In db.faqs Where f.category = NewFaqDropDownCategory.SelectedValue)
```
Here you are comparing SelectedValue to category. Yet in the first combobox you said that the DataValueField should be category_id. Try changing f.category to f.category_id
| 14,353
|
67,366,722
|
I am having an issue parsing an XML result using Python. I tried using etree.Element(text), but the error says "Invalid tag name". Does anyone know if this is actually XML, and is there any way of parsing the result using a standard package? Thank you!
```
import requests, sys, json
from lxml import etree
response = requests.get("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=snp&id=1593319917&report=XML")
text=response.text
print(text)
<?xml version="1.0" ?>
<ExchangeSet xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xmlns="https://www.ncbi.nlm.nih.gov/SNP/docsum" xsi:schemaLocation="https://www.ncbi.nlm.nih.gov/SNP/docsum ftp://ftp.ncbi.nlm.nih.gov/snp/specs/docsum_eutils.xsd" ><DocumentSummary uid="1593319917"><SNP_ID>1593319917</SNP_ID><ALLELE_ORIGIN/><GLOBAL_MAFS><MAF><STUDY>SGDP_PRJ</STUDY><FREQ>G=0.5/1</FREQ></MAF></GLOBAL_MAFS><GLOBAL_POPULATION/><GLOBAL_SAMPLESIZE>0</GLOBAL_SAMPLESIZE><SUSPECTED/><CLINICAL_SIGNIFICANCE/><GENES><GENE_E><NAME>FLT3</NAME><GENE_ID>2322</GENE_ID></GENE_E></GENES><ACC>NC_000013.11</ACC><CHR>13</CHR><HANDLE>SGDP_PRJ</HANDLE><SPDI>NC_000013.11:28102567:G:A</SPDI><FXN_CLASS>upstream_transcript_variant</FXN_CLASS><VALIDATED>by-frequency</VALIDATED><DOCSUM>HGVS=NC_000013.11:g.28102568G>A,NC_000013.10:g.28676705G>A,NG_007066.1:g.3001C>T|SEQ=[G/A]|LEN=1|GENE=FLT3:2322</DOCSUM><TAX_ID>9606</TAX_ID><ORIG_BUILD>154</ORIG_BUILD><UPD_BUILD>154</UPD_BUILD><CREATEDATE>2020/04/27 06:19</CREATEDATE><UPDATEDATE>2020/04/27 06:19</UPDATEDATE><SS>3879653181</SS><ALLELE>R</ALLELE><SNP_CLASS>snv</SNP_CLASS><CHRPOS>13:28102568</CHRPOS><CHRPOS_PREV_ASSM>13:28676705</CHRPOS_PREV_ASSM><TEXT/><SNP_ID_SORT>1593319917</SNP_ID_SORT><CLINICAL_SORT>0</CLINICAL_SORT><CITED_SORT/><CHRPOS_SORT>0028102568</CHRPOS_SORT><MERGED_SORT>0</MERGED_SORT></DocumentSummary>
</ExchangeSet>
```
|
2021/05/03
|
[
"https://Stackoverflow.com/questions/67366722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14304103/"
] |
You're using the wrong method to parse your XML. The `etree.Element`
class is for *creating* a single XML element. For example:
```
>>> a = etree.Element('a')
>>> a
<Element a at 0x7f8c9040e180>
>>> etree.tostring(a)
b'<a/>'
```
As Jayvee has pointed out, to parse XML contained in a string you use
the `etree.fromstring` method (to parse XML content in a file you
would use the `etree.parse` method):
```
>>> response = requests.get("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=snp&id=1593319917&report=XML")
>>> doc = etree.fromstring(response.text)
>>> doc
<Element {https://www.ncbi.nlm.nih.gov/SNP/docsum}ExchangeSet at 0x7f8c9040e180>
>>>
```
Note that because this XML document sets a default namespace, you'll
need to set namespaces properly when looking for elements. E.g., this
will fail:
```
>>> doc.find('DocumentSummary')
>>>
```
But this works:
```
>>> doc.find('docsum:DocumentSummary', {'docsum': 'https://www.ncbi.nlm.nih.gov/SNP/docsum'})
<Element {https://www.ncbi.nlm.nih.gov/SNP/docsum}DocumentSummary at 0x7f8c8e987200>
```
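Putting it together, a small sketch that pulls one field out of the document (using the same namespace mapping):
```
ns = {'docsum': 'https://www.ncbi.nlm.nih.gov/SNP/docsum'}
snp_id = doc.findtext('docsum:DocumentSummary/docsum:SNP_ID', namespaces=ns)
print(snp_id)  # 1593319917
```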
|
You can check if the XML is well formed by trying to convert it:
```
import requests, sys, json
from lxml import etree
response = requests.get("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=snp&id=1593319917&report=XML")
text=response.text
try:
doc=etree.fromstring(text)
print("valid")
except:
print("not a valid xml")
```
| 14,356
|
73,142,014
|
I am using the following ETL pipeline to get data into BigQuery. The data sources are .csv & .xls files from a URL posted daily at 3 pm.
Cloud Scheduler publishes a message to a Cloud Pub/Sub topic at 3:05 pm.
Pub/Sub pushes to / triggers the subscriber Cloud Functions.
When triggered, these Cloud Functions (Python scripts) download the file from the URL, perform transformations (cleaning, formatting, aggregations & filtering) and upload it to BigQuery.
Is there a cleaner way in GCP to download files from a URL on a schedule, transform them & upload them to BigQuery, instead of using Cloud Scheduler + Pub/Sub + Cloud Functions?
I researched Dataflow but can't figure out if it can do all three jobs (downloading from a URL on a schedule, transforming, and uploading it to BQ).
|
2022/07/27
|
[
"https://Stackoverflow.com/questions/73142014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12322651/"
] |
In your architecture, Dataflow can only replace the PubSub + Cloud Functions. You still need a scheduler to run a dataflow (based on a template, maybe your custom template).
But, before using Dataflow, ask yourself why you need it. I'm in charge of a data lake that ingests data from different sources, and because each element to ingest is small enough to be kept in memory (of Cloud Run, but it's very similar to Cloud Functions), there is no problem keeping that pattern if it works!!
|
I do this kind of thing all the time and I can see why you'd wonder if there's a cleaner way. We use Composer (Airflow) in GCP. In your scenario we would create one DAG with four sequential tasks:
1. Copy file from URL to local bucket
2. Load file from local bucket to stage table
3. Merge stage table into final destination table using bigquery SQL script
4. Clean up staging table
The composer job would look something like this:
[](https://i.stack.imgur.com/sHJQb.png)
All the code required to load a table end to end is located in one DAG/folder.
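For illustration, a hedged sketch of such a four-task DAG in Airflow/Composer; the DAG name, schedule, task names, and callable bodies are hypothetical placeholders, not the exact production code described above:
```
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def copy_file_from_url():   # 1. download the daily file and drop it in a GCS bucket
    pass

def load_stage_table():     # 2. load the file from the bucket into a staging table
    pass

def merge_into_final():     # 3. MERGE the staging table into the final table via SQL
    pass

def clean_up_staging():     # 4. truncate/drop the staging table
    pass

with DAG("daily_file_load",
         start_date=datetime(2022, 7, 1),
         schedule_interval="5 15 * * *",   # hypothetical: shortly after the 3 pm file drop
         catchup=False) as dag:
    t1 = PythonOperator(task_id="copy_file_to_bucket", python_callable=copy_file_from_url)
    t2 = PythonOperator(task_id="load_stage_table", python_callable=load_stage_table)
    t3 = PythonOperator(task_id="merge_into_final", python_callable=merge_into_final)
    t4 = PythonOperator(task_id="clean_up_staging", python_callable=clean_up_staging)
    t1 >> t2 >> t3 >> t4
```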
*You do need to pay for, and maintain, a Composer instance on GCP. Would be interesting to see how other companies do this kind of thing?*
| 14,357
|
34,405,936
|
When I try to do `python manage.py syncdb` in my Django app, I get the error **ImportError: No module named azure.storage.blob**. But the thing is, the following packages are installed if one does `pip freeze`:
`azure-common==1.0.0`
`azure-mgmt==0.20.1`
`azure-mgmt-common==0.20.0`
`azure-mgmt-compute==0.20.0`
`azure-mgmt-network==0.20.1`
`azure-mgmt-nspkg==1.0.0`
`azure-mgmt-resource==0.20.1`
`azure-mgmt-storage==0.20.0`
`azure-nspkg==1.0.0`
`azure-servicebus==0.20.1`
`azure-servicemanagement-legacy==0.20.1`
`azure-storage==0.20.3`
Clearly **azure-storage** is installed, as is evident. Why is **azure.storage.blob** not available for import? I even went into my `.virtualenvs` directory, and got in all the way to `azure.storage.blob` (i.e. `~/.virtualenvs/myvirtualenv/local/lib/python2.7/site-packages/azure/storage/blob$`). It exists!
What do I do? This answer here has not helped: [Install Azure Python api on linux: importError: No module named storage.blob](https://stackoverflow.com/questions/32787450/install-azure-python-api-on-linux-importerror-no-module-named-storage-blob)
*Note: please ask for more information in case you need it*
|
2015/12/21
|
[
"https://Stackoverflow.com/questions/34405936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4936905/"
] |
I had a similar issue. To alleviate that, I followed this discussion here: <https://github.com/Azure/azure-storage-python/issues/51#issuecomment-148151993>
Basically, try `pip install azure==0.11.1` before trying `syncdb`, and I'm confident it will work for you!
|
There is a thread similar to yours; please check my answer for the thread [Unable to use azure SDK in Python](https://stackoverflow.com/questions/34213764/unable-to-use-azure-sdk-in-python).
Based on my experience, Python imports third-party library packages from a set of library paths that you can inspect via `import sys` & `sys.path` in the Python interpreter. So you can try to dynamically add the path that contains the installed `azure` packages to `sys.path` at runtime to solve the issue. To add the new library path, just call `sys.path.append('<the new paths you want to add>')` before the line `import azure`.
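For example, a minimal sketch using the virtualenv site-packages path mentioned in the question (the home directory here is a hypothetical placeholder; adjust it to your own environment):
```
import sys

# Hypothetical path, based on the question's virtualenv layout
sys.path.append('/home/youruser/.virtualenvs/myvirtualenv/local/lib/python2.7/site-packages')

import azure.storage.blob
```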
If that has not helped, I suggest you try to reinstall the Python environment. On Ubuntu, you can use the commands `sudo apt-get remove python python-pip` & `sudo apt-get install python python-pip` to reinstall `Python 2.7` & `pip 2.7`. (Note: The current major Linux distributions use Python 2.7 as the system default version.)
If Python 3.4 is your runtime for Django, the apt package names for Ubuntu are `python3` and `python3-pip`, and you can use `sudo pip3 install azure` for `Python 3.4` on Ubuntu.
If you have any concerns, please feel free to let me know.
| 14,358
|
7,548,562
|
I've been getting weird results and I finally noticed that my habit of putting spaces in a tuple is causing the problem. If you can reproduce this problem and tell me why it works this way, you would be saving what's left of my hair. Thanks!
```
jcomeau@intrepid:/tmp$ cat haversine.py
#!/usr/bin/python
def dms_to_float(degrees):
d, m, s, compass = degrees
d, m, s = int(d), float(m), float(s)
float_degrees = d + (m / 60) + (s / 3600)
float_degrees *= [1, -1][compass in ['S', 'W', 'Sw']]
return float_degrees
jcomeau@intrepid:/tmp$ python
Python 2.6.7 (r267:88850, Jun 13 2011, 22:03:32)
[GCC 4.6.1 20110608 (prerelease)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from haversine import *
>>> dms_to_float((111, 41, 0, 'SW'))
111.68333333333334
>>> dms_to_float((111,41,0,'Sw'))
-111.68333333333334
```
With spaces in the tuple, the answer is wrong. Without, the answer is correct.
|
2011/09/25
|
[
"https://Stackoverflow.com/questions/7548562",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/493161/"
] |
The spaces should make no difference. The difference is due to the case: `SW` vs `Sw`.
You don't check for `SW` here:
```
compass in ['S', 'W', 'Sw']]
```
Perhaps change it to this:
```
compass.upper() in ['S', 'W', 'SW']]
```
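A quick interpreter check shows why the case matters here:
```
>>> 'SW' in ['S', 'W', 'Sw']
False
>>> 'SW'.upper() in ['S', 'W', 'SW']
True
```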
|
Presuming that the "degrees" relate to degrees of latitude or longitude, I can't imagine why "SW" is treated as a viable option. Latitude is either N or S. Longitude is either E or W. Please explain.
Based on your sample of size 1, user input is not to be trusted. Consider checking the input, or at least ensuring that bogus input will cause an exception to be raised. You appear to like one-liners; try this:
```
float_degrees *= {'n': 1, 's': -1, 'e': 1, 'w': -1}[compass.strip().lower()]
```
| 14,359
|
64,120,358
|
I am trying to make a simple 3D plot with matplotlib's plot_surface; below is a minimal example:
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
x_test = np.arange(0.001, 0.01, 0.0005)
y_test = np.arange(0.1, 100, 0.05)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
Xtest, Ytest = np.meshgrid(x_test, y_test)
Ztest = Xtest**-1 + Ytest
surf = ax.plot_surface(Xtest, Ytest, Ztest,
cmap=cm.plasma, alpha=1,
antialiased=False)
fig.colorbar(surf, shrink=0.5, aspect=5)
ax.set_ylabel(r'$ Y $', fontsize=16)
ax.set_xlabel(r'$ X $', fontsize=16)
ax.set_zlabel(r'$ Z $', fontsize=16)
```
The result gives a strange colormap that does not represent the magnitude of the z scale, as you can see here: [3D plot result](https://i.stack.imgur.com/kjxl6.png).
I mean that if you follow a straight line of constant Z, you don't see the same color.
I've tried changing the `ccount` and `rcount` arguments of the `plot_surface` function and changing the interval of the Xtest or Ytest data; nothing helps.
I've tried some suggestions from [here](https://stackoverflow.com/questions/46260682/how-to-set-the-scale-of-z-axis-equal-to-x-and-y-axises-in-python-plot-surface) and [here](https://stackoverflow.com/questions/46260682/how-to-set-the-scale-of-z-axis-equal-to-x-and-y-axises-in-python-plot-surface), but they seem unrelated.
Could you help me solve this? It seems like a simple problem, but I couldn't solve it.
Thanks!
**Edit**
I've added an example using the original equation, which I don't write here (it's complicated); please take a look: [Comparison](https://i.stack.imgur.com/7iVB0.png).
The left figure was done in Matlab (by my advisor),
the right figure with matplotlib.
You can clearly see that the plot on the left really makes sense:
the brightest color is always at the maximum of the z-axis. Unfortunately, I don't know Matlab. I hope I can do it using Python.
Hopefully this edit makes the problem clearer.
**Edit 2**
I'm sure this is not the best solution; that's why I put it here, not as an answer. As suggested by @swatchai, I used `contour3D`. Since a surface is a set of lines, I can generate the correct result by plotting a lot of contour lines, using:
Since surface is set of lines, I can generate the correct result by plotting a lot of contour lines, by using:
```
surf = ax.contour3D(Xtest, Ytest, Ztest, 500, cmap=cm.plasma,
alpha=0.5, antialiased=False)
```
The colormap is correct, as you can see here: [alternative1](https://i.stack.imgur.com/x3Nt3.png)
But the plot is very heavy. When you zoom in, it doesn't look good unless you increase the number of contours again.
Any suggestions are welcome :).
|
2020/09/29
|
[
"https://Stackoverflow.com/questions/64120358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14361102/"
] |
It's possible to have 2 nested scroll viewers in Xamarin.Forms.
Notice how it says 'scroll viewers ***should not*** be nested'; this means that it is certainly possible, but it is not recommended. I think nested scroll viewers create a bad user experience and make for a clunky app, especially for Xamarin.Forms; but again, here is a demonstration of a nested scroll viewer:
```
<ScrollView Orientation="Horizontal">
<StackLayout Orientation="Vertical">
<Grid>
</Grid>
<ScrollView Orientation="Vertical">
<Grid>
</Grid>
</ScrollView>
</StackLayout>
</ScrollView>
```
I think once you've played around with nested scroll viewers it should be easy for you to implement what you want. I would recommend using a grid with column and row definitions with your desired measurements.
If I've answered your question, please mark this as the correct answer.
Thanks,
|
According to your screenshot, I made a code sample for your reference.
**Xaml:**
```
<ScrollView Orientation="Vertical">
<StackLayout Orientation="Horizontal">
<BoxView BackgroundColor="Blue" WidthRequest="150" />
<StackLayout Orientation="Vertical">
<ScrollView Orientation="Horizontal" BackgroundColor="Accent">
<StackLayout Orientation="Horizontal">
<Label
HeightRequest="50"
Text="A1"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="A2"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="A3"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="A4"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="A5"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="A6"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="A7"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="A8"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="A9"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="A10"
WidthRequest="50" />
</StackLayout>
</ScrollView>
<Label HeightRequest="150" Text="Elem1" BackgroundColor="Green"/>
<Label HeightRequest="150" Text="Elem2" BackgroundColor="Green"/>
<Label HeightRequest="150" Text="Elem3" BackgroundColor="Green"/>
<Label HeightRequest="150" Text="Elem4" BackgroundColor="Green"/>
<Label HeightRequest="150" Text="Elem5" BackgroundColor="Green"/>
<ScrollView Orientation="Horizontal">
<StackLayout Orientation="Horizontal">
<Label
HeightRequest="50"
Text="B1"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="B2"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="B3"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="B4"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="B5"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="B6"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="B7"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="B8"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="B9"
WidthRequest="50" />
<Label
HeightRequest="50"
Text="B10"
WidthRequest="50" />
</StackLayout>
</ScrollView>
</StackLayout>
</StackLayout>
</ScrollView>
```
**Screenshot:**
[](https://i.stack.imgur.com/XyRJC.gif)
| 14,360
|
50,962,836
|
I am new to python-pptx, but I am familiar with its basic workings. I have searched a lot, but I could not find a way to replace a particular text with another text across all slides. That text may be in any text_frame of a slide. For example, all slides in a ppt contain the keyword 'java', and I want to change it to 'python' using python-pptx.
```
for slide in ppt.slides:
if slide.has_text_frame:
#do something with text frames
```
|
2018/06/21
|
[
"https://Stackoverflow.com/questions/50962836",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8847530/"
] |
**Shortcuts**:
* `Shift` + `Command` + `L`: Show Library.
* `Shift` + `Command` + `M`: Show Media Library.
---
Xcode 10 has added a toolbar button to access the Object Library.
[](https://i.stack.imgur.com/3J26u.png)
From a [thread](https://forums.developer.apple.com/thread/103777) on Apple Developer Forum:
>
> Library content has moved from the bottom of the Inspector area to an overlay window, which can be moved and resized like Spotlight search. It dismisses once items are dragged, but holding the Option key before dragging will keep the library open for an additional drag.
>
>
> The library can be opened via a new toolbar button, the `View > Libraries` menu, or the ⇧⌘L keyboard shortcut. Content dynamically matches the active editor, so the same UI provides access to code snippets, Interface Builder, SpriteKit, or SceneKit items. The media library is available via a long press on the toolbar button, the `View > Libraries` menu, or the ⇧⌘M keyboard shortcut. (37318979, 39885726)
>
>
>
|
The library can be opened via a new toolbar button, the View > Libraries menu, or the `Shift` + `Command` + `L` keyboard shortcut. The media library is available via a long press on the toolbar button, the View > Libraries menu, or the `Shift` + `Command` + `M` keyboard shortcut.
Library content has moved from the bottom of the Inspector area to an overlay window, which can be moved and resized like Spotlight search. It dismisses once items are dragged, but holding the `Option` key before dragging will keep the library open for an additional drag.
| 14,361
|
35,678,885
|
I am following an online course for making web apps with python/mongo/bootstrap.
I install mongodb using default settings
I run mongod in powershell from install directory
```
C:\Program Files\MongoDB\Server\3.2\bin>mongod
2016-02-27T23:31:14.684-0500 I CONTROL [initandlisten] MongoDB starting : pid=2456 port=27017 dbpath=C:\data\db\ 64-bit host=DESKTOP-8LMCN7R
2016-02-27T23:31:14.686-0500 I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2016-02-27T23:31:14.687-0500 I CONTROL [initandlisten] db version v3.2.3
2016-02-27T23:31:14.689-0500 I CONTROL [initandlisten] git version: b326ba837cf6f49d65c2f85e1b70f6f31ece7937
2016-02-27T23:31:14.690-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1p-fips 9 Jul 2015
2016-02-27T23:31:14.692-0500 I CONTROL [initandlisten] allocator: tcmalloc
2016-02-27T23:31:14.693-0500 I CONTROL [initandlisten] modules: none
2016-02-27T23:31:14.704-0500 I CONTROL [initandlisten] build environment:
2016-02-27T23:31:14.706-0500 I CONTROL [initandlisten] distmod: 2008plus-ssl
2016-02-27T23:31:14.707-0500 I CONTROL [initandlisten] distarch: x86_64
2016-02-27T23:31:14.708-0500 I CONTROL [initandlisten] target_arch: x86_64
2016-02-27T23:31:14.709-0500 I CONTROL [initandlisten] options: {}
2016-02-27T23:31:14.712-0500 I - [initandlisten] Detected data files in C:\data\db\ created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2016-02-27T23:31:14.715-0500 W - [initandlisten] Detected unclean shutdown - C:\data\db\mongod.lock is not empty.
2016-02-27T23:31:14.718-0500 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
2016-02-27T23:31:14.720-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2016-02-27T23:31:14.954-0500 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-02-27T23:31:14.954-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory 'C:/data/db/diagnostic.data'
2016-02-27T23:31:14.962-0500 I NETWORK [initandlisten] waiting for connections on port 27017
2016-02-27T23:31:15.004-0500 I FTDC [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK
2016-02-27T23:32:10.255-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50834 #1 (1 connection now open)
```
I run mongo in a different powershell from install directory
```
C:\Program Files\MongoDB\Server\3.2\bin>mongo
MongoDB shell version: 3.2.3
connecting to: test
```
I confirm that there are some values in my db
```
> show dbs
fullstack 0.000GB
local 0.000GB
> use fullstack
switched to db fullstack
> show collections
students
> db.students.find({})
{ "_id" : ObjectId("56d1b951d14d4af940b77a14"), "name" : "Jose", "mark" : "99" }
{ "_id" : ObjectId("56d211e8d14d4af940b77a16"), "name" : "Jose", "Mark" : 99 }
{ "_id" : ObjectId("56d26bedc83f1574076c732b"), "name" : "Jose", "mark" : 99 }
{ "_id" : ObjectId("56d26c0fc83f1574076c732c"), "name" : "Kris", "mark" : 69 }
{ "_id" : ObjectId("56d271b1c83f1574076c732d"), "name" : "Kelly", "Grade" : 99 }
```
I run this code in pyCharm:
```
import pymongo
uri="mongodb://127.0.0.1:27017"
client = pymongo.MongoClient(uri)
database = client['fullstack']
collection = database['students']
students = collection.find({})
for student in students:
print(students)
```
but the interpreter returns nothing. No errors. It just doesn't return anything. The console does not even return a cursor dump. What am I doing wrong?
|
2016/02/28
|
[
"https://Stackoverflow.com/questions/35678885",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5885288/"
] |
The easiest way to get a variable by name is to search for it in the [`tf.global_variables()`](https://www.tensorflow.org/api_docs/python/tf/global_variables) collection:
```
var_23 = [v for v in tf.global_variables() if v.name == "Variable_23:0"][0]
```
This works well for ad hoc reuse of existing variables. A more structured approach, for when you want to share variables between multiple parts of a model, is covered in the [Sharing Variables tutorial](https://www.tensorflow.org/versions/r0.7/how_tos/variable_scope/index.html#sharing-variables).
|
If you want to get any stored variables from a model, use `tf.train.load_variable("model_folder_name", "Variable name")`
| 14,371
|
54,404,263
|
I have a python list A and a python list B with words as list elements. I need to check how often the list elements from list B are contained in list A. Is there a python method or how can I implement this efficient?
The python intersection method only tells me that a list element from list B occurs in list A, but not how often.
|
2019/01/28
|
[
"https://Stackoverflow.com/questions/54404263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10478079/"
] |
```
import collections
counter_A = collections.Counter(A)
for word in B:
print(word, '->', counter_A[word])
```
|
You could convert list B to a set, so that checking whether an element is in B is faster.
Then create a dictionary to count the number of times each element appears in A, if the element is also in the set built from B.
As mentioned in the comments, `collections.Counter` does the "heavy lifting" for you.
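A minimal sketch of that set/dict approach, using hypothetical lists A and B:

```
A = ['apple', 'pear', 'apple', 'plum', 'apple', 'pear']
B = ['apple', 'pear', 'cherry']

b_set = set(B)                      # fast membership checks
counts = {}
for word in A:
    if word in b_set:               # only count words that also appear in B
        counts[word] = counts.get(word, 0) + 1

print(counts)   # {'apple': 3, 'pear': 2}
```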
| 14,375
|
51,696,655
|
I am new to python and I have a scenario where there are multiple parquet files with file names in order. ex: par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder.
I need to read these parquet files starting from file1 in order and write them to a single csv file. After writing the contents of file1, file2's contents should be appended to the same csv without a header. Note that all files have the same column names and only the data is split into multiple files.
I learnt to convert a single parquet file to a csv file using pyarrow with the following code:
```
import pandas as pd
df = pd.read_parquet('par_file.parquet')
df.to_csv('csv_file.csv')
```
But I couldn't extend this to loop over multiple parquet files and append them to a single csv.
Is there a method in pandas to do this? Any other way to do this would also be of great help. Thank you.
|
2018/08/05
|
[
"https://Stackoverflow.com/questions/51696655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10183474/"
] |
I'm having a similar need and I read that the current Pandas version supports a directory path as an argument for the [read\_parquet](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_parquet.html) function. So you can read multiple parquet files like this:
```
import pandas as pd
df = pd.read_parquet('path/to/the/parquet/files/directory')
```
It concats everything into a single dataframe so you can convert it to a csv right after:
```
df.to_csv('csv_file.csv')
```
Make sure you have the following dependencies according to the doc:
* pyarrow
* fastparquet
|
If you are going to copy the files over to your local machine and run your code you could do something like this. The code below assumes that you are running your code in the same directory as the parquet files. It also assumes the naming of files as you provided above: "order. ex: par\_file1,par\_file2,par\_file3 and so on upto 100 files in a folder." If you need to search for your files then you will need to get the file names using `glob` and explicitly provide the path where you want to save the csv: `open(r'this\is\your\path\to\csv_file.csv', 'a')`. Hope this helps.
```
import pandas as pd
# Create an empty csv file and write the first parquet file with headers
with open('csv_file.csv','w') as csv_file:
print('Reading par_file1.parquet')
df = pd.read_parquet('par_file1.parquet')
df.to_csv(csv_file, index=False)
print('par_file1.parquet appended to csv_file.csv\n')
csv_file.close()
# create your file names and append to an empty list to look for in the current directory
files = []
for i in range(2,101):
files.append(f'par_file{i}.parquet')
# open files and append to csv_file.csv
for f in files:
print(f'Reading {f}')
df = pd.read_parquet(f)
with open('csv_file.csv','a') as file:
df.to_csv(file, header=False, index=False)
print(f'{f} appended to csv_file.csv\n')
```
You can remove the print statements if you want.
Tested in `python 3.6` using `pandas 0.23.3`
| 14,377
|
40,413,712
|
I updated wheel to the most recent version, but I'm not entirely sure what to make of this semi-cryptic error message:
```
Failed building wheel for mysql-python
Command "/Users/username/Desktop/Project/venv/bin/python -u -c "import setuptools,
tokenize;__file__='/private/var/folders/bg/_nsyc_vxasdfx___h11f3jw00000gn/T/pip-build-rBf9R1/mysql-python/setup.py';
f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');
f.close();exec(compile(code, __file__, 'exec'))"
install --record /var/folders/bg/_nsyc_vx4g___xbsh11f3jw00000gn/T/pip-Tjwbij-record/install-record.txt --single-version-externally-managed
--compile --install-headers /Users/username/Desktop/project/venv/include/site/python2.7/mysql-python" failed with error code 1 in
/private/var/folders/bg/_nsyc_vxasdf__xbsh11f3jw00000gn/T/pip-build-rBf9R1/mysql-python/
```
I tried
```
pip install --upgrade wheel
```
and I get
```
Requirement already up-to-date: wheel
```
MySQL version
```
mysql Ver 14.14 Distrib 5.7.10, for osx10.11 (x86_64) using EditLine wrapper
```
|
2016/11/04
|
[
"https://Stackoverflow.com/questions/40413712",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4407093/"
] |
As for me, it was because my system lacked the Python 3 development libraries. It warns that there is no "Python.h" while installing. The following commands fixed it for me.
`yum install python34-devel -y`
`pip3 install mysqlclient`
|
The topic is quite old, but for people who might still be suffering from this problem, this can be your solution:
First of all, open the folder where you use Python (3.5, 3.6, Anaconda, etc.). Then open cmd in that folder and run the command below:
```
$ pip install mysqlclient==1.3.12
```
| 14,387
|
56,588,265
|
I've tried replacing each string but I can't get it to work. I can get all the data between `<span>...</span>` but I can't if it is closed; how could I do it? I've tried replacing the text afterwards, but I am not able to do it. I am quite new to python.
I have also tried using `for x in soup.find_all('/span', class_ = "textLarge textWhite")` but that won't display anything.
**Relevant html:**
```
<div style="width:100%; display:inline-block; position:relative; text-
align:center; border-top:thin solid #fff; background-image:linear-
gradient(#333,#000);">
<div style="width:100%; max-width:1400px; display:inline-block;
position:relative; text-align:left; padding:20px 15px 20px 15px;">
<a href="/manpower-fit-for-military-service.asp" title="Manpower
Fit for Military Service ranked by country">
<div class="smGraphContainer"><img class="noBorder"
src="/imgs/graph.gif" alt="Small graph icon"></div>
</a>
<span class="textLarge textWhite"><span
class="textBold">FIT-FOR-SERVICE:</span> 18,740,382</span>
</div>
<div class="blockSheen"></div>
</div>
```
**Relevant python code:**
```
for y in soup.find_all('span', class_ = "textBold"):
print(y.text) #this gets FIT-FOR-SERVICE:
for x in soup.find_all('span', class_ = "textLarge textWhite"):
print(x.text) #this gets FIT-FOR-SERVICE: 18,740,382 but i only want the number
```
**Expected result**: `"18,740,382"`
|
2019/06/13
|
[
"https://Stackoverflow.com/questions/56588265",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11644852/"
] |
This problem is basically because the library you are using is in version 1.0.1 [mxgraph-js](https://www.npmjs.com/package/mxgraph-js), this library is very basic with the other features that the latest versions 4.0.6 [mxgraph](https://unpkg.com/mxgraph@4.0.6/javascript/mxClient.js) brings.
In order for your prototype to work, you must import a more current version of mxgraph, and you must do so using a script tag.
```
import React, { Component } from "react";
import PropTypes from "prop-types";
import ReactDOM from "react-dom";
class Graph extends React.Component{
constructor(props) {
super(props);
this.state = {};
this.containerId = "divGraph";
this.LoadGraph = this.LoadGraph.bind(this);
}
componentDidMount() {
var that= this;
const script = document.createElement("script");
// script.src = "https://unpkg.com/mxgraph@3.9.8/javascript/mxClient.js";
script.src = "https://unpkg.com/mxgraph@4.0.6/javascript/mxClient.js";
script.async = true;
document.body.appendChild(script);
setTimeout(() => {
if(that.refs && this.refs.divGraph){
console.log(20,mxClient);
that.LoadGraph();
};
},1000)
}
LoadGraph() {
var container = ReactDOM.findDOMNode(this.refs.divGraph);
var zoomPanel = ReactDOM.findDOMNode(this.refs.divZoom);
var xmlString='<mxGraphModel dx="492" dy="658" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="827" pageHeight="1169"> <root> <mxCell id="0" /> <mxCell id="1" parent="0" /> <mxCell id="2" value="" style="rounded=0;whiteSpace=wrap;html=1;" parent="1" vertex="1"> <mxGeometry x="20" y="20" width="120" height="60" as="geometry" /> </mxCell> <mxCell id="3" value="" style="whiteSpace=wrap;html=1;aspect=fixed;"parent="1" vertex="1"> <mxGeometry x="20" y="100" width="80" height="80" as="geometry" /> </mxCell> <mxCell id="4" value="" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;" parent="1" vertex="1"> <mxGeometry x="20" y="200" width="80" height="80" as="geometry" /> </mxCell> <mxCell id="5" value="" style="triangle;whiteSpace=wrap;html=1;" parent="1" vertex="1"> <mxGeometry x="20" y="300" width="60" height="80"as="geometry" /> </mxCell> <mxCell id="6" value="" style="shape=hexagon;perimeter=hexagonPerimeter2;whiteSpace=wrap;html=1;" parent="1" vertex="1"> <mxGeometry x="20" y="400" width="120" height="80" as="geometry" /> </mxCell> </root></mxGraphModel>';
if (!mxClient.isBrowserSupported()) {
mxUtils.error("Browser is not supported!", 200, false);
} else {
this.container = document.getElementById(this.containerId);
this.model = new mxGraphModel();
var graph = new mxGraph(this.container, this.model);
const parent = graph.getDefaultParent();
this.model.beginUpdate();
try
{
let xmlString = '<mxGraphModel dx="116" dy="608" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="850" pageHeight="1100"><root><mxCell id="0"/><mxCell id="1" parent="0"/><mxCell id="2" value="Template" style="rounded=1;whiteSpace=wrap;html=1;strokeColor=#00ACC1;strokeWidth=5;glass=0;comic=0;gradientColor=#b2ebf2;fillColor=#e0f7fa;fontSize=16;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="210" y="120" width="200" height="120" as="geometry"/></mxCell><mxCell id="3" value="Sub-Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ffb74d;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#fff3e0;fillColor=#FFFFFF;" vertex="1" parent="1"><mxGeometry x="210" y="260" width="200" height="120" as="geometry"/></mxCell><mxCell id="4" value="End" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Helvetica;fontSize=16;fillColor=#ffebee;strokeColor=#f44336;strokeWidth=4;gradientColor=#ffcdd2;" vertex="1" parent="1"><mxGeometry x="240" y="400" width="80" height="80" as="geometry"/></mxCell><mxCell id="5" value="Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="240" y="500" width="200" height="120" as="geometry"/></mxCell><mxCell id="6" value="Start" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Helvetica;fontSize=16;fillColor=#e8f5e9;strokeColor=#4caf50;strokeWidth=4;gradientColor=#c8e6c9;" vertex="1" parent="1"><mxGeometry x="270" y="640" width="80" height="80" as="geometry"/></mxCell><mxCell id="7" value="Start" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Helvetica;fontSize=16;fillColor=#e8f5e9;strokeColor=#4caf50;strokeWidth=4;gradientColor=#c8e6c9;" vertex="1" parent="1"><mxGeometry x="270" y="740" width="80" height="80" as="geometry"/></mxCell><mxCell id="8" value="Documentation" style="shape=document;whiteSpace=wrap;html=1;boundedLbl=1;strokeWidth=4;strokeColor=#9C27B0;gradientColor=#E1BEE7;fillColor=#F3E5F5;fontSize=16;" vertex="1" parent="1"><mxGeometry x="270" y="840" width="200" height="120" as="geometry"/></mxCell><mxCell id="9" value="Sub-Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ffb74d;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#fff3e0;fillColor=#FFFFFF;" vertex="1" parent="1"><mxGeometry x="300" y="980" width="200" height="120" as="geometry"/></mxCell><mxCell id="10" value="Template" style="rounded=1;whiteSpace=wrap;html=1;strokeColor=#00ACC1;strokeWidth=5;glass=0;comic=0;gradientColor=#b2ebf2;fillColor=#e0f7fa;fontSize=16;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="330" y="1120" width="200" height="120" as="geometry"/></mxCell><mxCell id="11" value="Decision" style="rhombus;whiteSpace=wrap;html=1;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#333333;strokeWidth=4;fillColor=#eeeeee;gradientColor=#e5e5e5;fontFamily=Helvetica;fontSize=16;fontColor=#000000;align=center;" vertex="1" parent="1"><mxGeometry x="360" y="1260" width="200" height="120" as="geometry"/></mxCell><mxCell id="12" value="End" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Helvetica;fontSize=16;fillColor=#ffebee;strokeColor=#f44336;strokeWidth=4;gradientColor=#ffcdd2;" vertex="1" parent="1"><mxGeometry x="390" y="1400" width="80" height="80" as="geometry"/></mxCell><mxCell id="13" value="Process" 
style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="390" y="1500" width="200" height="120" as="geometry"/></mxCell><mxCell id="14" value="Documentation" style="shape=document;whiteSpace=wrap;html=1;boundedLbl=1;strokeWidth=4;strokeColor=#9C27B0;gradientColor=#E1BEE7;fillColor=#F3E5F5;fontSize=16;" vertex="1" parent="1"><mxGeometry x="420" y="1640" width="200" height="120" as="geometry"/></mxCell><mxCell id="15" value="Phase" style="swimlane;html=1;horizontal=0;swimlaneLine=0;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#000000;strokeWidth=4;fillColor=#26C6DA;gradientColor=#00ACC1;fontFamily=Helvetica;fontSize=16;fontColor=#FFFFFF;gradientDirection=east;spacing=0;spacingTop=-4;swimlaneFillColor=none;collapsible=0;" vertex="1" parent="1"><mxGeometry x="450" y="1780" width="1020" height="200" as="geometry"/></mxCell><mxCell id="16" value="Phase" style="swimlane;html=1;horizontal=0;swimlaneLine=0;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#000000;strokeWidth=4;fillColor=#26C6DA;gradientColor=#00ACC1;fontFamily=Helvetica;fontSize=16;fontColor=#FFFFFF;gradientDirection=east;spacing=0;spacingTop=-4;swimlaneFillColor=none;collapsible=0;" vertex="1" parent="1"><mxGeometry x="480" y="2000" width="1020" height="200" as="geometry"/></mxCell><mxCell id="17" value="Phase" style="swimlane;html=1;horizontal=0;swimlaneLine=0;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#000000;strokeWidth=4;fillColor=#26C6DA;gradientColor=#00ACC1;fontFamily=Helvetica;fontSize=16;fontColor=#FFFFFF;gradientDirection=east;spacing=0;spacingTop=-4;swimlaneFillColor=none;collapsible=0;" vertex="1" parent="1"><mxGeometry x="510" y="2220" width="1020" height="200" as="geometry"/></mxCell><mxCell id="18" value="Phase" style="swimlane;html=1;horizontal=0;swimlaneLine=0;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#000000;strokeWidth=4;fillColor=#26C6DA;gradientColor=#00ACC1;fontFamily=Helvetica;fontSize=16;fontColor=#FFFFFF;gradientDirection=east;spacing=0;spacingTop=-4;swimlaneFillColor=none;collapsible=0;" vertex="1" parent="1"><mxGeometry x="540" y="2440" width="1020" height="200" as="geometry"/></mxCell><mxCell id="19" value="Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="570" y="2660" width="200" height="120" as="geometry"/></mxCell><mxCell id="20" value="Start" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Helvetica;fontSize=16;fillColor=#e8f5e9;strokeColor=#4caf50;strokeWidth=4;gradientColor=#c8e6c9;" vertex="1" parent="1"><mxGeometry x="600" y="2800" width="80" height="80" as="geometry"/></mxCell><mxCell id="21" value="Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="600" y="2900" width="200" height="120" as="geometry"/></mxCell><mxCell id="22" value="Action" style="rounded=1;whiteSpace=wrap;html=1;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#e91e63;strokeWidth=4;fillColor=#fce4ec;gradientColor=#f8bbd0;fontFamily=Helvetica;fontSize=16;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="630" y="3040" width="280" height="40" 
as="geometry"/></mxCell><mxCell id="23" value="Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="660" y="3100" width="200" height="120" as="geometry"/></mxCell><mxCell id="24" value="Sub-Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ffb74d;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#fff3e0;fillColor=#FFFFFF;" vertex="1" parent="1"><mxGeometry x="690" y="3240" width="200" height="120" as="geometry"/></mxCell><mxCell id="25" value="End" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Helvetica;fontSize=16;fillColor=#ffebee;strokeColor=#f44336;strokeWidth=4;gradientColor=#ffcdd2;" vertex="1" parent="1"><mxGeometry x="720" y="3380" width="80" height="80" as="geometry"/></mxCell><mxCell id="26" value="Decision" style="rhombus;whiteSpace=wrap;html=1;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#333333;strokeWidth=4;fillColor=#eeeeee;gradientColor=#e5e5e5;fontFamily=Helvetica;fontSize=16;fontColor=#000000;align=center;" vertex="1" parent="1"><mxGeometry x="720" y="3480" width="200" height="120" as="geometry"/></mxCell><mxCell id="27" value="Template" style="rounded=1;whiteSpace=wrap;html=1;strokeColor=#00ACC1;strokeWidth=5;glass=0;comic=0;gradientColor=#b2ebf2;fillColor=#e0f7fa;fontSize=16;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="750" y="3620" width="200" height="120" as="geometry"/></mxCell><mxCell id="28" value="Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="780" y="3760" width="200" height="120" as="geometry"/></mxCell><mxCell id="29" value="Start" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Helvetica;fontSize=16;fillColor=#e8f5e9;strokeColor=#4caf50;strokeWidth=4;gradientColor=#c8e6c9;" vertex="1" parent="1"><mxGeometry x="810" y="3900" width="80" height="80" as="geometry"/></mxCell><mxCell id="30" value="Documentation" style="shape=document;whiteSpace=wrap;html=1;boundedLbl=1;strokeWidth=4;strokeColor=#9C27B0;gradientColor=#E1BEE7;fillColor=#F3E5F5;fontSize=16;" vertex="1" parent="1"><mxGeometry x="810" y="4000" width="200" height="120" as="geometry"/></mxCell><mxCell id="31" value="Phase" style="swimlane;html=1;horizontal=0;swimlaneLine=0;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#000000;strokeWidth=4;fillColor=#26C6DA;gradientColor=#00ACC1;fontFamily=Helvetica;fontSize=16;fontColor=#FFFFFF;gradientDirection=east;spacing=0;spacingTop=-4;swimlaneFillColor=none;collapsible=0;" vertex="1" parent="1"><mxGeometry x="840" y="4140" width="1020" height="200" as="geometry"/></mxCell><mxCell id="32" value="Action" style="rounded=1;whiteSpace=wrap;html=1;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#e91e63;strokeWidth=4;fillColor=#fce4ec;gradientColor=#f8bbd0;fontFamily=Helvetica;fontSize=16;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="870" y="4360" width="280" height="40" as="geometry"/></mxCell><mxCell id="33" value="Documentation" style="shape=document;whiteSpace=wrap;html=1;boundedLbl=1;strokeWidth=4;strokeColor=#9C27B0;gradientColor=#E1BEE7;fillColor=#F3E5F5;fontSize=16;" vertex="1" parent="1"><mxGeometry x="900" y="4420" width="200" height="120" as="geometry"/></mxCell><mxCell id="34" value="Phase" 
style="swimlane;html=1;horizontal=0;swimlaneLine=0;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#000000;strokeWidth=4;fillColor=#26C6DA;gradientColor=#00ACC1;fontFamily=Helvetica;fontSize=16;fontColor=#FFFFFF;gradientDirection=east;spacing=0;spacingTop=-4;swimlaneFillColor=none;collapsible=0;" vertex="1" parent="1"><mxGeometry x="930" y="4560" width="1020" height="200" as="geometry"/></mxCell><mxCell id="35" value="Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="960" y="4780" width="200" height="120" as="geometry"/></mxCell><mxCell id="36" value="Sub-Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ffb74d;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#fff3e0;fillColor=#FFFFFF;" vertex="1" parent="1"><mxGeometry x="990" y="4920" width="200" height="120" as="geometry"/></mxCell><mxCell id="37" value="Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="1020" y="5060" width="200" height="120" as="geometry"/></mxCell></root></mxGraphModel>';
var doc = mxUtils.parseXml(xmlString);
var codec = new mxCodec(doc);
codec.decode(doc.documentElement, graph.getModel());
}
finally
{
graph.model.endUpdate();
}
}
}
render() {
return <div className="graph-container" ref="divGraph" id="divGraph" />;
}
}
export default Graph;
```
|
I found a way to resolve this in my webpack project (Vue.js); maybe it's the same problem.
All the mx objects must be in the window dict,
e.g. `window['mxCell'] = mxCell`,
so I wrote this:
```
const {mxCell .......} = mxgraph({
mxImageBasePath: ..... })
// register all object
window['mxClient'] = mxClient
window['mxGraph'] = mxGraph
window['mxUtils'] = mxUtils
window['mxEvent'] = mxEvent
window['mxVertexHandler'] = mxVertexHandler
window['mxConstraintHandler'] = mxConstraintHandler
window['mxConnectionHandler'] = mxConnectionHandler
window['mxEdgeHandler'] = mxEdgeHandler
window['mxCellHighlight'] = mxCellHighlight
window['mxEditor'] = mxEditor
window['mxGraphModel'] = mxGraphModel
window['mxKeyHandler'] = mxKeyHandler
window['mxStencilRegistry'] = mxStencilRegistry
window['mxConstants'] = mxConstants
window['mxGraphView'] = mxGraphView
window['mxCodec'] = mxCodec
window['mxToolbar'] = mxToolbar
window['mxRubberband'] = mxRubberband
window['mxShape'] = mxShape
window['mxImage'] = mxImage
window['mxConnectionConstraint'] = mxConnectionConstraint
window['mxPoint'] = mxPoint
window['mxDivResizer'] = mxDivResizer
window['mxGraphHandler'] = mxGraphHandler
window['mxPerimeter'] = mxPerimeter
window['mxPolyline'] = mxPolyline
window['mxCellState'] = mxCellState
window['mxOutline'] = mxOutline
window['mxCell'] = mxCell
window['mxGeometry'] = mxGeometry
window['mxGuide'] = mxGuide
window['mxDefaultKeyHandler'] = mxDefaultKeyHandler
window['mxDefaultToolbar'] = mxDefaultToolbar
window['mxXmlCanvas2D'] = mxXmlCanvas2D
window['mxImageExport'] = mxImageExport
// your code here
```
| 14,388
|
40,764,894
|
I am trying to run Pylint and I am getting the below error:
>
> pkg\_resources.DistributionNotFound: The 'backports.functools-lru-cache' distribution was not found and is required by pylint
>
>
>
I found the below link, but not sure what to do with these files or where to place them.
<https://pypi.python.org/simple/backports.functools-lru-cache/>
How can I fix this?
|
2016/11/23
|
[
"https://Stackoverflow.com/questions/40764894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4663869/"
] |
I had the same problem and I installed two missing dependencies (a wrong configuration of pylint, or an outdated pip?)
Just do:
```
pip install backports.functools_lru_cache
```
then if you get an error like:
```
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: configparser
```
just do:
```
pip install configparser
```
|
I had this problem when running within a [virtual environment](https://virtualenv.pypa.io/en/stable/) on CentOS 7.
On CentOS, the backports module is packaged as a yum package (`python-backports.x86_64`).
The solution was to create the virtualenv using the [`--system-site-packages`](https://virtualenv.pypa.io/en/stable/userguide/#the-system-site-packages-option) option.
First verify that the `python-backports` package is installed:
`yum list installed | grep python-backports`
Then create/recreate your virtual environment:
`virtualenv env --system-site-packages`
This allows the virtualenv's pylint to see the backports module when installing.
Then install pylint within the virtual environment:
`env/bin/pip install pylint`
| 14,389
|
13,358,283
|
I tried to start a smtp server with python with the following code:
```
import smtplib
smtp_server = smtplib.SMTP('localhost')
```
And I get the following error:
```
File "test.py", line 2, in <module>
smtp_server = smtplib.SMTP('localhost')
File "/usr/local/lib/python2.7/smtplib.py", line 242, in __init__
(code, msg) = self.connect(host, port)
File "/usr/local/lib/python2.7/smtplib.py", line 302, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "/usr/local/lib/python2.7/smtplib.py", line 277, in _get_socket
return socket.create_connection((port, host), timeout)
File "/usr/local/lib/python2.7/socket.py", line 571, in create_connection
raise err
```
I tried to `telnet localhost 25` and `telnet localhost` but they both give the following error:
```
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host: Connection refused
```
The same python code runs perfectly well on one of my other machines.
Any advice ?
Thanks.
UPDATE:
The machine I'm having problem with is running Python 2.7.2, while the machine that the code works well is running Python 2.6.2. I don't know if this is one of the reasons.
|
2012/11/13
|
[
"https://Stackoverflow.com/questions/13358283",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/639165/"
] |
`smtplib` is an SMTP protocol client. What you are looking for is [`smtpd`](http://docs.python.org/2/library/smtpd.html).
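A minimal sketch of a local debugging SMTP server with `smtpd` (assuming the Python 2.7 standard library used in the question; port 1025 is an arbitrary choice so root privileges are not needed):

```
import smtpd
import asyncore

# Prints every received message to stdout instead of delivering it.
server = smtpd.DebuggingServer(('localhost', 1025), None)
asyncore.loop()
```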
|
The problem is that smtplib does not provide an SMTP server implementation; it implements an SMTP client. Most likely, the code tries to connect to localhost and fails. See <http://docs.python.org/2/library/smtplib.html>
Is the other machine you write about already running an SMTP server?
And what are you trying to do?
| 14,391
|
42,762,924
|
To view the installed libraries on an environment I run this code within a Jupyter Python notebook cell :
```
%%bash
pip freeze
```
This works, but how to conditionally execute this code ?
This is my attempt :
```
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
def f(x1):
if(x1 == True):
f2()
return x1
interact(f , x1 = False)
def f2():
%%bash
pip freeze
```
But evaluating the cell throws error :
```
File "<ipython-input-186-e8a8ec97ab2d>", line 15
pip freeze
^
SyntaxError: invalid syntax
```
To generate the checkbox I'm using ipywidgets : <https://github.com/ipython/ipywidgets>
Update :
Running `pip freeze` within `check_call` returns 0 results :
[](https://i.stack.imgur.com/Cf9TP.png)
Running
```
%%bash
pip freeze
```
Returns installed libraries so 0 is not correct.
Is `subprocess.check_call("pip freeze", shell=True)` correct ?
Update 2 :
This works :
```
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import subprocess
def f(View):
if(View == True):
f2()
interact(f , View = False)
def f2():
print(subprocess.check_output(['pip', 'freeze']))
```
|
2017/03/13
|
[
"https://Stackoverflow.com/questions/42762924",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/470184/"
] |
You can just use the standard Python way:
```
import subprocess
print(subprocess.check_output(['pip', 'freeze']))
```
Then your function will work in any Python environment.
|
The short explanation is that the notebook has interactive commands which are handled by the notebook itself, before the Python interpreter even sees them. `%%bash` is an example of such a command; you cannot put this in Python code, because it's not Python.
Using `bash` doesn't actually add anything here *per se;* using the shell offers many interactive benefits, and of course, in an interactive notebook, offering the user access to the shell is a powerful mechanism for allowing users to execute external processes; but in this particular case, with noninteractive execution, there are no actual benefits from putting the shell between yourself and `pip`, so you might simply want
```
import subprocess
if some_condition:
p = subprocess.run(['pip', 'freeze'],
stdout=subprocess.PIPE, universal_newlines=True)
```
(Notice the absence of `shell=True` because we don't need or want the shell here.)
If you want the captured exit code or the output from `pip freeze` they are available as attributes of the returned object `p`. See the [`subprocess.run` documentation](https://docs.python.org/3/library/subprocess.html#subprocess.run) for details. Briefly, `p.returncode` will be 0 if the command succeeded, and the output will be in `p.stdout`.
Older versions of Python had a diverse collection of special-purpose wrappers around `subprocess.Popen` like `check_call`, `check_output`, etc. but these have all been subsumed by `subprocess.run` in recent versions. If you need to support Python versions before 3.5, the legacy functions are still available, but they should arguably be avoided in new code.
| 14,392
|
61,343,399
|
I want to ask about h3-py installation on Windows.
I have tried to install h3 on Windows with Python:
I ran the command `pip install h3` and it installed.
After the install, when I try to import it, I get this error:
```
from h3 import h3
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python36\lib\site-packages\h3\h3.py", line 39, in <module>
libh3 = cdll.LoadLibrary('{}/{}'.format(_dirname, libh3_path))
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python36\lib\ctypes\__init__.py", line 426, in LoadLibrary
return self._dlltype(name)
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python36\lib\ctypes\__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found
```
|
2020/04/21
|
[
"https://Stackoverflow.com/questions/61343399",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7765614/"
] |
We're in the process of migrating the wrapper to Cython at <https://github.com/uber/h3-py/tree/cython>
Would you mind trying to install this cython version with these (temporary) install steps:
```
pip install scikit-build
pip install git+https://github.com/uber/h3-py.git@cython
```
Edit:
We've released `3.6.1` along with pre-built wheels on PyPI and conda. Install should now be as simple as
```
pip install h3
```
|
Apparently there was a problem with the package, and after fixing it in release v3.6.1 it works. They fixed the install issues.
As mentioned in this issue on GitHub:
<https://github.com/uber/h3-py/issues/32>
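A minimal sanity check after upgrading (the coordinates are arbitrary placeholders; assumes h3-py >= 3.6.1 installed from PyPI, whose top-level module exposes the v3 API directly):

```
import h3

# Index an arbitrary lat/lng pair at resolution 9; this import previously
# failed on Windows with WinError 126.
print(h3.geo_to_h3(37.7749, -122.4194, 9))
```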
| 14,393
|
2,152,463
|
I get the following errors, I've placed [my name] for anonymity:
```
>>> python /Users/[myname]/Desktop/setuptools-0.6c11/ez_setup.py
File "<stdin>", line 1
python /Users/[myname]/Desktop/setuptools-0.6c11/ez_setup.py
^
SyntaxError: invalid syntax
```
If you can't see the ^ is under the 11.
Or I get this error:
```
>>> python /Users/[myname]/Desktop/EZ_tutorial/ez_setup.py
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'python' is not defined
```
|
2010/01/28
|
[
"https://Stackoverflow.com/questions/2152463",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260666/"
] |
The ez\_setup.py script may or may not work depending on your environment. If not, follow the instructions [here](http://pypi.python.org/pypi/setuptools). In particular, from the shell, make sure that the python 2.6 you installed is now invoked by the command `python`:
```
$ python
Python 2.6.4 (r264:75821M, Oct 27 2009, 19:48:32)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> ^D
```
If not, modify your shell's PATH environment variable. Then download the setuptools 2.6 python egg from [here](http://pypi.python.org/pypi/setuptools), change to your browser's download directory, and run the downloaded script:
```
$ cd ~/Downloads # substitute the appropriate directory name
$ sh setuptools-0.6c11-py2.6.egg
```
|
Try running that command from a shell (i.e. straight from Terminal.app), not from inside the python interpreter.
| 14,394
|
69,460,583
|
I am deploying my project with Heroku. My Django version is 3.2.8 and my Python version is 3.9.7. Heroku's documentation says Python 3.9.7 deployment is supported, but during my push there is an error saying the version above is not supported. What should I do? Thank you for your reply.
This is my cmd

my requirements.txt

|
2021/10/06
|
[
"https://Stackoverflow.com/questions/69460583",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17086297/"
] |
I parsed the file you provided as an example [here](https://www.sec.gov/Archives/edgar/data/1087699/000108769999000001/0001087699-99-000001.txt). I first copied the data from the file to a txt file. The file `copied.txt` needs to be in the current working directory. This could give you an idea how to proceed.
```r
library(tidyverse)
df <- read_file("copied.txt") %>%
# trying to extract data only from the table
(function(x){
tbl_beg <- str_locate(x, "Managers Sole")[2] + 1
tbl_end <- str_locate(x, "\r\n</TABLE>")[1]
str_sub(x, tbl_beg, tbl_end)
}) %>%
# removing some unwanted characters from the beginning and the end of the extracted string
str_sub(start = 4, end = -3) %>%
# splitting for individual lines
str_split('\"\r\n\"') %>% unlist() %>%
# removing broken line break
str_remove("\r\n") %>%
# replacing the original text where there are spaces with one, where there is underscore
# the reason for that is that I need to split the rows into columns using space
str_replace_all("Sole Managers Sole", " Managers_Sole") %>%
# removing extra spaces
str_squish() %>%
# reversing the order of the line (I need to split from the right because the company name contains additional spaces)
# if the company name is the last one, it is okey that there are additional spaces
stringi::stri_reverse() %>%
str_split(pattern = " ", n = 6, simplify = T) %>%
# making the order to the original one
apply(MARGIN = 2, FUN = stringi::stri_reverse) %>%
as_tibble() %>%
select(c(6:1)) %>%
set_names(nm = c("name_of_issuer", "title_of_cl", "cusip_number", "fair_market_value", "shares", "shares_of_princip_mngrs"))
# A tibble: 47 x 6
name_of_issuer title_of_cl cusip_number fair_market_value shares shares_of_princip_mngrs
<chr> <chr> <chr> <chr> <chr> <chr>
1 America Online COM 02364J104 2,940,000 20,000 Managers_Sole
2 Anheuser Busch COM 35229103 3,045,000 40,000 Managers_Sole
3 At Home COM 45919107 787,500 5,000 Managers_Sole
4 AT&T COM 1957109 5,985,937 75,000 Managers_Sole
5 Bank Toyko COM 65379109 700,000 50,000 Managers_Sole
6 Bay View Capital COM 07262L101 14,958,437 792,500 Managers_Sole
7 Broadcast.com COM 111310108 2,954,687 25,000 Managers_Sole
8 Chase Manhattan COM 16161A108 10,578,750 130,000 Managers_Sole
9 Chase Manhattan 4/85C 16161A9DQ 59,375 500 Managers_Sole
10 Cisco Systems COM 17275R102 4,930,312 45,000 Managers_Sole
```
|
You can use `httr` package to request the page:
```
> install.packages("httr")
# follow instructions etc
```
Then in `R` shell (you might need a restart):
```r
> httr::GET("https://www.sec.gov/Archives/edgar/data/1087699/000108769999000001/0001087699-99-000001.txt")
```
This will download the file successfully; however, I'm not fluent enough in R to parse this text, but it looks pretty easy: split the text by `<TABLE>`, split on newlines for rows, and split every row on spaces for columns.
| 14,395
|
1,233,284
|
I want to hit a URL from Python in Google App Engine. Can anyone please tell me how I can hit the URL using Python in Google App Engine?
|
2009/08/05
|
[
"https://Stackoverflow.com/questions/1233284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You can use the [URLFetch API](http://code.google.com/appengine/docs/python/urlfetch/overview.html)
```
from google.appengine.api import urlfetch
url = "http://www.google.com/"
result = urlfetch.fetch(url)
if result.status_code == 200:
doSomethingWithResult(result.content)
```
|
It always depends on whether you need POST or GET. urllib can post to a form somewhere else, even for the rather tricky case of validating between different hashes (sha and md5):
```
import urllib
data = urllib.urlencode({"id" : str(d), "password" : self.request.POST['passwd'], "edit" : "edit"})
result = urlfetch.fetch(url="http://www..../Login.do",
payload=data,
method=urlfetch.POST,
headers={'Content-Type': 'application/x-www-form-urlencoded'})
if result.content.find('loggedInOrYourInfo') > 0:
self.response.out.write("clear")
else:
self.response.out.write("wrong password ")
```
| 14,396
|
46,309,485
|
I have data that looks like this:
```
['6005600401']
['000000: PUSH1 0x05']
['000002: PUSH1 0x04']
['000004: ADD']
```
The initial representation it's derived from is here:
```
6005600401
000000: PUSH1 0x05
000002: PUSH1 0x04
000004: ADD
```
The output I'd like to create is like so:
```
PUSH1 0x60, PUSH1 0x40, MSTORE, CALLDATASIZE, ISZERO, PUSH2 0x006c, JUMPI, PUSH1 0xe0, PUSH1 0x02, EXP, PUSH1 0x00, CALLDATALOAD, DIV
```
I've been experimenting with different techniques of isolating that data in the python shell like so:
```
>>> date_div = "Blah blah blah, Updated: Aug. 23, 2012"
>>> date_div.split('Updated: ')
['Blah blah blah, ', 'Aug. 23, 2012']
>>> date_div.split('Updated: ')[-1]
'Aug. 23, 2012'
>>> line = ['000000: PUSH1 0x05']
>>> date_div.split(':')
['Blah blah blah, Updated', ' Aug. 23, 2012']
>>> line.split(':')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'list' object has no attribute 'split'
>>> line = ["000000: PUSH1 0x05"]
>>> line.split(':')
```
But as of yet I've not been able to eliminate all the superfluous characters. How can I employ regex such that my data goes from a representation like this:
```
['6005600401']
['000000: PUSH1 0x05']
['000002: PUSH1 0x04']
['000004: ADD']
```
To one like this:
```
PUSH1 0x60, PUSH1 0x40, MSTORE, CALLDATASIZE, ISZERO, PUSH2 0x006c, JUMPI, PUSH1 0xe0, PUSH1 0x02, EXP, PUSH1 0x00, CALLDATALOAD, DIV
```
---
Here is the script used to create it:
```
import csv
import sys
import subprocess
def my_test_func(filename, data):
with open(filename, 'w') as fd:
fd.write(data)
fd.write('\n')
return subprocess.check_output(['evm', 'disasm', filename])
if '__main__' == __name__:
file_name = sys.argv[1]
byte_code = sys.argv[2]
status = my_test_func(file_name, byte_code)
edits = csv.reader(status.splitlines(), delimiter=",")
for row in edits:
print(row)
```
|
2017/09/19
|
[
"https://Stackoverflow.com/questions/46309485",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3787253/"
] |
Why not just split on ':'?
```
for line in status.splitlines():
    data = line.split(':')
    if len(data) < 2:
        continue
    print(data[1].strip())
```
|
This is one good way:
```
import re

opcodes_list = list()
for element in status.split('\n'):
    result = re.search(r"\b[A-Z].+", element)
    if result:
        opcodes_list.append(result.group(0))
```
| 14,397
|
37,467,680
|
In the following code, there is an indentation error at line 5.
What I want is: when the if condition at line 4 is true, then break
should execute; otherwise div=div+2 should execute.
```
max_num=input()
for num in range(2,max_num+1):
for div in range(3,max_num/2):
if(num%div==0):break
div=div+2
else: print num
if(num==2): num=num+1
else : num=num+2
```
I am new with python. please help me out.. :)
|
2016/05/26
|
[
"https://Stackoverflow.com/questions/37467680",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5774499/"
] |
Please use 2 or 4 spaces for indentation. I see your indentation is inconsistent - one time you use 1 space and the others you use 3.
What you did **should** work as I see, but it's just inconsistent.
Here is a slightly more organized version:
```
max_num=input()
for num in range(2,max_num+1):
for div in range(3,max_num/2):
if(num%div==0): break
div=div+2
else:
print num
if(num==2):
num=num+1
else:
num=num+2
```
You can also optimize it to work more efficiently like so:
```
# Makes sure only ints are allowed. Else throws an exception
max_num = int(raw_input())
for num in range(2, max_num + 1):
try:
(div for div in range(3, max_num / 2, 2) if num % div == 0).next()
except StopIteration:
print num
# This section is not used as I see
# if(num==2):
# num=num+1
# else:
# num=num+2
```
|
I suggest adding this code, but I would want to know what the wanted result of your code is.
```
max_num=input()
for num in range(2,max_num+1):
for div in range(3,max_num/2):
if(num%div==0):break
else:div=div+2
print num
if(num==2): num=num+1
else : num=num+2
```
| 14,398
|
48,254,890
|
I'm learning about how to import libraries from directories and I've stumbled upon an error I can't seem to figure out. I'm using the IMDBpy python library in a folder called lib. Below, I am importing the module which no longer returns any errors, but when I move to line number 6, I get the following error:
```
Traceback (most recent call last):
File "testlib.py", line 4, in <module>
from lib.imdb import IMDb
File "/home/user/Scripts/test/lib/imdb/__init__.py", line 49, in <module>
import imdb._logging
ImportError: No module named 'imdb'
```
Python is throwing an error because the python test file I'm writing is 2 directories up. Not sure how to get it working without putting everything in the same directory.
```
#!/usr/bin/env python3
from lib.imdb import IMDb
# Create the object that will be used to access the IMDb's database.
ia = imdb.IMDb() # by default access the web.
```
|
2018/01/14
|
[
"https://Stackoverflow.com/questions/48254890",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3808507/"
] |
You can either write:
```
from lib.imdb import IMDb
ia = IMDb()
```
or:
```
from lib import imdb
ia = imdb.IMDb()
```
But importing `IMDb` and then calling `imdb.IMDb()` does not work.
One other point to mention: If your *testlib.py* resides above the `lib` folder, then you need to add an `__init__.py` file within `lib` for your import to work. The file can be completely empty.
---
**Update**: This package uses import statements that are not relative to the package folder, but to an arbitrary folder in `sys.path`. An illustration of this is the exact line that causes the error:
```
# imdb.__init__.py
import imdb._logging
```
This works only if the package resides in a folder that is part of `sys.path`, so either in the same path as the script that does the `from imdb import IMDb` statement or in the python dist-packages path (simplified explanation). This and all equivalent imports could be replaced by
```
# imdb.__init__.py
from . import _logging
```
In which case the import is relative to the module, so there is no need for the imdbpy package to reside in a directory in `sys.path`.
The consequence for us now is that we cannot simply put imdb in a sub-folder, we need to either put it in a folder from `sys.path` or add the folder it resides in to `sys.path`. So, the options are:
* install the package
* move the imdb folder up into the current folder
* write:
```
import sys
sys.path.insert(0, 'absolute/path/to/lib')
```
before the import statement.
In any of the above cases you will be able to do:
```
from imdb import IMDb
ia = IMDb()
```
|
As mentioned above, importing a module is different from importing a single name out of it.
When you import a module such as `imdb` you may use its functions. However, when you import a single function or class, you can only use that name, not the module it came from.
Check this for better examples: [Python 3 modules](https://docs.python.org/3/tutorial/modules.html#modules)
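A minimal sketch of the difference, using the same `lib` package layout as the question:

```
# Importing the module: qualify names through it.
from lib import imdb
ia = imdb.IMDb()

# Importing a single name: use it directly; `imdb` itself is not defined here.
from lib.imdb import IMDb
ia = IMDb()
```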
| 14,399
|
68,083,635
|
I am using a Queensland government API. The format for it is JSON, and the keys have spaces in them. I am trying to access them with python and flask. I can pass through the data required to the HTML file yet cannot print it using flask.
```
{% block content %}
<div>
{% for suburb in data %}
<p>{{ suburb.Age }}</p>
<p>{{ suburb.Manslaughter Unlawful Striking Causing Death }}</p>
{% endfor %}
</div>
{% endblock content %}
```
Above is the code for the HTML file. "Manslaughter Unlawful Striking Causing Death" is the key I am trying to access but it comes up with this error when I load the page.
```
from flask import Flask, render_template
import requests
import json
app = Flask(__name__)
@app.route('/', methods=['GET'])
def index():
req = requests.get("https://www.data.qld.gov.au/api/3/action/datastore_search?resource_id=8b29e643-56a3-4e06-81da-c913f0ecff4b&limit=5")
data = json.loads(req.content)
print(data)
district=[]
for Districts in data['result']['records']:
district.append(Districts)
return render_template('index.html', data=district)
if __name__=="__main__":
app.run(debug=True)
```
Above now is all the python code I am running.
[The error that is shown on the webpage when trying to load it.](https://i.stack.imgur.com/Hq0LY.png)
Any suggestions, please?
|
2021/06/22
|
[
"https://Stackoverflow.com/questions/68083635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16289640/"
] |
There are lots of fundamental problems in the code. Most notably, `int**` is not a 2D array and cannot point at one.
* `i<2` typo in the `for(int j...` loop.
* `i < n` in the `for(int k...` loop.
* To allocate a 2D array you must do: `int (*a)[2] = malloc(sizeof(int) * 2 * 2);`. Or if you will `malloc( sizeof(int[2][2]) )`, same thing.
* To access a 2D array you do `a[i][j]`.
* To pass a 2D array to a function you do `void func (int n, int arr[n][n]);`
* Returning a 2D array from a function is trickier, easiest for now is just to use `void*` and get that working.
* `malloc` doesn't initialize the allocated memory. If you want to do `+=` on `c` you should use `calloc` instead, to set everything to zero.
* Don't write an unreadable mess like `*(*(c + i) + j)`. Write `c[i][j]`.
I fixed these problems and got something that runs. You check if the algorithm is correct from there.
```
#include <stdio.h>
#include <stdlib.h>
void* multiply(int n, int a[n][n], int b[n][n]) {
int (*c)[n] = calloc(1, sizeof(int[n][n]));
for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) {
for (int k = 0; k < n; k++) {
c[i][j] += a[i][k] * b[k][j];
}
}
}
return c;
}
int main() {
int (*a)[2] = malloc(sizeof(int[2][2]));
int (*b)[2] = malloc(sizeof(int[2][2]));
for (int i = 0; i < 2; i++) {
for (int j = 0; j < 2; j++) {
a[i][j] = i - j;
b[i][j] = j - i;
}
}
int (*c)[2] = multiply(2, a, b);
for (int i = 0; i < 2; i++) {
for (int j = 0; j < 2; j++) {
printf("c[%d][%d] = %d\n", i, j, c[i][j]);
}
}
free(a);
free(b);
free(c);
return 0;
}
```
|
You need to fix multiple errors here:
1/ line 5/24/28: `int **c = malloc(sizeof(int*) * n )`
2/ line 15: `k<n`
3/ Remark: use `a[i][j]` instead of `*(*(a+i)+j)`
4/ line 34: `j<2`
5/ check how to create a 2d matrix using pointers.
```
#include <stdio.h>
#include <stdlib.h>
int** multiply(int** a, int** b, int n) {
int **c = malloc(sizeof(int*) * n );
for (int i=0;i<n;i++){
        c[i]=calloc(n, sizeof(int)); /* zero-initialise: c[i][j] is accumulated with += below */
}
// Rows of c
for (int i = 0; i < n; i++) {
// Columns of c
for (int j = 0; j < n; j++) {
// c[i][j] = Row of a * Column of b
for (int k = 0; k < n; k++) {
c[i][j] += a[i][k] * b[k][j];
}
}
}
return c;
}
int main() {
int **a = malloc(sizeof(int*) * 2);
for (int i=0;i<2;i++){
a[i]=malloc(sizeof(int)*2);
}
    int **b = malloc(sizeof(int*) * 2);
for (int i=0;i<2;i++){
b[i]=malloc(sizeof(int)*2);
}
for (int i = 0; i < 2; i++) {
for (int j = 0; j < 2; j++) {
a[i][j] = i - j;
b[i][j] = i - j;
}
}
int **c = multiply(a, b, 2);
for (int i = 0; i < 2; i++) {
for (int j = 0; j < 2; j++) {
printf("c[%d][%d] = %d\n", i, j, c[i][j]);
}
}
free(a);
free(b);
free(c);
return 0;
}
```
| 14,400
|
47,969,587
|
I can't figure out how to open a dng file in opencv.
The file was created when using the pro options of the Samsung Galaxy S7.
The images that are created when using those options are a dng file as well as a jpg of size 3024 x 4032 (I believe that is the dimensions of the dng file as well).
I tried using the answer from [here](https://stackoverflow.com/questions/18682830/opencv-python-display-raw-image) (except with 3 colors instead of grayscale) like so:
```
import numpy as np
fd = open("image.dng", 'rb')
rows = 4032
cols = 3024
colors = 3
f = np.fromfile(fd, dtype=np.uint8,count=rows*cols*colors)
im = f.reshape((rows, cols,colors)) #notice row, column format
fd.close()
```
However, i got the following error:
```
cannot reshape array of size 24411648 into shape (4032,3024,3)
```
Any help would be appreciated
|
2017/12/25
|
[
"https://Stackoverflow.com/questions/47969587",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8461821/"
] |
As far as I know, DNG files can be compressed (even though it is a lossless format), so you will need to decode your dng image first. <https://www.libraw.org/> is capable of doing that.
There is a python wrapper for that library (<https://pypi.python.org/pypi/rawpy>):
```
import rawpy
import imageio
path = 'image.dng'
with rawpy.imread(path) as raw:
rgb = raw.postprocess()
```
|
[process\_raw](https://github.com/DIYer22/process_raw) supports both reading and writing the `.dng` raw image format. Here is a Python example:
```py
import cv2
from process_raw import DngFile
# Download raw.dng for test:
# wget https://github.com/yl-data/yl-data.github.io/raw/master/2201.process_raw/raw-12bit-GBRG.dng
dng_path = "./raw-12bit-GBRG.dng"
dng = DngFile.read(dng_path)
rgb1 = dng.postprocess() # demosaicing by rawpy
cv2.imwrite("rgb1.jpg", rgb1[:, :, ::-1])
rgb2 = dng.demosaicing(poww=0.3) # demosaicing with gamma correction 0.3
cv2.imwrite("rgb2.jpg", rgb2[:, :, ::-1])
DngFile.save(dng_path + "-save.dng", dng.raw, bit=dng.bit, pattern=dng.pattern)
```
| 14,402
|
43,287,649
|
Let's say this is normal:
```
@api.route('/something', methods=['GET'])
def some_function():
return jsonify([])
```
**Is it possible to use a function that is already defined?**
```
def some_predefined_function():
return jsonify([])
@api.route('/something', methods=['GET'])
some_predefined_function()
```
I tried the above type of syntax but it didn't work, and I'm not a Python guy so I'm not sure if it is silly to want to do this.
|
2017/04/07
|
[
"https://Stackoverflow.com/questions/43287649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/680578/"
] |
There are a few ways to add routes in Flask, and although `@api.route` is the most elegant one, it is not the only one.
Basically a decorator is just a fancy function, you can use it inline like this:
```
api.route('/api/galleries')(some_func)
```
Internally `route` is calling [add\_url\_rule](http://flask.pocoo.org/docs/0.12/api/#flask.Flask.add_url_rule) which you can also use like this:
```
app.add_url_rule('/', 'index', index)
```
You can also just create a wrapper function and use it in the classical decorator way like @bren mentioned.
|
Try this:
```
def some_predefined_function():
return jsonify([])
@api.route('/something', methods=['GET'])
def something():
return some_predefined_function()
```
| 14,403
|
42,077,768
|
the code is
```
for(int i = 0; i < max; i++) {
//do something
}
```
I use this exact code many times when I program, always starting at 0 and using the interval i++. There is really only one variable that changes (max)
It probably could not be made that much shorter, but considering how often this code is used, even a small shortcut would be quite helpful.
I know in python there is the range function which is much quicker to type.
|
2017/02/06
|
[
"https://Stackoverflow.com/questions/42077768",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7525600/"
] |
When looping through collections, you can use enhanced loops:
```
int[] numbers =
{1,2,3,4,5,6,7,8,9,10};
for (int item : numbers) {
System.out.println(item);
}
```
|
Since a few people asked for something like this, here are a few things you could do, although whether these are really better is arguable and a matter of taste:
```
void times(int n, Runnable r) {
for (int i = 0; i < n; i++) {
r.run();
}
}
```
Usage:
```
times(10, () -> System.out.println("Hello, world!"));
```
Or:
```
void times(int n, IntConsumer consumer) {
for (int i = 0; i < n; i++) {
consumer.accept(i);
}
}
```
Usage:
```
times(10, x -> System.out.println(x+1));
```
Or:
```
void range(int lo, int hi, IntConsumer consumer) {
for (int i = lo; i < hi; i++) {
consumer.accept(i);
}
}
```
Usage:
```
range(1, 11, x -> System.out.println(x));
```
These are just a few ideas. Designed thoughtfully, placed in a common library, and used consistently, they could make common, idiomatic code terse yet readable. Used carelessly, they could turn otherwise straightforward code into an unmanageable mess. I doubt any two developers would ever agree on exactly where the lines should be drawn.
| 14,404
|
44,560,051
|
For example, I have a 2D Array with dimensions 3 x 3.
```
[1 2 7
4 5 6
7 8 9]
```
And I want to remove all columns which contain 7 - so first and third, outputting a 3 x 1 matrix of:
```
[2
5
8]
```
How do I go about doing this in python? I want to apply it to a large matrix of n x n dimensions.
Thank You!
|
2017/06/15
|
[
"https://Stackoverflow.com/questions/44560051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8164362/"
] |
```
#Creating array
x = np.array([[1, 2, 7],[4,5, 6],[7,8,9]])
x
Out[]:
array([[1, 2, 7],
[4, 5, 6],
[7, 8, 9]])
#Deletion
a = np.delete(x, np.where(x == 7)[1], axis=1)  # [1] selects the column indices
a
Out[]:
array([[2],
[5],
[8]])
```
|
`numpy` can help you do this!
```
import numpy as np
a = np.array([1, 2, 7, 4, 5, 6, 7, 8, 9]).reshape((3, 3))
b = np.array([col for col in a.T if 7 not in col]).T
print(b)
```
| 14,414
|
58,338,523
|
The [documentation](https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.blockblobservice.blockblobservice?view=azure-python#batch-set-standard-blob-tier-batch-set-blob-tier-sub-requests--timeout-none-) for the batch\_set\_standard\_blob\_tier function part of BlockBlobService in azure python SDK is not clear.
What exactly should be passed in the parameter?
An example would be appreciated.
|
2019/10/11
|
[
"https://Stackoverflow.com/questions/58338523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6603039/"
] |
You need to add `allowSyntheticDefaultImports` to your `tsconfig.json`.
`tsconfig.json`
```
{
    "compilerOptions": {
        "allowSyntheticDefaultImports": true,
        "resolveJsonModule": true
    }
}
```
TS
```
import countries from './countries/es.json';
```
|
Importing the JSON is not a good idea here: if you later move your file to some server, you will need to rewrite the whole logic. I would suggest using an [httpclient](https://angular.io/guide/http) GET call instead. So move your file to the assets folder and then:
```
constructor(private http: HttpClient) {
    this.http.get('assets/yourfilepath').subscribe(data => {
        const countries_keys = Object.keys(data['countries']);
        console.log(data['countries']); // data is your json object here
    });
}
```
| 14,422
|
52,907,038
|
I am looking for a method (if available) that can compare two values and raise an assertion error with a meaningful message when the comparison fails.
If I use `assert`, the failure message does not say which values were compared when the assertion failed.
```
>>> a = 3
>>> b = 4
>>> assert a == b
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AssertionError
>>>
```
If I use the `assertEqual()` method from the `unittest.TestCase` package, the assertion message contains the values that were compared.
```
a = 3
b = 4
> self.assertEqual(a, b)
E AssertionError: 3 != 4
```
Note that, here, the assertion error message contains the values that were compared. That is very useful in real-life scenarios and hence necessary for me. The plain `assert` (see above) does not do that.
However, so far, I could use `assertEqual()` only in a class that inherits from `unittest.TestCase` and provides a few other required methods like `runTest()`. I want to use `assertEqual()` anywhere, not only in such inherited classes. Is that possible?
I tried the following but they did not work.
```
>>> import unittest
>>> unittest.TestCase.assertEqual(a, b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method failUnlessEqual() must be called with TestCase instance as first argument (got int instance instead)
>>>
>>>
>>> tc = unittest.TestCase()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.6/unittest.py", line 215, in __init__
(self.__class__, methodName)
ValueError: no such test method in <class 'unittest.TestCase'>: runTest
>>>
```
Is there any other package or library that offers similar methods like `assertEqual()` that can be easily used without additional constraints?
|
2018/10/20
|
[
"https://Stackoverflow.com/questions/52907038",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/278326/"
] |
You have to give the assertion message by hand:
```
assert a == b, '%s != %s' % (a, b)
# AssertionError: 3 != 4
```
|
Have you looked at numpy.testing?
<https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.testing.html>
Amongst others it has:
`assert_almost_equal(actual, desired[, ...])`, which raises an AssertionError if two items are not equal up to the desired precision.
This assert prints out both actual and desired. If you increase the precision, the comparison gets arbitrarily close to == (for floats).
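For instance, a minimal sketch of how this could be used outside a `unittest.TestCase` (the values below are just illustrative):
```
import numpy as np
from numpy.testing import assert_almost_equal

a = 3.0
b = 4.0
# Raises AssertionError; the failure message includes both the actual
# and the desired value, similar to unittest's assertEqual output.
assert_almost_equal(a, b, decimal=7)
```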
| 14,425
|
69,843,983
|
I mainly use Java and just started with Python, and I ran into this error when I was trying to create a class. Can anyone tell me what is wrong?
```
import rectangle
a = rectangle(4, 5)
print(a.getArea())
```
this is what is in the rectangle class:
```
class rectangle:
    l = 0
    w = 0

    def __init__(self, l, w):
        self.l = l
        self.w = w

    def getArea(self):
        return self.l * self.w

    def getLength(self):
        return self.l

    def getWidth(self):
        return self.w

    def __str__(self):
        return "this is a", self.l, "by", self.w, "rectangle with an area of", self.getArea()
```
|
2021/11/04
|
[
"https://Stackoverflow.com/questions/69843983",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16533352/"
] |
I don't know what you have implemented in the rectangle module but I suspect that what you're actually looking for is this:
```
from rectangle import rectangle
a = rectangle(4, 5)
print(a.getArea())
```
If not, give us an indication of what's in rectangle.py
|
You're going to need to specify which module the class comes from.
So either import all the names from the module,
i.e. change `import rectangle` to `from rectangle import *`,
or switch `rectangle(4,5)` to `rectangle.rectangle(4,5)`.
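For example, keeping the plain `import`, a small sketch of the second option (based on the rectangle class shown in the question):
```
import rectangle

a = rectangle.rectangle(4, 5)  # module name first, then the class inside it
print(a.getArea())
```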
| 14,433
|
32,967,460
|
I have a django site that is deployed in production and I already ran `python manage.py runserver`, so now the server is running on port `8000`.
What I want to do is hit the running server, so I visited `domainname.com:8000`, but I am not getting any response from the server.
Should I be doing something else? Very noob sysadmin here.
Note: Already set `debug=False` and `ALLOWED_HOSTS = ['domain.com']`
|
2015/10/06
|
[
"https://Stackoverflow.com/questions/32967460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1140228/"
] |
>
> I have a django site that is deployed in production and already ran python manage.py runserver
>
>
>
That's not how you deploy Django projects in production, cf <https://docs.djangoproject.com/en/1.8/howto/deployment/>
The builtin dev server is only made for dev - it's unsafe, doesn't handle concurrent requests etc.
|
In addition to @bruno desthuilliers' answer, with which I totally agree: if you nevertheless insist, you have to run the server as:
```
python manage.py runserver 0.0.0.0:8000
```
so that it listens on all interfaces.
Relevant documentation: [django-admin](https://docs.djangoproject.com/en/1.8/ref/django-admin/#runserver-port-or-address-port).
| 14,434
|
61,114,206
|
Recently I created a 24 game solver with python
Read this website if you do not know what the 24 game is:
<https://www.pagat.com/adders/24.html>
Here is the code:
```
from itertools import permutations, product, chain, zip_longest
from fractions import Fraction as F

solutions = []

def ask4():
    num1 = input("Enter First Number: ")
    num2 = input("Enter Second Number: ")
    num3 = input("Enter Third Number: ")
    num4 = input("Enter Fourth Number: ")
    digits = [num1, num2, num3, num4]
    return list(digits)

def solve(digits, solutions):
    digit_length = len(digits)
    expr_length = 2 * digit_length - 1
    digit_perm = sorted(set(permutations(digits)))
    op_comb = list(product('+-*/', repeat=digit_length-1))
    brackets = ([()] + [(x, y)
                        for x in range(0, expr_length, 2)
                        for y in range(x+4, expr_length+2, 2)
                        if (x, y) != (0, expr_length+1)]
                + [(0, 3+1, 4+2, 7+3)])
    for d in digit_perm:
        for ops in op_comb:
            if '/' in ops:
                d2 = [('F(%s)' % i) for i in d]
            else:
                d2 = d
            ex = list(chain.from_iterable(zip_longest(d2, ops, fillvalue='')))
            for b in brackets:
                exp = ex[::]
                for insert_point, bracket in zip(b, '()'*(len(b)//2)):
                    exp.insert(insert_point, bracket)
                txt = ''.join(exp)
                try:
                    num = eval(txt)
                except ZeroDivisionError:
                    continue
                if num == 24:
                    if '/' in ops:
                        exp = [(term if not term.startswith('F(') else term[2])
                               for term in exp]
                    ans = ' '.join(exp).rstrip()
                    print("Solution found:", ans)
                    solutions.extend(ans)
                    return ans
    print("No solution found for:", ' '.join(digits))

def main():
    digits = ask4()
    solve(digits, solutions)
    print(len(solutions))
    print("Bye")

main()
```
Right now, my code only shows one solution for the numbers given, even when there are clearly more solutions.
So if someone knows how to do this please help me
Thanks
|
2020/04/09
|
[
"https://Stackoverflow.com/questions/61114206",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10379866/"
] |
On the Network tab, you must select the POST request, then go to the parameters you are sending and check whether you are sending the data and whether it has the right structure.
This is how it looks in Chrome; that is where you check the data you are sending:
[](https://i.stack.imgur.com/fFC8J.png)
Try modifying your handleFormSubmit
```
handleFormSubmit() {
    let appointment = JSON.stringify({
        title: this.state.title,
        appointment_date: this.state.appointment_date
    });
    $.ajax({
        url: '/appointments',
        type: "POST",
        data: appointment,
        contentType: 'application/json'
    });
}
```
|
Maybe you can try with an axios instance instead of ajax.
| 14,435
|
15,969,213
|
I have recently been working on a pet project in python using flask. It is a simple pastebin with server-side syntax highlighting support with pygments. Because this is a costly task, I delegated the syntax highlighting to a celery task queue and in the request handler I'm waiting for it to finish. Needless to say this does no more than alleviate CPU usage to another worker, because waiting for a result still locks the connection to the webserver.
Despite my instincts telling me to avoid premature optimization like the plague, I still couldn't help myself from looking into async.
**Async**
If you have been following python web development lately, you have surely seen that async is everywhere. What async does is bring back cooperative multitasking, meaning each "thread" decides when and where to yield to another. This non-preemptive process is more efficient than OS threads, but still has its drawbacks. At the moment there seem to be 2 major approaches:
* event/callback style multitasking
* coroutines
The first one provides concurrency through loosely-coupled components executed in an event loop. Although this is safer with respect to race conditions and provides for more consistency, it is considerably less intuitive and harder to code than preemptive multitasking.
The other one is a more traditional solution, closer to threaded programming style, the programmer only having to manually switch context. Although more prone to race-conditions and deadlocks, it provides an easy drop-in solution.
Most async work at the moment is done on what is known as **IO-bound** tasks, tasks that block to wait for input or output. This is usually accomplished through the use of polling and timeout based functions that can be called and if they return negatively, context can be switched.
Despite the name, this could be applied to **CPU-bound** tasks too, which can be delegated to another worker(thread, process, etc) and then non-blockingly waited for to yield. Ideally, these tasks would be written in an async-friendly manner, but realistically this would imply separating code into small enough chunks not to block, preferably without scattering context switches after every line of code. This is especially inconvenient for existing synchronous libraries.
---
Due to the convenience, I settled on using gevent for async work and was wondering how CPU-bound tasks are to be dealt with in an async environment (using futures, celery, etc.?).
How to use async execution models(gevent in this case) with traditional web frameworks such as flask? What are some commonly agreed-upon solutions to these problems in python(futures, task queues)?
**EDIT:** To be more specific - How to use gevent with flask and how to deal with CPU-bound tasks in this context?
**EDIT2:** Considering how Python has the GIL which prevents optimal execution of threaded code, this leaves only the multiprocessing option, in my case at least. This means either using *concurrent.futures* or some other external service dealing with processing(can open the doors for even something language agnostic). What would, in this case, be some popular or often-used solutions with gevent(**i.e.** celery)? - best practices
|
2013/04/12
|
[
"https://Stackoverflow.com/questions/15969213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/492162/"
] |
It should be thread-safe to do something like the following to separate CPU-intensive tasks into asynchronous threads:
```
from threading import Thread

def send_async_email(msg):
    mail.send(msg)

def send_email(subject, sender, recipients, text_body, html_body):
    msg = Message(subject, sender=sender, recipients=recipients)
    msg.body = text_body
    msg.html = html_body
    thr = Thread(target=send_async_email, args=[msg])
    thr.start()
```
If you need something more complicated, then perhaps Flask-Celery or the multiprocessing library's `Pool` might be useful to you.
I'm not too familiar with gevent, though, and I can't imagine what more complexity you might need or why.
I mean, if you're aiming for the efficiency of a major website, then I'd recommend building C++ applications to do your CPU-intensive work, and then using Flask-Celery or `Pool` to run that process (this is what YouTube does when mixing C++ and Python).
|
How about simply using a ThreadPool and a Queue? You can then process your stuff in a separate thread in a synchronous manner and you won't have to worry about blocking at all. That said, Python is not well suited for CPU-bound tasks in the first place, so you should also think about spawning subprocesses.
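A minimal sketch of that idea using a `multiprocessing.Pool` (a process pool sidesteps the GIL for CPU-bound work; `highlight_code` below is just a stand-in for the real Pygments call):
```
from multiprocessing import Pool

def highlight_code(source):
    # stand-in for the CPU-heavy Pygments call
    return source.upper()

def handle_paste(pool, source):
    # The pool worker does the CPU-bound part in another process;
    # result.get() blocks only this handler, not the whole server.
    result = pool.apply_async(highlight_code, (source,))
    return result.get(timeout=10)

if __name__ == '__main__':
    with Pool(processes=4) as pool:
        print(handle_paste(pool, "def f(): pass"))
```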
| 14,436
|
27,260,199
|
So I have this issue where libv8-3.16.14.3 fails to install, even though it deceptively tells you it did install.
So the first sign of an issue was when it did:
```
An error occurred while installing libv8 (3.16.14.3), and Bundler cannot continue.
Make sure that `gem install libv8 -v '3.16.14.3'` succeeds before bundling.
```
During a `bundle install`. So I did some googling and came across [this response](https://stackoverflow.com/a/19674065/4005826) which when run:
```
gem install libv8 -v '3.16.14.3' -- --with-system-v8
Building native extensions with: '--with-system-v8'
This could take a while...
Successfully installed libv8-3.16.14.3
Parsing documentation for libv8-3.16.14.3
Done installing documentation for libv8 after 1 seconds
1 gem installed
```
This leads you to think it worked, but run `bundle install` again and you see the error in question, which is:
```
An error occurred while installing libv8 (3.16.14.3), and Bundler cannot continue.
Make sure that `gem install libv8 -v '3.16.14.3'` succeeds before bundling.
```
The entire trace log can be seen below (caused by running `bundle install`):
```
Gem::Ext::BuildError: ERROR: Failed to build gem native extension.
/Users/Adam/.rvm/rubies/ruby-2.1.5/bin/ruby extconf.rb
creating Makefile
Compiling v8 for x64
Using python 2.7.6
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Unable to find a compiler officially supported by v8.
It is recommended to use GCC v4.4 or higher
Using compiler: g++
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Unable to find a compiler officially supported by v8.
It is recommended to use GCC v4.4 or higher
../src/cached-powers.cc:136:18: error: unused variable 'kCachedPowersLength' [-Werror,-Wunused-const-variable]
static const int kCachedPowersLength = ARRAY_SIZE(kCachedPowers);
^
1 error generated.
make[1]: *** [/Users/Adam/Dropbox/AisisGit/AisisPlatform/.bundle/gems/gems/libv8-3.16.14.3/vendor/v8/out/x64.release/obj.target/preparser_lib/src/cached-powers.o] Error 1
make: *** [x64.release] Error 2
/Users/Adam/Dropbox/AisisGit/AisisPlatform/.bundle/gems/gems/libv8-3.16.14.3/ext/libv8/location.rb:36:in `block in verify_installation!': libv8 did not install properly, expected binary v8 archive '/Users/Adam/Dropbox/AisisGit/AisisPlatform/.bundle/gems/gems/libv8-3.16.14.3/vendor/v8/out/x64.release/obj.target/tools/gyp/libv8_base.a'to exist, but it was not found (Libv8::Location::Vendor::ArchiveNotFound)
from /Users/Adam/Dropbox/AisisGit/AisisPlatform/.bundle/gems/gems/libv8-3.16.14.3/ext/libv8/location.rb:35:in `each'
from /Users/Adam/Dropbox/AisisGit/AisisPlatform/.bundle/gems/gems/libv8-3.16.14.3/ext/libv8/location.rb:35:in `verify_installation!'
from /Users/Adam/Dropbox/AisisGit/AisisPlatform/.bundle/gems/gems/libv8-3.16.14.3/ext/libv8/location.rb:26:in `install!'
from extconf.rb:7:in `<main>'
GYP_GENERATORS=make \
build/gyp/gyp --generator-output="out" build/all.gyp \
-Ibuild/standalone.gypi --depth=. \
-Dv8_target_arch=x64 \
-S.x64 -Dv8_enable_backtrace=1 -Dv8_can_use_vfp2_instructions=true -Darm_fpu=vfpv2 -Dv8_can_use_vfp3_instructions=true -Darm_fpu=vfpv3
CXX(target) /Users/Adam/Dropbox/AisisGit/AisisPlatform/.bundle/gems/gems/libv8-3.16.14.3/vendor/v8/out/x64.release/obj.target/preparser_lib/src/allocation.o
CXX(target) /Users/Adam/Dropbox/AisisGit/AisisPlatform/.bundle/gems/gems/libv8-3.16.14.3/vendor/v8/out/x64.release/obj.target/preparser_lib/src/atomicops_internals_x86_gcc.o
CXX(target) /Users/Adam/Dropbox/AisisGit/AisisPlatform/.bundle/gems/gems/libv8-3.16.14.3/vendor/v8/out/x64.release/obj.target/preparser_lib/src/bignum.o
CXX(target) /Users/Adam/Dropbox/AisisGit/AisisPlatform/.bundle/gems/gems/libv8-3.16.14.3/vendor/v8/out/x64.release/obj.target/preparser_lib/src/bignum-dtoa.o
CXX(target) /Users/Adam/Dropbox/AisisGit/AisisPlatform/.bundle/gems/gems/libv8-3.16.14.3/vendor/v8/out/x64.release/obj.target/preparser_lib/src/cached-powers.o
extconf failed, exit code 1
Gem files will remain installed in /Users/Adam/Dropbox/AisisGit/AisisPlatform/.bundle/gems/gems/libv8-3.16.14.3 for inspection.
Results logged to /Users/Adam/Dropbox/AisisGit/AisisPlatform/.bundle/gems/extensions/x86_64-darwin-14/2.1.0/libv8-3.16.14.3/gem_make.out
An error occurred while installing libv8 (3.16.14.3), and Bundler cannot continue.
Make sure that `gem install libv8 -v '3.16.14.3'` succeeds before bundling.
```
What is going on?
**Note:** I am doing all this on a mac.
|
2014/12/02
|
[
"https://Stackoverflow.com/questions/27260199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4005826/"
] |
I got this to work by first using Homebrew to install V8:
```
$ brew install v8
```
Then running the command you mentioned you found on Google:
```
$ gem install libv8 -v '3.16.14.3' -- --with-system-v8
```
And finally re-running bundle install:
```
$ bundle install
```
|
As others have suggested:
```
$ brew install v8
$ gem install libv8 -v '3.16.14.3' -- --with-system-v8
$ bundle install
```
If that does not work, try running `bundle update` as well; for me, adding `bundle update` was the only way it worked.
| 14,437
|
3,442,920
|
I can't seem to find any information on debugging a python web application, specifically stepping through the execution of a web request.
Is this just not possible? If not, why not?
|
2010/08/09
|
[
"https://Stackoverflow.com/questions/3442920",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39677/"
] |
If you put
```
import pdb
pdb.set_trace()
```
in your code, the web app will drop to a pdb debugger session upon executing `set_trace`.
Also useful, is
```
import code
code.interact(local=locals())
```
which drops you to the python interpreter. Pressing Ctrl-d resumes execution.
Still more useful, is
```
import IPython.Shell
ipshell = IPython.Shell.IPShellEmbed()
ipshell(local_ns=locals())
```
which drops you into an IPython session (assuming you've installed IPython). Here too, pressing Ctrl-d resumes execution.
|
Use the Python debugger: put `import pdb; pdb.set_trace()` exactly where you want to start debugging, and your terminal will pause on that line.
More info here:
<http://plone.org/documentation/kb/using-pdb>
| 14,440
|
17,919,788
|
I tried to run my python scripts using crontab. As the number of my python scripts grows, it is getting hard to manage them in crontab.
Then I tried two python task-scheduling libraries named [Advanced Python Scheduler](http://pythonhosted.org/APScheduler/) and [schedule](https://github.com/dbader/schedule).
The two libraries are quite the same in use, for example:
```
import schedule
import time

def job():
    print("I'm working...")

schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
```
The library uses the `time` module to wait until the exact moment to execute the task.
But the script has to run all the time and consumes tens of megabytes of memory. So I want to ask: is there a better way to handle scheduled jobs with such a library? Thanks.
|
2013/07/29
|
[
"https://Stackoverflow.com/questions/17919788",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/975222/"
] |
In 2013 - when this question was created - there were not as many workflow/scheduler management tools freely available on the market as they are today.
So, writing this answer in 2021, I would suggest using crontab as long as you have only a few scripts on very few machines.
With a growing collection of scripts, or a need for better monitoring/logging or pipelining, you should consider using a dedicated tool for that (like [Airflow](https://airflow.apache.org/), [N8N](https://n8n.io), [Luigi](https://github.com/spotify/luigi) ...)
|
One way is to use [management commands](https://docs.djangoproject.com/en/dev/howto/custom-management-commands/) and set up a crontab to run those. We use that in production and it works really well.
Another is to use something like celery to [schedule tasks](http://docs.celeryproject.org/en/latest/reference/celery.schedules.html).
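For reference, a minimal sketch of such a management command (the app and command names are made up) that a crontab entry could then invoke with `python manage.py run_my_job`:
```
# myapp/management/commands/run_my_job.py  (hypothetical path and name)
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Runs one scheduled job; invoked from crontab instead of a long-running loop."

    def handle(self, *args, **options):
        # the actual work goes here
        self.stdout.write("job done")
```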
| 14,443
|
15,096,667
|
I'm trying to convert code from TCL into python using Tkinter.
I was wondering what would be the equivalent code in Tkinter for
"spawn ssh", "expect", and "send"?
For example, my simple tcl program would be something like:
```
spawn ssh root@138.120.###.###
expect "(yes/no)?" {send -- "yes\r"}
expect "password" {send -- "thepassword\r"}
```
|
2013/02/26
|
[
"https://Stackoverflow.com/questions/15096667",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1988580/"
] |
You could try to use [pexpect](http://www.noah.org/wiki/pexpect).
Expect is used to automate other command line tools.
Edit: Of course you could just try to execute `package require Expect` through Tkinter, but what benefit would that have over a pure Tcl script? After all, you would then be writing Tcl code wrapped in python.
Another Edit: Tkinter is used to get access to Tk (the cool GUI toolkit :P) from python, and it works by calling Tcl commands somewhere down the line. So you can convert EVERY Tcl program to python (if you have the right Tcl libs installed, of course).
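A minimal sketch of the Tcl snippet from the question translated to pexpect (the host string is kept from the question, where the real IP is elided):
```
import pexpect

child = pexpect.spawn('ssh root@138.120.###.###')

# expect() takes regular expressions, so the prompt text is escaped
i = child.expect([r'\(yes/no\)\?', 'password'])
if i == 0:                  # first connection: accept the host key
    child.sendline('yes')
    child.expect('password')
child.sendline('thepassword')
```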
|
[pexpect](http://pexpect.sourceforge.net) has a module called pxssh that takes a lot of the work out of manipulating simple ssh sessions. I found that this wasn't sufficient for heavy automation and wrote [remote](http://gethub.com/Telenav/python-remote) as an add-on to improve error handling.
using pxssh
```
from pxssh import pxssh
p = pxssh()
p.login('host', 'user', 'password')
p.send('command')
```
using remote
```
from remote import remote

r = remote('host', 'user', 'password')
if r.login():
    r.send('command')
```
I'm not sure about the Tkinter use case; my only assumption is that you are using it to craft a GUI, and there are many tutorials for that. I've used <http://sebsauvage.net/python/gui/> before, and it compares both Tkinter and wxPython.
| 14,444
|
22,791,074
|

What function should I use to draw the above performance profile of different algorithms? The running-time data comes from a Python implementation and is stored in a list per algorithm. Is there any built-in function in Python or Matlab to draw this kind of figure automatically? Thanks
|
2014/04/01
|
[
"https://Stackoverflow.com/questions/22791074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3029108/"
] |
You can absolutely have a state without a URL. In fact, none of your states need URLs. That's a core part of the design. Having said that, I wouldn't do what you did above.
If you want two states to have the same URL, create an [abstract parent state](https://github.com/angular-ui/ui-router/wiki/Nested-States-%26-Nested-Views#abstract-states), assign a URL to it, and make the two states children of it (with no URL for either one).
|
To add to the other answer, Multiple Named Views do not use a URL.
From the docs:
>
> If you define a views object, your state's templateUrl, template and
> templateProvider will be ignored. So in the case that you need a
> parent layout of these views, you can define an abstract state that
> contains a template, and a child state under the layout state that
> contains the 'views' object.
>
>
>
The reason for using named views is that you can have more than one ui-view per template, or in other words multiple views inside a single state. This way, you can change parts of your site using your routing even if the URL does not change, and you can also reuse data in different templates, because each view is a component with its own controller and view.
See [Angular Routing using ui-router](https://scotch.io/tutorials/angular-routing-using-ui-router) for an in-depth explanation with examples.
| 14,445
|
25,185,015
|
I am using the python requests module for HTTP communication, and I am setting a proxy before doing any communication.
```
import requests
proxy = {'http': 'xxx.xxx.xxx.xxx:port'}
OR
proxy = {'http': 'http://xxx.xxx.xxx.xxx:port'}
OR
proxy = {'http://xxx.xxx.xxx.xxx:port'}
requests.get(url, proxies = proxy)
```
I am using the above code to add the proxy to the request. But it seems like the proxy is not working: the requests module is using my network IP to fire the request.
Is there any bug or known issue with the requests module, or is there anything I am missing?
|
2014/08/07
|
[
"https://Stackoverflow.com/questions/25185015",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1000683/"
] |
Try this:
```
proxy = {'http': 'http://xxx.xxx.xxx.xxx:port'}
```
I guess you just missed the `http://` in the value of the proxy dict.
Check: <http://docs.python-requests.org/en/latest/user/advanced/#proxies>
|
[Documentation](http://docs.python-requests.org/en/latest/user/advanced/#proxies) says:
>
> If you need to use a proxy, you can configure individual requests with
> the proxies argument to any request method:
>
>
>
```
import requests
proxies = {"http": "http://10.10.1.10:3128"}
requests.get("http://example.org", proxies=proxies)
```
Here proxies["http"] = "`http://xxx.xxx.xxx.xxx:port`". It seems you lack **http://**
| 14,446
|
61,226,690
|
I am relatively new to Python, so please pardon my ignorance. I want to know the answers to the following questions:
1. How does pip know the location to install packages to? After a bit of trial and error,
I suspect that it may be hardcoded at installation time.
2. Are executables like pip.exe what they call frozen binaries? In essence, does it mean that pip.exe will run without python? Again, after a bit of trial and error, I suspect that it requires a python installation to execute.
P.S: I know about sys.prefix, sys.executable and sys.exec\_prefix. If there is anything else on which these questions depend, please link me to it.
|
2020/04/15
|
[
"https://Stackoverflow.com/questions/61226690",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10796482/"
] |
As I understand it, you only want to save the user input if it contains text,
so you have to strip the HTML from the user input and then check the length of what remains:
```
var regex = /(<([^>]+)>)/ig
body = "<p>test</p>"
hasText = !!body.replace(regex, "").length;
if(hasText) save()
```
|
This worked for me; it avoids treating content that only contains images as empty.
```
function isQuillEmpty(value: string) {
    if (value.replace(/<(.|\n)*?>/g, '').trim().length === 0 && !value.includes("<img")) {
        return true;
    }
    return false;
}
```
| 14,447
|
37,308,794
|
I'm new to this and trying to deploy a first app to App Engine. However, when I try to, I get this message:
"This application does not exist (app\_id=u'udacity')."
I fear it might have to do with the app.yaml file, so I'll just leave here what I have there:
```
application: udacity
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:
- url: /favicon.ico
  static_files: favicon.ico
  upload: favicon.ico

- url: /.*
  script: main.app

libraries:
- name: webapp2
  version: "2.5.2"
```
Thanks in advance.
|
2016/05/18
|
[
"https://Stackoverflow.com/questions/37308794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6002144/"
] |
Blocks are normal objects, so you can store them in an NSArray/NSDictionary. That said, the implementation is straightforward.
```
#import <Foundation/Foundation.h>
/** LBNotificationCenter.h */
typedef void (^Observer)(NSString *name, id data);
@interface LBNotificationCenter : NSObject
- (void)addObserverForName:(NSString *)name block:(Observer)block;
- (void)removeObserverForName:(NSString *)name block:(Observer)block;
- (void)postNotification:(NSString *)name data:(id)data;
@end
/** LBNotificationCenter.m */
@interface LBNotificationCenter ()
@property (strong, nonatomic) NSMutableDictionary <id, NSMutableArray <Observer> *> *observers;
@end
@implementation LBNotificationCenter
- (instancetype)init
{
self = [super init];
if (self) {
_observers = [NSMutableDictionary new];
}
return self;
}
- (void)addObserverForName:(NSString *)name block:(Observer)block
{
// check name and block for presence...
NSMutableArray *nameObservers = self.observers[name];
if (nameObservers == nil) {
nameObservers = (self.observers[name] = [NSMutableArray new]);
}
[nameObservers addObject:block];
}
- (void)removeObserverForName:(NSString *)name block:(Observer)block
{
// check name and block for presence...
NSMutableArray *nameObservers = self.observers[name];
// Some people might argue that this check is not needed
// as Objective-C allows messaging nil
// I prefer to keep it explicit
if (nameObservers == nil) {
return;
}
[nameObservers removeObject:block];
}
- (void)postNotification:(NSString *)name data:(id)data
{
// check name and data for presence...
NSMutableArray *nameObservers = self.observers[name];
if (nameObservers == nil) {
return;
}
for (Observer observer in nameObservers) {
observer(name, data);
}
}
@end
int main(int argc, const char * argv[]) {
@autoreleasepool {
NSString *const Notification1 = @"Notification1";
NSString *const Notification2 = @"Notification2";
LBNotificationCenter *notificationCenter = [LBNotificationCenter new];
Observer observer1 = ^(NSString *name, id data) {
NSLog(@"Observer1 is called for name: %@ with some data: %@", name, data);
};
Observer observer2 = ^(NSString *name, id data) {
NSLog(@"Observer2 is called for name: %@ with some data: %@", name, data);
};
[notificationCenter addObserverForName:Notification1 block:observer1];
[notificationCenter addObserverForName:Notification2 block:observer2];
[notificationCenter postNotification:Notification1 data:@"Some data"];
[notificationCenter postNotification:Notification2 data:@"Some data"];
[notificationCenter removeObserverForName:Notification1 block:observer1];
// no observer is listening at this point so no logs for Notification1...
[notificationCenter postNotification:Notification1 data:@"Some data"];
[notificationCenter postNotification:Notification2 data:@"Some data"];
}
return 0;
}
```
|
Why don't you just add the blocks into the `NSDictionary` ? You can do it like it's explained [in this answer](https://stackoverflow.com/questions/6364648/keep-blocks-inside-a-dictionary)
| 14,448
|
38,435,845
|
I am a newbie to elasticsearch. I know there are two official clients that elasticsearch supplies, but when I use the python elasticsearch client, I can't find how to use the transport client.
I read the whole doc, which is the following:
```
https://elasticsearch-py.readthedocs.io/en/master/index.html
```
I also searched some other docs, but I can't find a way to do it with python. Also, one doc says:
>
> Using the native protocol from anything other than Java is not
> recommended, as it would entail implementing a lot of custom
> serialization.
>
>
>
Does this mean the python elasticsearch client can't use the transport client?
|
2016/07/18
|
[
"https://Stackoverflow.com/questions/38435845",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6114947/"
] |
The transport client is written in Java, so the only way to use it from Python is by switching to Jython.
|
I think the previous answer is out of date now, if this is [the transport client you mean](https://elasticsearch-py.readthedocs.io/en/master/transports.html).
I've made use of this API to do things like use the [\_rank\_eval](https://www.elastic.co/guide/en/elasticsearch/reference/6.7/search-rank-eval.html) API, which is still considered "experimental" so hasn't made it into the official client yet.
```
def rank_eval(self, query, ratings, metric_name):
    res = self.es.transport.perform_request(
        "GET",
        "/%s/_rank_eval" % INDEX,
        body=self.rank_request(query, ratings, metric_name),
    )
    return res
```
| 14,449
|
56,415,470
|
IPython 7.5 documentation states:
>
> Change to Nested Embed
>
>
> The introduction of the ability to run async code had some effect on the IPython.embed() API. By default, embed
> will not allow you to run asynchronous code unless an event loop is specified.
>
>
>
However, there seems to be no description in the documentation of how to specify the event loop.
Running:
```py
import IPython
IPython.embed()
```
and then
```
In [1]: %autoawait on
In [2]: %autoawait
IPython autoawait is `on`, and set to use `<function _pseudo_sync_runner at 0x00000000066DEF28>`
In [3]: import asyncio
In [4]: await asyncio.sleep(1)
```
Gives:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
~\Envs\[redacted]\lib\site-packages\IPython\core\async_helpers.py in _pseudo_sync_runner(coro)
71 # TODO: do not raise but return an execution result with the right info.
72 raise RuntimeError(
---> 73 "{coro_name!r} needs a real async loop".format(coro_name=coro.__name__)
74 )
75
RuntimeError: 'run_cell_async' needs a real async loop
```
---
On the other hand, running:
```py
import IPython
IPython.embed(using='asyncio')
```
and then:
```
In [1]: %autoawait on
In [2]: %autoawait
IPython autoawait is `on`, and set to use `asyncio`
In [3]: import asyncio
In [4]: await asyncio.sleep(1)
```
Gives:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
~\Envs\[redacted]\lib\site-packages\IPython\core\async_helpers.py in __call__(self, coro)
25 import asyncio
26
---> 27 return asyncio.get_event_loop().run_until_complete(coro)
28
29 def __str__(self):
c:\python35-64\Lib\asyncio\base_events.py in run_until_complete(self, future)
452 future.add_done_callback(_run_until_complete_cb)
453 try:
--> 454 self.run_forever()
455 except:
456 if new_task and future.done() and not future.cancelled():
c:\python35-64\Lib\asyncio\base_events.py in run_forever(self)
406 self._check_closed()
407 if self.is_running():
--> 408 raise RuntimeError('This event loop is already running')
409 if events._get_running_loop() is not None:
410 raise RuntimeError(
RuntimeError: This event loop is already running
```
|
2019/06/02
|
[
"https://Stackoverflow.com/questions/56415470",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6543759/"
] |
```
from IPython import embed
import nest_asyncio
nest_asyncio.apply()
```
Then `embed(using='asyncio')` might somewhat work. I don't know why they don't give us a real solution though.
|
This seems to be possible, but you have to use `ipykernel.embed.embed_kernel()` instead of `IPython.embed()`.
`ipykernel` requires you to connect to the embedded kernel remotely from a separate jupyter console though, so it is not as convenient as just spawning the shell in the same window, but at least it seems to work.
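A minimal sketch of that approach (assuming `ipykernel` is installed); once it starts, you attach to it from another terminal with `jupyter console --existing`:
```
from ipykernel.embed import embed_kernel

def handler():
    value = 42
    # Blocks here and serves a kernel that can see the local namespace;
    # connect from another terminal with: jupyter console --existing
    embed_kernel(local_ns=locals())

handler()
```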
| 14,450
|
62,175,337
|
I want to fetch the next 5 records after the specific index.
For example, this is my dataframe:
```
Id Name code
1 java 45
2 python 78
3 c 65
4 c++ 25
5 html 74
6 css 63
7 javascript 45
8 php 44
9 Ajax 88
10 jQuery 92
```
When I provide the index value `3`, the code must fetch the next 5 values starting from `3`. So the result should look like:
```
Id Name code
3 c 65
4 c++ 25
5 html 74
6 css 63
7 javascript 45
```
I do not understand how to do this. My code is not working as I want it to.
I am using this code for fetching the next 5 records:
```
data = df.iloc[df.index.get_loc(indexid):+5]
```
|
2020/06/03
|
[
"https://Stackoverflow.com/questions/62175337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10971762/"
] |
Set up the properties of the file attached to the solution; basically, you need to make sure the file is included in the solution output:
1. Set the `Build Action` property to `Content`.
2. Set the `Copy to the Output Directory` property to `Copy Always`.
For example, if the file is added to the project and you select it in the Solution Explorer and go to the Properties window you may see the following:
[](https://i.stack.imgur.com/hPVOV.png)
It will be added automatically to the output folder along with other add-in files. So, you will just have to rebuild the installer based on your output.
See [Deploy an Office solution by using Windows Installer](https://learn.microsoft.com/en-us/visualstudio/vsto/deploying-an-office-solution-by-using-windows-installer?view=vs-2019) for more information.
|
You really need to add that utility to your installer project.
Or you can embed the utility as a resource in your dll, extract it at run-time, copy to some folder, and execute.
| 14,451
|
62,579,298
|
I have a data frame that looks like this:
```
name Title
abc 'Tech support'
xyz 'UX designer'
ghj 'Manager IT'
... ....
```
I want to iterate through the data frame and using `df.str.contains` make another column that will categorize those jobs. There are 8 categories.
The output will be :
```
name Title category
abc 'Tech support' 'Support'
xyz 'UX designer' 'Design'
ghj 'Manager IT' 'Management'
... .... ....
```
here's what I've tried so far:
```
for i in range(len(df)):
    if df.Title[i].str.contains("Support"):
        df.category[i]=="Support"
    elif df.Title[i].str.contains("designer"):
        df.category[i]=="Design"
    else df.Title[i].str.contains("Manager"):
        df.category[i]=="Management"
```
Of course, I'm a noob at programming, and this throws the error:
```
File "<ipython-input-29-d9457f9cb172>", line 6
else df.Title[i].str.contains("Manager"):
^
SyntaxError: invalid syntax
```
|
2020/06/25
|
[
"https://Stackoverflow.com/questions/62579298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12724372/"
] |
You can do something like this:
```
cat_dict = {"Support":"Support", "designer":"Designer", "Manager": "Management"}
df['category'] = (df['Title'].str.extract(fr"\b({'|'.join(cat_dict.keys())})\b")[0]
.map(cat_dict)
)
```
|
The general syntax of a python if statement is:
```
if test expression:
    Body of if
elif test expression:
    Body of elif
else:
    Body of else
```
As you can see in the syntax, to evaluate a *test expression*, it should be in the *if* or in an *elif* construct. The code throws the syntax error because a test expression is placed in the *else* construct. Consider changing the last *else* to *elif* and adding a fallback case like:
```
else:
    df.category[i]=="Others"
```
| 14,453
|
57,309,209
|
I am working on a data frame with DateTimeIndex of hourly temperature data spanning a couple of years. I want to add a column with the minimum temperature between 20:00 of a day and 8:00 of the *following* day. Daytime temperatures - from 8:00 to 20:00 - are not of interest. The result can either be at the same hourly resolution of the original data or be resampled to days.
I have researched a number of strategies to solve this, but am unsure about the most efficienct (in terms of primarily coding efficiency and secondary computing efficiency) respectively pythonic way to do this. Some of the possibilities I have come up with:
1. Attach a column with labels 'day', 'night' depending on `df.index.hour` and use `group_by` or `df.loc` to find the minimum
2. Resample to 12h and drop every second value. Not sure how I can make the resampling period start at 20:00.
3. Add a multi-index - I guess this is similar to approach 1, but feels a bit over the top for what I'm trying to achieve.
4. Use `df.between_time` (<https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.between_time.html#pandas.DataFrame.between_time>) though I'm not sure if the date change over midnight will make this a bit messy.
5. Lastly there is some discussion about combining rolling with a stepping parameter as new pandas feature: <https://github.com/pandas-dev/pandas/issues/15354>
Original df looks like this:
```
datetime temp
2009-07-01 01:00:00 17.16
2009-07-01 02:00:00 16.64
2009-07-01 03:00:00 16.21 #<-- minimum for the night 2009-06-30 (previous date since periods starts 2009-06-30 20:00)
... ...
2019-06-24 22:00:00 14.03 #<-- minimum for the night 2019-06-24
2019-06-24 23:00:00 18.87
2019-06-25 00:00:00 17.85
2019-06-25 01:00:00 17.25
```
I want to get something like this (min temp from day 20:00 to day+1 8:00):
```
datetime temp
2009-06-30 23:00:00 16.21
2009-07-01 00:00:00 16.21
2009-07-01 01:00:00 16.21
2009-07-01 02:00:00 16.21
2009-07-01 03:00:00 16.21
... ...
2019-06-24 22:00:00 14.03
2019-06-24 23:00:00 14.03
2019-06-25 00:00:00 14.03
2019-06-25 01:00:00 14.03
```
or a bit more succinct:
```
datetime temp
2009-06-30 16.21
... ...
2019-06-24 14.03
```
|
2019/08/01
|
[
"https://Stackoverflow.com/questions/57309209",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7057547/"
] |
Use the `base` option to `resample`:
```
rs = df.resample('12h', base=8).min()
```
Then keep only the rows for 20:00:
```
rs[rs.index.hour == 20]
```
|
You can use a time-based `Grouper` with `freq='12h'` and `base=8` to chunk the dataframe into windows running from 20:00 to 08:00 of the next day,
then you can just use `.min()`.
try this:
```py
import pandas as pd
from io import StringIO
s = """
datetime temp
2009-07-01 01:00:00 17.16
2009-07-01 02:00:00 16.64
2009-07-01 03:00:00 16.21
2019-06-24 22:00:00 14.03
2019-06-24 23:00:00 18.87
2019-06-25 00:00:00 17.85
2019-06-25 01:00:00 17.25"""
df = pd.read_csv(StringIO(s), sep="\s\s+")
df['datetime'] = pd.to_datetime(df['datetime'])
result = df.sort_values('datetime').groupby(pd.Grouper(freq='12h', base=8, key='datetime')).min()['temp'].dropna()
print(result)
```
Output:
```
datetime
2009-06-30 20:00:00 16.21
2019-06-24 20:00:00 14.03
Name: temp, dtype: float64
```
| 14,456
|
54,403,437
|
I want to script with Python using Notepad++, but it works strangely; actually, it does not work. I have PyCharm and everything goes well there, but in Notepad++ when I save a file as .py and click run, it does not work. Is there a step-by-step instruction to follow?
I have the same problem with the Sublime Text editor, so I am only having luck with PyCharm; all of the others have confused me. Please help me.
|
2019/01/28
|
[
"https://Stackoverflow.com/questions/54403437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10979398/"
] |
You call `setPreferredSize` twice, which results in the first call doing basically nothing. That means you always have a `preferredSize` equal to the dimensions of the second image. What you *should* do is to set the size to `new Dimension(image.getWidth() + image2.getWidth(), image2.getHeight())` assuming both have the same height. If that is not the case set the `height` as the maximum of both images.
Secondly you need to offset the second image from the first image exactly by the width of the first image:
```
g2d.drawImage(image, 0, 0, this);
g2d.drawImage(image2, image.getWidth(), 0, this);
```
|
I found some errors in your code, and I did not quite get what you are trying to do...
1] Here you are actually not using the first setup:
```
Dimension dimension = new Dimension(image.getWidth(), image.getHeight());
setPreferredSize(dimension); //not used
Dimension dimension2 = new Dimension(image2.getWidth(), image2.getHeight());
setPreferredSize(dimension2); //because overridden by this
```
It means the panel has the same dimensions as `image2`; you should set it as follows:
* height as the max of the heights of both images
* width at least the sum of the widths of both pictures (if you want to paint them in the same panel, as you are trying)
2] What are the `image` and `image2` data types? In the block above you have `File`, but with differently named variables, and the `File` class of course doesn't have width or height.
I am assuming it's [Image](https://docs.oracle.com/javase/7/docs/api/java/awt/Image.html), due to its usage in `Graphics.drawImage`. Then:
You need to **set up the preferred size** as I mentioned:
* height to the max value of the heights of the images
* width at least the sum of the widths
Dimension setup:
```
Dimension panelDim = new Dimension(image.getWidth() + image2.getWidth(),Math.max(image.getHeight(),image2.getHeight()));
setPreferredSize(panelDim)
```
Then you can **draw the images at their original size**.
*Coordinates have 0;0 in the top-left corner, and the bottom-right is this.getWidth(); this.getHeight()*
- check e.g. [this explanation](https://docs.oracle.com/javase/tutorial/2d/overview/coordinate.html)
- you paint the first image at the origin and then move to the correct position for the second one by increasing "X" by the width of the first image
```
@Override
public void paintComponent(Graphics g) {
    Graphics2D g2d = (Graphics2D) g;
    /* public abstract boolean drawImage(Image img,
            int x,
            int y,
            Color bgcolor,
            ImageObserver observer)
    */
    // start to paint at [0;0]
    g2d.drawImage(image, 0, 0, this);
    // start to paint just on the right side of the first image
    // (offset equal to the width of the first picture)
    g2d.drawImage(image2, image.getWidth() + 1, 0, this);
}
```
I didn't have a chance to test it, but it should be like this.
The important things are:
* you need to have proper dimensions to fit both images
* screen coordinates start at [0;0] in the top-left corner of the screen
| 14,457
|
44,857,219
|
When I use the command "pip install pyperclip" it gives me this error
```
creating /Library/Python/2.7/site-packages/pyperclip
error: could not create '/Library/Python/2.7/site-packages/pyperclip': Permission denied
----------------------------------------
Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/ts/tdt25dd52pg6ymt1tc1djd540000gn/T/pip-build-QWGKB1/pyperclip/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /var/folders/ts/tdt25dd52pg6ymt1tc1djd540000gn/T/pip-sfvxg3-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/ts/tdt25dd52pg6ymt1tc1djd540000gn/T/pip-build-QWGKB1/pyperclip/
```
Why is it that I do not have permission?
|
2017/07/01
|
[
"https://Stackoverflow.com/questions/44857219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4081977/"
] |
Whenever you get a `Permission denied` error, it's because you are trying to write to a root-owned location with normal user privileges. So try running the command as root to get rid of the `Permission denied` error: run `sudo <command>`, i.e. `sudo pip install pyperclip`.
|
Try it with
>
> sudo pip install pyperclip
>
>
>
Now if it throws some access denied error, try this:
>
> sudo -H pip install pyperclip
>
>
>
| 14,461
|
7,238,401
|
I've found similar but not identical questions [742371](https://stackoverflow.com/questions/742371/python-strange-behavior-in-for-loop-or-lists) and [4081217](https://stackoverflow.com/questions/4081217/how-to-modify-list-entries-during-for-loop) with great answers, but haven't come to a solution to my problem.
I'm trying to process items in a list in place while it's being looped over, and re-loop over what's remaining in the list if it has not met a conditional. The conditional will eventually be met as True for all items in the list, but not necessarily on a "known" iteration. It reminds me of building a tree to some degree, as certain items in the list must be processed before others, but the others may be looped over beforehand.
My first instinct is to create a recursive function and edit a slice copy of the list. However I'm having little luck ~
I won't initially know how many passes it will take, but it can never be more passes than there are elements in the list... since by nature at least one element will meet the conditional as True on every pass.
Ideally ... the result would be something like the following
```
# initial list
myList = ['it1', 'test', 'blah', 10]
newList = []
# first pass
newList = ['test']
# 2nd pass
newList = ['test', 'blah', 10]
# 3rd pass
newList = ['test', 'blah', 10, 'it1']
```
|
2011/08/30
|
[
"https://Stackoverflow.com/questions/7238401",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/563343/"
] |
How about something like this (just made up a silly condition so I could test it):
```
import random

myList = ['it1', 'test', 'blah', 10]
newList = []

def someCondition(var):
    return random.randrange(0, 2) == 0

def test():
    while len(myList) > 0:
        pos = 0
        while pos < len(myList):
            if someCondition(myList[pos]):  # with someCondition being a function here
                newList.append(myList.pop(pos))
            else:
                pos += 1

if __name__ == '__main__':
    test()
    print(myList)
    print(newList)
```
[Result:]
```
[]
['it1', 10, 'blah', 'test']
```
|
A brute force approach would be to create a temporary list of booleans the same size as your original list initialized to `False` everywhere.
In each pass, whenever the item at index `i` of the original list meets the condition, update the value in the temporary array at index `i` to True.
In each subsequent pass, look only at the values where the corresponding index is `False`. Stop when all values have become `True`.
Grr, come to think of it, keep a `set` of indexes that have met the condition. Yes, sets are better than arrays of booleans.
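A small sketch of the set-of-indexes idea (condition_met and process are placeholders for whatever the real logic is; per the question, every element eventually meets the condition, otherwise this would loop forever):
```
def process_all(items, condition_met, process):
    done = set()
    while len(done) < len(items):
        for i, item in enumerate(items):
            if i in done:
                continue
            if condition_met(item):
                process(item)
                done.add(i)
    return [items[i] for i in sorted(done)]
```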
| 14,464
|
65,063,178
|
I am trying to create a hangman game using python in VS Code. I imported pygame and now it won't let me do pygame.init(). I looked at other posts here and tried their suggestions, but I'm not sure why it is not working. Other posts said to go to settings.json and add
```
{ "python.linting.pylintArgs": [
"--extension-pkg-whitelist=lxml" // The extension is "lxml" not "1xml"
]
}
{"python.linting.pylintArgs": [
"--unsafe-load-any-extension=y"
]
}
```
[enter image description here](https://i.stack.imgur.com/RAvTN.png)
[enter image description here](https://i.stack.imgur.com/Jc8Ps.png)
|
2020/11/29
|
[
"https://Stackoverflow.com/questions/65063178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14730665/"
] |
1. If you want to turn off Pylint notifications via settings, please use the following format:
>
>
> ```
> "python.linting.pylintArgs":
> [
> "--extension-pkg-whitelist=lxml", // The extension is "lxml" not "1xml"
> "--unsafe-load-any-extension=y"
> ],
>
> ```
>
>
[](https://i.stack.imgur.com/fBLXp.png)
In addition, using this method will turn off all Pylint information.
2. It is recommended that you use the following settings to turn off "[no-member](https://github.com/janjur/readable-pylint-messages/blob/master/README.md#e1101---s-r-has-no-r-members)" notifications once the code can be executed:
>
>
> ```
> "python.linting.pylintArgs": [
> "--disable=E1101"
> ],
>
> ```
>
>
[](https://i.stack.imgur.com/KPbY4.png)
|
VS Code has launched a new Python language server, `pylance`; install it from the extensions marketplace and then change your language server to Pylance.
You will get features like auto-import, auto-completion, linting, debugging, etc.
Also check in which environment you have installed pygame; switch the Python interpreter using Ctrl+Shift+P.
| 14,467
|
66,321,777
|
I have a Flask server that will fetch keys from Redis. How would I maintain a list of already deleted keys? Every time I delete a key, it needs to be added to this list within Redis. Every time I try to check for a key in the cache, if it cannot be found I want to check if it is already deleted. This would allow me to distinguish in my API between requests for unknown keys and requests for deleted keys.
I am using the standard redis python library.
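A minimal sketch of the pattern described above, assuming the standard `redis` client and a Redis set named `deleted_keys` (both names are just illustrative):
```
import redis

r = redis.Redis()

def delete_key(key):
    r.delete(key)
    r.sadd('deleted_keys', key)   # remember that this key once existed

def lookup(key):
    value = r.get(key)
    if value is not None:
        return 'found', value
    if r.sismember('deleted_keys', key):
        return 'deleted', None
    return 'unknown', None
```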
|
2021/02/22
|
[
"https://Stackoverflow.com/questions/66321777",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/127251/"
] |
I think you should separate the name and the number into different attributes:
```
name | number | id
```
and the SQL query should be something like this:
```
select ...
group by name, number;
```
depending on what you want to do.
Is it something like this?
|
The grouping SQL statement will be
```
select id, name from commande_ligne group by name order by name
```
The outcome should show:
product 78: ids 5, 6, 11
Product 12: ids 14, 15
That's the result that you will get.
| 14,468
|
67,108,896
|
I am using python for webscraping (new to this) and am trying to grab the brand name from a website. It is not visible on the website but I have found the element for it:
`<span itemprop="Brand" style="display:none;">Revlon</span>`
I want to extract the "Revlon" text in the HTML. I am currently using html requests and have tried grabbing the selector (CSS) and text:
`brandname = r.html.find('body > div:nth-child(96) > span:nth-child(2)', first=True).text.strip()`
but this returns `None` and an error. I am not sure how to extract this specifically. Any help would be appreciated.
|
2021/04/15
|
[
"https://Stackoverflow.com/questions/67108896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11472545/"
] |
Here is a working solution with Selenium:
```
from seleniumwire import webdriver
from webdriver_manager.chrome import ChromeDriverManager
driver = webdriver.Chrome(ChromeDriverManager().install())
website = 'https://www.boots.com/revlon-colorstay-makeup-for-normal-dry-skin-10212694'
driver.get(website)
brand_name = driver.find_element_by_xpath('//*[@id="estore_product_title"]/h1')
print('brand name: '+brand_name.text.split(' ')[0])
```
**You can also use beautifulsoup for that:**
```
from bs4 import BeautifulSoup
import requests
urlpage = 'https://www.boots.com/revlon-colorstay-makeup-for-normal-dry-skin-10212694'
# query the website and return the html to the variable 'page'
page = requests.get(urlpage)
# parse the html using beautiful soup and store in variable 'soup'
soup = BeautifulSoup(page.content, 'html.parser')
name = soup.find(id='estore_product_title')
print(name.text.split(' ')[0])
```
|
Try this method: `.find("span", itemprop="Brand")`.
I think it works:
```
from bs4 import BeautifulSoup
import requests
urlpage = 'https://www.boots.com/revlon-colorstay-makeup-for-normal-dry-skin-10212694'
page = requests.get(urlpage)
# parse the html using beautiful soup and store in variable 'soup'
soup = BeautifulSoup(page.content, 'html.parser')
print(soup.find("span", itemprop="Brand").text)
```
| 14,473
|
56,613,286
|
Say that I have a list of strings, such as
```
listStrings = [ 'cat', 'bat', 'hat', 'dad', 'look', 'ball', 'hero', 'up']
```
Is there a way to return all rows where a particular column contains 3 or more of the strings from the list?
For example
If the column contained 'My dad is a hero for saving the cat'
Then the row would be returned.
But if the column only contained 'the cat and bat teamed up to find some food'
That row wouldn't be returned.
The only way I can think of is to get every combination of 3 from the list of strings, and use AND statements. e.g. 'cat' AND 'bat' AND 'hat'.
But this doesn't seem computationally efficient nor pythonic.
Is there a more efficient, compact way to do this?
Edit
Here is a pandas example
```
import pandas as pd
listStrings = [ 'cat', 'bat', 'hat', 'dad', 'look', 'ball', 'hero', 'up']
df = pd.DataFrame(['test1', 'test2', 'test3'], ['My dad is a hero for saving the cat', 'the cat and bat teamed up to find some food', 'The dog found a bowl'])
df.head()
0
My dad is a hero for saving the cat test1
the cat and bat teamed up to find some food test2
The dog found a bowl test3
```
So using the `listStrings`, I would like row 1 returned, but not row 2 or row 3.
|
2019/06/15
|
[
"https://Stackoverflow.com/questions/56613286",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3259896/"
] |
You could use something like this:
```
$reviews = Review::orderBy('created_at', 'DESC')
    ->orWhere('created_at', '>=', now()->subDays($request->review == 7 ? 7 : 30))
    ->where('listing_id', $id)
    ->where('user_id', Auth::user()->id)
    ->paginate(6, ['*'], 'review');
```
but keep in mind you might still get wrong results.
You should make sure the brackets are added to the query in the correct place; otherwise you might get other users' results. So it's quite possible that in fact you would like to use something like this:
```
$reviews = Review::orderBy('created_at', 'DESC')
    ->where(function ($q) use ($request, $id) {
        $q->where('listing_id', $id)
          ->orWhere('created_at', '>=', now()->subDays($request->review == 7 ? 7 : 30));
    })
    ->where('user_id', Auth::user()->id)
    ->paginate(6, ['*'], 'review');
```
but you haven't explained exactly what results you want to get from the database, so this is just a hint.
|
You can build up the Eloquent query in steps using an intermediate builder variable:
```php
$cursor = Review::orderByDesc('created_at');
if ($request->review == "7") {
$cursor->orWhere('created_at', '>=', now()->subDays(7));
}
$reviews = $cursor->where('listing_id', $id)
->where('user_id', Auth::user()->id)
->paginate(6, ['*'], 'review');
```
| 14,474
|
21,449,085
|
I may get slammed because this question is too broad, but anyway I going to ask cause what else do I do? Digging through the Python source code should surely give me enough "good effort" points to warrant helping me?
I am trying to use Python 3.4's new email content manager <http://docs.python.org/dev/library/email.contentmanager.html#content-manager-instances>
It is my understanding that this should allow me to read an email message, then be able to access all the email header fields and body as UTF-8, without going through the painful process of decoding from whatever weird encoding back into clean UTF-8. I understand it also handles parsing of date headers and email address headers. Generally making life easier for reading emails in Python. Great stuff, very interesting.
However I am a beginner programmer - there are no examples in the current documentation of how to start from the start. I need a simple example showing how to read an email file and, using the new email content manager, read back the header fields, address fields and body.
I have dug into the Python 3.4 source code and looked at the tests for the email content manager. I will admit to being sufficiently amateurish that I was too confused to be able to glean enough from the tests to start writing my own simple example.
So, is anyone willing to help with a simple example of how to use the Python 3.4 email content manager to read the header fields and body and address fields of an email?
thanks
|
2014/01/30
|
[
"https://Stackoverflow.com/questions/21449085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/627492/"
] |
First: the "address fields" in an email are in fact simply headers whose names have been agreed upon in standards, like `To` and `From`. So all you need are the email headers and body and you are done.
Given a modern `contentmanager`-powered `EmailMessage` instance such as Python 3.4 returns if you specify a policy (like `default`) when reading in an email message, you can access its auto-decoded headers by treating it like a Python dictionary, and its body with the `get_body()` call. Here is an example script I wrote that does both maneuvers in a safe and standard way:
<https://github.com/brandon-rhodes/fopnp/blob/m/py3/chapter12/display_email.py>
Behind the scenes, the policy is what is really in charge of what happens to both headers and content, with the `default` policy automatically subjecting headers to the encoding and decoding functions in `email.utils`, and content to the logic you asked about that is inside of `contentmanager`.
But as the caller you usually will not need to know the behind-the-scenes magic, because headers will "just work" and content can be easily accessed through the methods illustrated in the above script.
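As a rough sketch of that (not taken from the linked script; the filename is a placeholder), parsing with the `default` policy and then using the dictionary-style headers and `get_body()`:
```
from email import policy
from email.parser import BytesParser

# Parsing with policy=default yields an EmailMessage with the new behaviour.
with open('message.eml', 'rb') as f:
    msg = BytesParser(policy=policy.default).parse(f)

print(msg['From'])      # headers act like dictionary entries, already decoded
print(msg['Subject'])

body = msg.get_body(preferencelist=('plain', 'html'))
if body is not None:
    print(body.get_content())   # decoded text of the selected body part
```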
|
If you have an email in a file and want to read it into Python, it's the [`email.Parser` you should probably look at](http://docs.python.org/dev/library/email.parser.html#parser-class-api) first. [Like Brandon](https://stackoverflow.com/a/22697432/923794), I don't quite see the need for using the `contentmanager`, but maybe your question *is* too broad and you need to help me understand it better.
Code could look like:
```
filename = 'your_file_here.email.txt'
import email.parser
with open(filename, 'r') as fh:
message = email.parser.Parser().parse(fh)
```
There are even convenience functions, and the one for your case would be:
```
import email
# message_from_file expects an open file object, not a filename
with open('your_file_here.email.txt') as fh:
    message = email.message_from_file(fh)
```
Then check the [docs on email.message](http://docs.python.org/dev/library/email.message.html) to see how to access the message's content. You can check with `is_multipart()` if it's a single monolithic block of text, or a MIME message consisting of multiple parts. In the latter case, there's `walk()` to iterate over each part.
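Continuing from the `message` object in the snippet above, a small sketch of that check:
```
if message.is_multipart():
    for part in message.walk():
        print(part.get_content_type())
else:
    print(message.get_payload(decode=True))
```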
| 14,477
|
48,065,360
|
I want to use python interpolate polynomial on points from a finite-field and get a polynomial with coefficients in that field.
Currently I'm trying to use SymPy and specifically interpolate (from `sympy.polys.polyfuncs`), but I don't know how to force the interpolation to happen in a specific gf. If not, can this be done with another module?
Edit: I'm interested in a Python implementation/library.
|
2018/01/02
|
[
"https://Stackoverflow.com/questions/48065360",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5603149/"
] |
SymPy's [interpolating\_poly](http://docs.sympy.org/latest/modules/polys/reference.html#sympy.polys.specialpolys.interpolating_poly) does not support polynomials over finite fields. But there are enough details under the hood of SymPy to put together a class for finite fields, and find the coefficients of [Lagrange polynomial](https://en.wikipedia.org/wiki/Lagrange_polynomial) in a brutally direct fashion.
As usual, the elements of the finite field GF(p^n) are [represented by polynomials](https://en.wikipedia.org/wiki/Finite_field#Explicit_construction_of_finite_fields) of degree less than n, with coefficients in GF(p). Multiplication is done modulo a reducing polynomial of degree n, which is selected at the time of field construction. Inversion is done with the extended Euclidean algorithm.
The polynomials are represented by lists of coefficients, highest degrees first. For example, the elements of GF(3^2) are:
```
[], [1], [2], [1, 0], [1, 1], [1, 2], [2, 0], [2, 1], [2, 2]
```
The empty list represents 0.
### Class GF, finite fields
Implements arithmetic as the methods `add`, `sub`, `mul`, `inv` (multiplicative inverse). For convenience of testing the interpolation, it includes `eval_poly`, which evaluates a given polynomial with coefficients in GF(p^n) at a point of GF(p^n).
Note that the constructor is used as GF(3, 2), not as GF(9): the prime and its power are supplied separately.
```
import itertools
from functools import reduce
from sympy import symbols, Dummy, Poly, S, pprint
from sympy.polys.domains import ZZ
from sympy.polys.galoistools import (gf_irreducible_p, gf_add, \
gf_sub, gf_mul, gf_rem, gf_gcdex)
from sympy.ntheory.primetest import isprime
class GF():
def __init__(self, p, n=1):
p, n = int(p), int(n)
if not isprime(p):
raise ValueError("p must be a prime number, not %s" % p)
if n <= 0:
raise ValueError("n must be a positive integer, not %s" % n)
self.p = p
self.n = n
if n == 1:
self.reducing = [1, 0]
else:
for c in itertools.product(range(p), repeat=n):
poly = (1, *c)
if gf_irreducible_p(poly, p, ZZ):
self.reducing = poly
break
def add(self, x, y):
return gf_add(x, y, self.p, ZZ)
def sub(self, x, y):
return gf_sub(x, y, self.p, ZZ)
def mul(self, x, y):
return gf_rem(gf_mul(x, y, self.p, ZZ), self.reducing, self.p, ZZ)
def inv(self, x):
s, t, h = gf_gcdex(x, self.reducing, self.p, ZZ)
return s
def eval_poly(self, poly, point):
val = []
for c in poly:
val = self.mul(val, point)
val = self.add(val, c)
return val
```
### Class PolyRing, polynomials over a field
This one is simpler: it implements addition, subtraction, and multiplication of polynomials, referring to the ground field for operations on coefficients. There are a lot of list reversals `[::-1]` because of SymPy's convention of listing monomials starting with the highest powers.
```
class PolyRing():
def __init__(self, field):
self.K = field
def add(self, p, q):
s = [self.K.add(x, y) for x, y in \
itertools.zip_longest(p[::-1], q[::-1], fillvalue=[])]
return s[::-1]
def sub(self, p, q):
s = [self.K.sub(x, y) for x, y in \
itertools.zip_longest(p[::-1], q[::-1], fillvalue=[])]
return s[::-1]
def mul(self, p, q):
if len(p) < len(q):
p, q = q, p
s = [[]]
for j, c in enumerate(q):
s = self.add(s, [self.K.mul(b, c) for b in p] + \
[[]] * (len(q) - j - 1))
return s
```
### Construction of interpolating polynomial.
The [Lagrange polynomial](https://en.wikipedia.org/wiki/Lagrange_polynomial) is constructed for given x-values in list X and corresponding y-values in array Y. It is a linear combination of basis polynomials, one for each element of X. Each basis polynomial is obtained by multiplying `(x-x_k)` polynomials, represented as `[[1], K.sub([], x_k)]`. The denominator is a scalar, so it's even easier to compute.
```
def interp_poly(X, Y, K):
R = PolyRing(K)
poly = [[]]
for j, y in enumerate(Y):
Xe = X[:j] + X[j+1:]
numer = reduce(lambda p, q: R.mul(p, q), ([[1], K.sub([], x)] for x in Xe))
denom = reduce(lambda x, y: K.mul(x, y), (K.sub(X[j], x) for x in Xe))
poly = R.add(poly, R.mul(numer, [K.mul(y, K.inv(denom))]))
return poly
```
### Example of usage:
```
K = GF(2, 4)
X = [[], [1], [1, 0, 1]] # 0, 1, a^2 + 1
Y = [[1, 0], [1, 0, 0], [1, 0, 0, 0]] # a, a^2, a^3
intpoly = interp_poly(X, Y, K)
pprint(intpoly)
pprint([K.eval_poly(intpoly, x) for x in X]) # same as Y
```
The pretty print is just to avoid some type-related decorations on the output. The polynomial is shown as `[[1], [1, 1, 1], [1, 0]]`. To help readability, I added a function to turn this into a more familiar form, with the symbol `a` being a generator of the finite field and `x` being the variable in the polynomial.
```
def readable(poly, a, x):
return Poly(sum((sum((c*a**j for j, c in enumerate(coef[::-1])), S.Zero) * x**k \
for k, coef in enumerate(poly[::-1])), S.Zero), x)
```
So we can do
```
a, x = symbols('a x')
print(readable(intpoly, a, x))
```
and get
```
Poly(x**2 + (a**2 + a + 1)*x + a, x, domain='ZZ[a]')
```
This algebraic object is not a polynomial over our field, this is just for the sake of readable output.
### Sage
As an alternative, or just another safety check, one can use the [`lagrange_polynomial`](http://doc.sagemath.org/html/en/reference/polynomial_rings/sage/rings/polynomial/polynomial_ring.html#sage.rings.polynomial.polynomial_ring.PolynomialRing_field.lagrange_polynomial) from Sage for the same data.
```
field = GF(16, 'a')
a = field.gen()
R = PolynomialRing(field, "x")
points = [(0, a), (1, a^2), (a^2+1, a^3)]
R.lagrange_polynomial(points)
```
Output: `x^2 + (a^2 + a + 1)*x + a`
|
I'm the author of the [`galois`](https://github.com/mhostetter/galois) Python library. Polynomial interpolation can be performed with the `lagrange_poly()` function. Here's a simple example.
```py
In [1]: import galois
In [2]: galois.__version__
Out[2]: '0.0.32'
In [3]: GF = galois.GF(3**5)
In [4]: x = GF.Random(10); x
Out[4]: GF([ 33, 58, 59, 21, 141, 133, 207, 182, 125, 162], order=3^5)
In [5]: y = GF.Random(10); y
Out[5]: GF([ 34, 239, 120, 170, 31, 165, 180, 79, 215, 215], order=3^5)
In [6]: f = galois.lagrange_poly(x, y); f
Out[6]: Poly(165x^9 + 96x^8 + 9x^7 + 111x^6 + 40x^5 + 208x^4 + 55x^3 + 17x^2 + 118x + 203, GF(3^5))
In [7]: f(x)
Out[7]: GF([ 34, 239, 120, 170, 31, 165, 180, 79, 215, 215], order=3^5)
```
The finite field element display may be changed to either the polynomial or power representation.
```py
In [8]: GF.display("poly"); f(x)
Out[8]:
GF([ α^3 + 2α + 1, 2α^4 + 2α^3 + 2α^2 + α + 2,
α^4 + α^3 + α^2 + α, 2α^4 + 2α + 2,
α^3 + α + 1, 2α^4 + α,
2α^4 + 2α^2, 2α^3 + 2α^2 + 2α + 1,
2α^4 + α^3 + 2α^2 + 2α + 2, 2α^4 + α^3 + 2α^2 + 2α + 2], order=3^5)
In [9]: GF.display("power"); f(x)
Out[9]:
GF([α^198, α^162, α^116, α^100, α^214, α^137, α^169, α^95, α^175, α^175],
order=3^5)
```
| 14,478
|
16,809,248
|
My work relates to the instrumentation of code fragments in Python code. In my work I would be writing a script in Python that takes another Python file as input and inserts any necessary code in the required place.
The following is a sample of a file which I would be instrumenting:
```
A.py #normal un-instrumented code
statements
....
....
def move(self,a):
statements
......
print "My function is defined"
......
statements
......
```
What my script actually does is check each line in A.py, and if there is a "def" then a code fragment is inserted above the function definition.
The following example is how the final output should be:
```
A.py #instrumented code
statements
....
....
@decorator #<------ inserted code
def move(self,a):
statements
......
print "My function is defined"
......
statements
......
```
But I got a different output. The following is the final output which I am getting:
A.py #instrumented code
```
statements
....
....
@decorator #<------ inserted code
def move(self,a):
statements
......
@decorator #<------ inserted code [this should not occur]
print "My function is defined"
......
statements
......
```
I can understand that in the instrumented code it recognizes the "def" inside the word "defined" and so it inserts the code above that line.
In reality the instrumented code has lots of these problems and I was not able to properly instrument the given Python file. Is there any other way to differentiate an actual "def" from one inside a string?
Thank you
|
2013/05/29
|
[
"https://Stackoverflow.com/questions/16809248",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2135762/"
] |
Use the [`ast` module](http://docs.python.org/2/library/ast.html) to parse the file properly.
This code prints the line number and column offset of each `def` statement:
```
import ast
with open('mymodule.py') as f:
tree = ast.parse(f.read())
for node in ast.walk(tree):
if isinstance(node, ast.FunctionDef):
print node.lineno, node.col_offset
```
|
You could use a Regular Expression. To avoid `def` inside quotes then you can use negative look-arounds:
```
import re
for line in open('A.py'):
    m = re.search(r"(?<![\"'])\bdef\b(?![\"'])", line)
if m:
print r'@decorator #<------ inserted code'
print line
```
However, there might be other occurrences of `def` that you or I can't think of, and if we are not careful we end up writing the Python parser all over again. @Janne Karila's suggestion of using `ast.parse` is probably safer in the long term.
| 14,479
|
26,027,271
|
I have a dictionary of configs (defined by the user as settings for a Django app).
And I need to check the config to make sure it fits the rules.
The rule is very simple. The 'range' within each option must be unique.
sample settings
===============
```
breakpoints = {
'small': {
'verbose_name': _('Small screens'),
'min_width': None,
'max_width': 640,
},
'medium': {
'verbose_name': _('Medium screens'),
'min_width': 641,
'max_width': 1024,
},
'large': {
'verbose_name': _('Large screens'),
'min_width': 1025,
'max_width': 1440,
},
'xlarge': {
'verbose_name': _('XLarge screens'),
'min_width': 1441,
'max_width': 1920,
},
'xxlarge': {
'verbose_name': _('XXLarge screens'),
'min_width': 1921,
'max_width': None,
}
}
```
Here's what I've come up with so far. It works but doesn't seem very pythonic.
```
for alias, config in breakpoints.items():
for alias2, config2 in breakpoints.items():
if not alias2 is alias:
msg = error_msg % (alias, 'breakpoint clashes with %s breakpoint' % alias2)
for attr in ('min_width', 'max_width', ):
if config[attr] is not None:
if (config2['min_width'] and config2['max_width']) and \
(config2['min_width'] <= config[attr] <= config2['max_width']):
raise ImproperlyConfigured(msg)
elif (config2['min_width'] and not config2['max_width']) and \
(config2['min_width'] < config[attr]):
raise ImproperlyConfigured(msg)
elif (config2['max_width'] and not config2['min_width']) and \
(config2['max_width'] > config[attr]):
raise ImproperlyConfigured(msg)
```
Is there a better way I can solve this?
|
2014/09/24
|
[
"https://Stackoverflow.com/questions/26027271",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1682844/"
] |
It's easy to scan for overlapping ranges if you sort the dataset first. 'None' appears to be used for different things in different places (as a min it means zero, as a max it means "greater than anything"), but that's harder to compare. If you have a real maximum, it makes the sorting a bit easier.
(edit: scan for max because there is no known maximum)
```
MAX = max(val.get('max_width', 0) for val in breakpoints.itervalues()) + 1
# sort by min/max
items = sorted(
(data['min_width'] or 0, data['max_width'] or MAX, name)
for name, data in breakpoints.iteritems())
# check if any range overlaps the next higher item
for i in range(len(items)-1):
if items[i][0] > items[i][1]:
print "range is incorrect for", items[i][1]
elif items[i][1] >= items[i+1][0]:
print items[i+1][2], 'overlaps'
```
|
You can get from that a dict with the ranges as pairs:
```
ranges = { 'small': (None, 640), 'medium': (641, 1024), ...}
```
and then check, say, that `len(set(ranges.values())) == len(ranges)`
EDIT: this works if the requirement is for ranges to be different. See @tdelaney's answer for disjoint ranges.
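A sketch of that idea against the `breakpoints` dict from the question (it assumes `ImproperlyConfigured` is imported, and as the edit says it only checks that the pairs are distinct):
```
ranges = {alias: (cfg['min_width'], cfg['max_width'])
          for alias, cfg in breakpoints.items()}

if len(set(ranges.values())) != len(ranges):
    raise ImproperlyConfigured('breakpoint min/max ranges must be unique')
```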
| 14,480
|
8,510,972
|
I have seen a lot of posts on this topic, however I have not found anything regarding this warning:
```
CMake Warning:
Manually-specified variables were not used by the project:
BUILD_PYTHON_SUPPORT
```
when I compile with cmake. When building OpenCV with this warning, it turns out that it doesn't include python support (surprise).
I use this command to generate the build files
```
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_PYTHON_SUPPORT=ON -D BUILD_EXAMPLES=ON ..
```
I have installed python-dev.
|
2011/12/14
|
[
"https://Stackoverflow.com/questions/8510972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/256664/"
] |
It looks like you're using an old install guide. Use `BUILD_NEW_PYTHON_SUPPORT` instead.
So, execute CMake like this:
```
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_NEW_PYTHON_SUPPORT=ON -D BUILD_EXAMPLES=ON ..
```
Also, if you use the CMake GUI, it is easier to see all of the options you can set for OpenCV (there are so many it's quite tedious to type them all on the command-line). To get it for Ubuntu, do this:
```
sudo apt-get install cmake-qt-gui
```
|
**Simple instructions to install opencv with python bindings in Linux - Ubuntu/Fedora**
1. Install gcc, g++/gcc-c++, cmake (apt-get or yum, in case of yum
use gcc-c++).
**#apt-get install gcc, g++, cmake**
2. Download the latest OpenCV from OpenCV's website
(<http://opencv.org/downloads.html>).
3. Untar it **#tar -xvf opencv-*\****
4. Inside the untarred folder make a new folder called "**release**" (or
any folder name) and run this command in it #**"cmake -D
CMAKE\_BUILD\_TYPE=RELEASE -D CMAKE\_INSTALL\_PREFIX=/usr/local -D
BUILD\_NEW\_PYTHON\_SUPPORT=ON -D BUILD\_EXAMPLES=ON .."** the ".." will pull
files from the parent folder and will get the system ready for
installation on your platform.
5. in the release (#cd release) folder run **#make**
6. After about 2-3 mins of make processing, when it's finished, run
**#make install**
That's it. Now go to Python and try ">>> **import cv2**"; you should not get any error message.
Tested on python 2.7, should be virtually similar to python 3.x.
| 14,481
|
3,693,891
|
I'm writing a program that (part of what it does is) executes other programs. I want to be able to run as many types of programs (written in different languages) as possible using `Process.Start`. So, I'm thinking I should:
1. Open the file
2. Read in the first line
3. Check if it starts with `#!`
4. If so, use what follows the `#!` as the program to execute, and pass in the filename as an argument instead
5. If no `#!` is found, check the file extension against a dictionary of known programs (e.g., `.py -> python`) and execute that program instead
6. Otherwise, just try executing the file and catch any errors
But, I'm thinking it might be easier/more efficient to actually check if the file is executable first and if so, jump to 6. Is there a way to do this?
|
2010/09/12
|
[
"https://Stackoverflow.com/questions/3693891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65387/"
] |
In Windows there is no real notion of "executable", like the specific permission that exists in \*NIX systems.
You have two options. The first one, like saurabh had suggested before me, is to rely on the system to associate between the file extension and the command to be performed. This approach (of using Process.Start) has many advantages - it leaves the power of association to the user, as in letting the user pick the correct way to run the various file types.
The second option is to mimic the Windows file association process, by having a dictionary from an extension to the command that can run the file, and falling back to checking the first line of the file if needed. This has the advantage of you having the power of setting the associations, but it also requires constant modifications and maintenance on your side, in addition to losing the flexibility on the user side - which may be a good thing or a bad thing.
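The decision flow described in the second option (shebang first, then an extension table, then just trying to run the file) is language-agnostic. As a rough sketch of just that logic in Python (the function name and the extension mapping are made up for illustration; the asker's program would do the equivalent in .NET):
```
import os

INTERPRETERS = {'.py': 'python', '.rb': 'ruby', '.pl': 'perl'}  # example mapping

def build_command(path):
    """Decide how to launch path: shebang, then extension table, then directly."""
    with open(path, 'rb') as f:
        first_line = f.readline()
    if first_line.startswith(b'#!'):
        # Use the interpreter named on the shebang line (arguments included).
        return first_line[2:].strip().decode().split() + [path]
    ext = os.path.splitext(path)[1].lower()
    if ext in INTERPRETERS:
        return [INTERPRETERS[ext], path]
    return [path]  # hope it is directly executable and catch errors when launching
```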
|
If you are using .NET, then Process.Start does a lot of things for you.
If you pass an exe, it will run the exe.
If you pass a Word document, it will open the Word document,
and many more.
| 14,482
|
16,082,243
|
```
<Control-Shift-Key-0>
<Control-Key-plus>
```
works but
```
<Control-Key-/>
```
doesn't.
I am unable to bind `ctrl` + `/` in python. Is there any documentation of all the possible keys?
|
2013/04/18
|
[
"https://Stackoverflow.com/questions/16082243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2219529/"
] |
Use `<Control-slash>`:
```
def quit(event):
print "you pressed control-forwardslash"
root.quit()
root = tk.Tk()
root.bind('<Control-slash>', quit) # forward-slash
# root.bind('<Control-backslash>', quit) # backslash
root.mainloop()
```
---
I don't have a link to a complete list of these event names. Here is a partial list I've collected:
```
| event | name |
| Ctrl-c | Control-c |
| Ctrl-/ | Control-slash |
| Ctrl-\ | Control-backslash |
| Ctrl+(Mouse Button-1) | Control-1 |
| Ctrl-1 | Control-Key-1 |
| Enter key | Return |
| | Button-1 |
| | ButtonRelease-1 |
| | Home |
| | Up, Down, Left, Right |
| | Configure |
| window exposed | Expose |
| mouse enters widget | Enter |
| mouse leaves widget | Leave |
| | Key |
| | Tab |
| | space |
| | BackSpace |
| | KeyRelease-BackSpace |
| any key release | KeyRelease |
| escape | Escape |
| | F1 |
| | Alt-h |
```
|
Here is a list of all the tk keysym codes:
<https://www.tcl.tk/man/tcl8.6/TkCmd/keysyms.htm>
The two I was looking for was `<Win_L>` and `<Win_R>`.
| 14,491
|
68,005,264
|
I'm executing an extract query to google storage as follows:
```
job_config = bigquery.ExtractJobConfig()
job_config.compression = bigquery.Compression.GZIP
job_config.destination_format = (bigquery.DestinationFormat.CSV)
job_config.print_header = False
job_config.field_delimiter = "|"
extract_job = client.extract_table(
table_ref,
destination_uri,
job_config=job_config,
location='us-east1',
retry=query_retry,
timeout=10) # API request
extract_job.result()
```
Which returns an ExtractJob class and, by the google documentation (<https://googleapis.dev/python/bigquery/1.24.0/generated/google.cloud.bigquery.job.ExtractJob.html#google.cloud.bigquery.job.ExtractJob>),
I need to call extract\_job.result() to wait for the job to complete. After the completion, I notice that the files on Google Cloud Storage are not there (yet); maybe there's a delay. I need to ensure that the files are ready to consume after the extraction job. Is there an API method to solve this, or do I have to make a workaround by sleeping and waiting for the files?
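A minimal sketch of such a check, assuming the `google-cloud-storage` client library is available (the bucket name and prefix below are placeholders), would be:
```
from google.cloud import storage

storage_client = storage.Client()

# List the objects under the export prefix once the extract job reports DONE.
blobs = list(storage_client.list_blobs('my-export-bucket', prefix='exports/mytable'))
for blob in blobs:
    print(blob.name, blob.size)
```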
|
2021/06/16
|
[
"https://Stackoverflow.com/questions/68005264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3447863/"
] |
First of all, Start Events must not have an incoming Edge. It is not allowed by the BPMN standard. So you should replace your Start Events 2 and 3 within the process with intermediate Events.
The decision logic to skip or execute the Intermediate Event (which now represents what was Start Event 3 before) could be implemented with an Event-Based Gateway, describing on the edges which path to take under which condition.
[](https://i.stack.imgur.com/2fY31.png)
|
Based on Simulat's answer I found an alternative solution which I think is a better fit. The red path should not be possible because of the logic gate with the red circle (the top path is only viable if `Start Event 3` has not occurred).
The problem I have with Simulat's answer is the intermediate events and the event-based gateway. Since there are no "real" events at those points, I think they should be XOR gateways, but I'm not sure. Feedback is welcome:
[](https://i.stack.imgur.com/srL8p.png)
| 14,492
|
54,272,604
|
I have a recursive solution that works, but it turns out a lot of subproblems are being recalculated. I need help with MEMOIZATION.
So here's the problem statement:
>
> You are a professional robber planning to rob houses along a street.
> Each house has a certain amount of money stashed, the only constraint
> stopping you from robbing each of them is that adjacent houses have
> security system connected and it will automatically contact the police
> if two adjacent houses were broken into on the same night.
>
>
> Given a list of non-negative integers representing the amount of money
> of each house, determine the maximum amount of money you can rob
> tonight without alerting the police.
>
>
>
Examples:
>
> Input: `[2,7,9,3,1]` Output: `12` Explanation: Rob house 1 (money = 2),
> rob house 3 (money = 9) and rob house 5 (money = 1).
> Total amount you can rob = `2 + 9 + 1 = 12`.
>
>
>
Another one:
>
> Input: `[1,2,3,1]` Output: `4` Explanation: Rob house 1 (money = 1) and
> then rob house 3 (money = 3).
> Total amount you can rob = `1 + 3 = 4`.
>
>
>
And another one
>
> Input: `[2, 1, 1, 2]` Output: `4` Explanation: Rob house 1 (money = 2) and
> then rob house 4 (money = 2).
> Total amount you can rob = `2 + 2 = 4`.
>
>
>
Now like I said, I have a perfectly working recursive solution. When I build a recursive solution I don't THINK too much; I just try to understand what the smaller subproblems are.
`option_1`: I add the value in my current `index`, and go to `index + 2`
`option_2`: I don't add the value in my current `index`, and I search starting from `index + 1`
Maximum amount of money = `max(option_1, option_2)`
```
money = [1, 2, 1, 1] #Amounts that can be looted
def helper(value, index):
if index >= len(money):
return value
else:
option1 = value + money[index]
new_index1 = index + 2
option2 = value
new_index2 = index + 1
return max(helper(option1, new_index1), helper(option2, new_index2))
helper(0, 0) #Starting of at value = 0 and index = 0
```
This works perfectly.. and returns the correct value `3`.
I then try my hand at MEMOIZING.
```
money = [1, 2, 1, 1]
max_dict = {}
def helper(value, index):
if index in max_dict:
return max_dict[index]
elif index >= len(l1):
return value
else:
option1 = value + money[index]
new_index1 = index + 2
option2 = value
new_index2 = index + 1
max_dict[index] = max(helper(option1, new_index1), helper(option2, new_index2))
return max_dict[index]
helper(0, 0)
```
I simply have a dictionary called `max_dict` that STORES the value, and each recursive call checks if the value already exists and then accordingly grabs it and returns it.
But I get the wrong solution for this, `2` instead of `3`. I went to `pythontutor.com` and typed my solution out, but I can't seem to work out from the recursion tree where it's failing.
**Could someone give me a correct implementation of memoization while keeping the overall structure the same? In other words, I don't want the recursive function definition to change**
|
2019/01/20
|
[
"https://Stackoverflow.com/questions/54272604",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7938658/"
] |
The `helper` can be called with a different `value` parameter for the same `index`, so the `value` must not be baked into what is stored in `max_dict` (it has to be subtracted out). One way to do this is to add `value` only just before returning, not earlier:
```
money = [2, 1, 1, 2]
max_dict = {}
def helper(value, index):
if index in max_dict:
return value + max_dict[index]
elif index >= len(money):
return value
else:
option1 = money[index]
new_index1 = index + 2
option2 = 0
new_index2 = index + 1
max_dict[index] = max(helper(option1, new_index1), helper(option2, new_index2))
return value + max_dict[index]
helper(0, 0)
```
A more detailed explanation what happens is given by @ggorlen's answer
|
Your approach for memoization won't work because when you reach some index `i`, if you've already computed some result for `i`, your algorithm fails to consider the fact that there might be a *better* result available by robbing a more optimal set of houses in the left portion of the array.
The solution to this dilemma is to avoid passing the running `value` (money you've robbed) downward through the recursive calls from parents to children. The idea is to compute sub-problem results without *any input* from ancestor nodes, then build the larger solutions from the smaller ones on the way back up the call stack.
Memoization of index `i` will then work because a given index `i` will always have a unique set of subproblems whose solutions will not be corrupted by choices from ancestors in the left portion of the array. This preserves the optimal substructure that's necessary for DP to work.
Additionally, I recommend avoiding global variables in favor of passing your data directly into the function.
```
def maximize_robberies(houses, memo, i=0):
if i in memo:
return memo[i]
elif i >= len(houses):
return 0
memo[i] = max(
maximize_robberies(houses, memo, i + 1),
maximize_robberies(houses, memo, i + 2) + houses[i],
)
return memo[i]
if __name__ == "__main__":
print(maximize_robberies([1, 2, 1, 1], {}))
```
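Not part of the answer above, but for comparison: the same recurrence can also be written bottom-up with two rolling values, which sidesteps recursion depth limits entirely.
```
def maximize_robberies_iterative(houses):
    # rob_prev: best using houses seen so far except the last one;
    # rob_curr: best using all houses seen so far.
    rob_prev, rob_curr = 0, 0
    for amount in houses:
        rob_prev, rob_curr = rob_curr, max(rob_curr, rob_prev + amount)
    return rob_curr

print(maximize_robberies_iterative([1, 2, 1, 1]))     # 3
print(maximize_robberies_iterative([2, 7, 9, 3, 1]))  # 12
```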
| 14,493
|
150,532
|
Similar to [this](https://stackoverflow.com/questions/5415/convert-bytes-to-floating-point-numbers-in-python) question, I am trying to read in an ID3v2 tag header and am having trouble figuring out how to get individual bytes in python.
I first read all ten bytes into a string. I then want to parse out the individual pieces of information.
I can grab the two version number chars in the string, but then I have no idea how to take those two chars and get an integer out of them.
The struct package seems to be what I want, but I can't get it to work.
Here is my code so-far (I am very new to python btw...so take it easy on me):
```
def __init__(self, ten_byte_string):
self.whole_string = ten_byte_string
self.file_identifier = self.whole_string[:3]
self.major_version = struct.pack('x', self.whole_string[3:4]) #this
self.minor_version = struct.pack('x', self.whole_string[4:5]) # and this
self.flags = self.whole_string[5:6]
self.len = self.whole_string[6:10]
```
Printing out any of the values is obviously crap because they are not formatted correctly.
|
2008/09/29
|
[
"https://Stackoverflow.com/questions/150532",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2598/"
] |
If you have a string, with 2 bytes that you wish to interpret as a 16 bit integer, you can do so by:
```
>>> s = '\0\x02'
>>> struct.unpack('>H', s)
(2,)
```
Note that the > is for big-endian (the largest part of the integer comes first). This is the format id3 tags use.
For other sizes of integer, you use different format codes. eg. "i" for a signed 32 bit integer. See help(struct) for details.
You can also unpack several elements at once. eg for 2 unsigned shorts, followed by a signed 32 bit value:
```
>>> a,b,c = struct.unpack('>HHi', some_string)
```
Going by your code, you are looking for (in order):
* a 3 char string
* 2 single byte values (major and minor version)
* a 1 byte flags variable
* a 32 bit length quantity
The format string for this would be:
```
ident, major, minor, flags, len = struct.unpack('>3sBBBI', ten_byte_string)
```
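As a quick sanity check of that format string (the ten bytes below are made up rather than taken from a real ID3 header, and in Python 3 the input would be a `bytes` object):
```
import struct

print(struct.calcsize('>3sBBBI'))          # 10 bytes, as expected
sample = b'ID3' + bytes([4, 0, 0]) + struct.pack('>I', 257)
print(struct.unpack('>3sBBBI', sample))    # (b'ID3', 4, 0, 0, 257)
```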
|
I was going to recommend the `struct` package but then you said you had tried it. Try this:
```
self.major_version = struct.unpack('H', self.whole_string[3:5])
```
The `pack()` function converts Python data types to bits, and the `unpack()` function converts bits to Python data types.
| 14,496
|
64,647,954
|
I want to webscrape german real estate website immobilienscout24.de. I would like to download the HTML of a given URL and then work with the HTML offline. It is not intended for commercial use or publication and I do not intend on spamming the site, it is merely for coding practice. I would like to write a python tool that automatically downloads the HTML of given immobilienscout24.de sites. I have tried to use beautifulsoup for this, however, the parsed HTML doesn't show the content but asks if I am a robot etc., meaning my webscraper got detected and blocked (I can access the site in Firefox just fine). I have set a referer, a delay and a user agent. What else can I do to avoid being detected (i.e. rotating proxies, rotating user agents, random clicks, other webscraping tools that don't get detected...)? I have tried to use my phones IP but got the same result. A GUI webscraping tool is not an option as I need to control it with python.
Please give some implementable code if possible.
Here is my code so far:
```
import urllib.request
from bs4 import BeautifulSoup
import requests
import time
import numpy
url = "https://www.immobilienscout24.de/Suche/de/wohnung-mieten?sorting=2#"
req = urllib.request.Request(url, data=None, headers={ 'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36' })
req.add_header('Referer', 'https://www.google.de/search?q=immoscout24')
delays = [3, 2, 4, 6, 7, 10, 11, 17]
time.sleep(numpy.random.choice(delays)) # I want to implement delays like this
page = urllib.request.urlopen(req)
soup = BeautifulSoup(page, 'html.parser')
print(soup.prettify())
```
```
username:~/Desktop$ uname -a
Linux username 5.4.0-52-generic #57-Ubuntu SMP Thu Oct 15 10:57:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```
Thank you!
|
2020/11/02
|
[
"https://Stackoverflow.com/questions/64647954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14361382/"
] |
Try to set the `Accept-Language` HTTP header (this worked for me to get the correct response from the server):
```
import requests
from bs4 import BeautifulSoup
url = "https://www.immobilienscout24.de/Suche/de/wohnung-mieten?sorting=2#"
headers = {
'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:82.0) Gecko/20100101 Firefox/82.0',
'Accept-Language': 'en-US,en;q=0.5'
}
soup = BeautifulSoup(requests.get(url, headers=headers).content, 'html.parser')
for h5 in soup.select('h5'):
print(h5.get_text(strip=True, separator=' '))
```
Prints:
```
NEU Albertstadt: Praktisch geschnitten und groΓer Balkon
NEU Sehr gerΓ€umige 3-Raum-Wohnung in der Eisenacher Weststadt
NEU Gepflegte 3-Zimmer-Wohnung am Rosenberg in Hofheim a.Taunus
NEU ERSTBEZUG: Wohnung neu renoviert
NEU Freundliche 3,5-Zimmer-Wohnung mit Balkon und EBK in Rheinfelden
NEU FΓΌr Singles und Studenten! 2 ZKB mit EBK und Balkon
NEU SchΓΆne 3-Zimmer-Wohnung mit 2 Balkonen im Parkend
NEU Γffentlich gefΓΆrderte 3-Zimmer-Neubau-Wohnung fΓΌr die kleine Familie in Iserbrook!
NEU Komfortable, neuwertige Erdgeschosswohnung in gefragter Lage am Wall
NEU MΓΆbliertes, freundliches Appartem. TOP LAGE, S-Balkon, EBK, ruhig, Schwabing Nord/, Milbertshofen
NEU Extravagant & frisch saniert! 2,5-Zimmer DG-Wohnung in Duisburg-NeumΓΌhl
NEU wunderschΓΆne 3 Zimmer Dachgeschosswohnung mit EinbaukΓΌche. 2er WG-tauglich.
NEU Erstbezug nach Sanierung: Helle 3-Zimmer-Wohnung mit Balkon in Monheim am Rhein
NEU Morgen schon im neuen Zuhause mit der ganzen Familie! 3,5 Raum zur Miete in DUI-Overbruch
NEU Erstbezug: ansprechende 2-Zimmer-EG-Wohnung in Bad DΓΌben
NEU CALENBERGER NEUSTADT | 3-Zimmer-Wohnung mit groΓem SΓΌd-Balkon
NEU Wohnen und Arbeiten in Bestlage von HH-Lokstedt !
NEU Erstbezug: WohlfΓΌhlwohnen in modernem Dachgeschoss nach kompletter Sanierung!
NEU CASACONCEPT Stilaltbau-Wohnung MΓΌnchen-Bogenhausen nahe Prinzregentenplatz
NEU schΓΆne Wohnung mit Balkon und Laminatboden
```
|
Maybe have a go with [requests](https://requests.readthedocs.io/en/master/), the code below seems to work fine for me:
```
import requests
from bs4 import BeautifulSoup
r = requests.get('https://www.immobilienscout24.de/')
soup = BeautifulSoup(r.text, 'html.parser')
print(soup.prettify())
```
Another approach is to use [selenium](https://selenium-python.readthedocs.io/); it's powerful but maybe a bit more complicated.
Edit:
A possible solution using Selenium (it seems to work for me for the link you provided in the comment):
```
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome('path/to/chromedriver') # it can also work with Firefox, Safari, etc
driver.get('some_url')
soup = BeautifulSoup(driver.page_source, 'html.parser')
```
If you haven't used selenium before, have a look [here](https://selenium-python.readthedocs.io/installation.html) first on how to get started.
| 14,499
|
15,011,674
|
Can you dereference a variable id retrieved from the [`id`](https://docs.python.org/library/functions.html#id) function in Python? For example:
```
dereference(id(a)) == a
```
I want to know from an academic standpoint; I understand that there are more practical methods.
|
2013/02/21
|
[
"https://Stackoverflow.com/questions/15011674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1248775/"
] |
Here's a utility function based on a (now-deleted) comment made by "Tiran" in a weblog [discussion](https://web.archive.org/web/20090404012257/http://www.friday.com/bbum/2007/08/24/python-di/) @Hophat Abc references in [his own answer](https://stackoverflow.com/a/15012076/355230) that will work in both Python 2 and 3.
**Disclaimer:** If you read the linked discussion, you'll find that some folks think this is so unsafe that it should *never* be used (as likewise mentioned in some of the comments below). I don't agree with that assessment but feel I should at least mention that there's some debate about using it.
```
import _ctypes
def di(obj_id):
""" Inverse of id() function. """
return _ctypes.PyObj_FromPtr(obj_id)
if __name__ == '__main__':
a = 42
b = 'answer'
print(di(id(a))) # -> 42
print(di(id(b))) # -> answer
```
|
Not easily.
You could recurse through the [`gc.get_objects()`](http://docs.python.org/2/library/gc.html#gc.get_objects) list, testing each and every object to see if it has the same `id()`, but that's not very practical.
The `id()` function is *not intended* to be dereferenceable; the fact that it is based on the memory address is a CPython implementation detail, that other Python implementations do not follow.
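A sketch of that brute-force search, purely to illustrate it rather than to recommend it (it only finds objects the garbage collector tracks, i.e. containers):
```
import gc

def find_by_id(target_id):
    for obj in gc.get_objects():
        if id(obj) == target_id:
            return obj
    raise LookupError('no tracked object with that id')

a = ['the', 'answer', 42]
print(find_by_id(id(a)) is a)   # True
```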
| 14,504
|
18,394,350
|
Today, I wrote a short script for a prime sieve, and I am looking to improve on it. I am rather new to python and programming in general, and so I am wondering: what is a good way to reduce memory usage in a program where large lists of numbers are involved? Here is my example script:
```
def ES(n):
A = list(range(2, n+1))
for i in range(2, n+1):
for k in range(2, (n+i)//i):
A[i*k-2] = str(i*k)
A = [x for x in A if isinstance(x, int)]
return A
```
This script turns all composites in the list A into strings and then returns the list of remaining integers, which are all prime. Yet it runs the A[i\*k-2] = str(i\*k) assignment three times for the number 12, as it goes through all multiples of 2, then 3, and again for 6. With something like that happening while storing such a large list, I hit a brick wall rather soon and it crashes. Any advice would be greatly appreciated! Thanks in advance.
EDIT: I don't know if this makes a difference, but I'm using Python 3.3
|
2013/08/23
|
[
"https://Stackoverflow.com/questions/18394350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2467476/"
] |
First, you're using a really weird, inefficient way to record whether something is composite. You don't need to store string representations of the numbers, or even the numbers themselves. You can just use a big list of booleans, where `prime[n]` is true if `n` is prime.
Second, there's no reason to worry about wasting a bit of space at the beginning of the list if it simplifies indexing. It's tiny compared to the space taken by the rest of the list, let alone all the strings and ints and stuff you were using. It's like saving $3 on paint for your $300,000 car.
Third, `range` takes a `step` parameter you can use to simplify your loops.
```
def sieve(n):
"""Returns a list of primes less than n."""
# n-long list, all entries initially True
# prime[n] is True if we haven't found a factor of n yet.
prime = [True]*n
# 0 and 1 aren't prime
prime[0], prime[1] = False, False
for i in range(2, n):
if prime[i]:
# Loop from i*2 to n, in increments of i.
# In other words, go through the multiples of i.
for j in range(i*2, n, i):
prime[j] = False
return [x for x, x_is_prime in enumerate(prime) if x_is_prime]
```
|
Rather than using large lists, you can use generators.
```
def ESgen():
d = {}
q = 2
while 1:
if q not in d:
yield q
d[q*q] = [q]
else:
for p in d[q]:
d.setdefault(p+q,[]).append(p)
del d[q]
q += 1
```
To get the list of first `n` primes using the generator, you can do this:
```
from itertools import islice
gen = ESgen()
list(islice(gen, n))
```
If you only want to check if a number is prime, `set` is faster than `list`:
```
from itertools import islice
gen = ESgen()
set(islice(gen, n))
```
| 14,507
|
46,909,017
|
I want to insert a row in a table of a database from a Python script. The column field values are saved in variables of different formats: strings, integers, and floats. I searched in forums and tried different options but none of them are working
I tried this options:
```
cursor.execute('INSERT INTO table(device, number1, number2) VALUES (%s,%d,%f)',(device_var,number1_var, number2_var))
```
I also tried:
```
cursor.execute('INSERT INTO table(device, number1, number2) VALUES ({0},{1},{2})',(device_var,number1_var, number2_var))
```
And
```
cursor.execute('INSERT INTO table(device, number1, number2) VALUES ({0},{1},{2})'.format (device_var,number1_var, number2_var))
```
ERROR:OperationalError: (1054, "Unknown column 'device\_var\_content' in 'field list'")
I also tried this to see if there is a problem in the table, but this works OK:
cursor.execute('INSERT INTO table(device, number1, number2) VALUES ("dev1",1,2.4)')
Thanks for your time
**SOLVED:**
```
cursor.execute('INSERT INTO table(device, number1, number2) VALUES ("{}",{},{})'.format (string_var,number1_var, number2_var))
```
Thanks for your help; your answers showed me where to keep looking.
|
2017/10/24
|
[
"https://Stackoverflow.com/questions/46909017",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3278790/"
] |
You can use parameter binding. You don't need to worry about the datatypes of the variables being passed.
```
cursor.execute('INSERT INTO table(device, number1, number2) VALUES (%s, %s, %s)', ("dev1", 1, 2.4))
```
|
Check if you're calling commit() post execution of query or also you can enable autocommit for connection.
```
connection_obj.commit()
```
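A minimal sketch of the full flow, assuming a DB-API `connection_obj` and reusing the variable names from the question (table and column names as given there):
```
cursor = connection_obj.cursor()
cursor.execute(
    'INSERT INTO table(device, number1, number2) VALUES (%s, %s, %s)',
    (device_var, number1_var, number2_var),
)
connection_obj.commit()  # without this (or autocommit), the inserted row is not persisted
```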
| 14,508
|
46,917,831
|
I am working on a simple Docker image that has a large number of environment variables. Are you able to import an environment variable file like with docker-compose? I cannot find anything about this in the docker file documentation.
**Dockerfile**
```
FROM python:3.6
ENV ENV1 9.3
ENV ENV2 9.3.4
...
ADD . /
RUN pip install -r requirements.txt
CMD [ "python", "./manager.py" ]
```
I guess a good way to rephrase the question would be: how do you efficiently load multiple environment variables in a Dockerfile? If you are not able to load a file, you would not be able to commit a docker file to GitHub.
|
2017/10/24
|
[
"https://Stackoverflow.com/questions/46917831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3361028/"
] |
Yes, there are a couple of ways you can do this.
Docker Compose
--------------
In Docker Compose, you can supply environment variables in the file itself, or point to an external env file:
```
# docker-compose.yml
version: '2'
services:
service-name:
image: service-app
environment:
- GREETING=hello
env_file:
- .env
```
Incidentally, one nice feature that is somewhat related is that you can use multiple Compose files, with each subsequent one adding to the other. So if the above were to define a base, you can then do this (e.g. per run-time environment):
```
# docker-compose-dev.yml
version: '2'
services:
service-name:
environment:
- GREETING=goodbye
```
You can then run it thus:
```
docker-compose -f docker-compose.yml -f docker-compose-dev.yml up
```
Docker only
-----------
To do this in Docker only, use your entrypoint or command to run an intermediate script, thus:
```
#Dockerfile
....
ENTRYPOINT ["sh", "bin/start.sh"]
```
And then in your start script:
```
#!/bin/sh
source .env
python /manager.py
```
I've [used this related answer](https://stackoverflow.com/a/33186458/472495) as a helpful reference for myself in the past.
Update on PID 1
---------------
To amplify my remark in the comments, if you make your entry point a shell or Python script, it is likely that [Unix signals](https://en.wikipedia.org/wiki/Signal_(IPC)) (stop, kill, etc) will not be passed onto your process. This is because that script will become [process ID 1](https://en.wikipedia.org/wiki/Init), which makes it the parent process of all other processes in the container - in Linux/Unix there is an expectation that this PID will forward signals to its children, but unless you explicitly implement that, it won't happen.
To rectify this, you can install an init system. I use [dumb-init from Yelp](https://github.com/Yelp/dumb-init). This repo also features plenty of detail if you want to understand it a bit better, or simple install instructions if you just want to "install and forget".
|
If you need environment variables at runtime, it's easiest to create a launcher script that sets up the environment with multiple `export` statements and then launches your process.
If you need them at build time, have a look at the `ARG` and `ENV` statements. You'll need one per variable.
| 14,513
|
59,868,524
|
My goal is to get a list of the names of all the new items that have been posted on <https://www.prusaprinters.org/prints> during the full 24 hours of a given day.
Through a bit of reading I've learned that I should be using Selenium because the site I'm scraping is dynamic (loads more objects as the user scrolls).
Trouble is, I can't seem to get anything but an empty list from `webdriver.find_elements_by_` with any of the suffixes listed at <https://selenium-python.readthedocs.io/locating-elements.html>.
On the site, I see `"class = name"` and `"class = clamp-two-lines"` when I inspect the element I want to get the title of (see screenshot), but I can't seem to return a list of all the elements on the page with that `name` class or the `clamp-two-lines` class.
[](https://i.stack.imgur.com/A74ZM.png)
Here's the code I have so far (the lines commented out are failed attempts):
```
from timeit import default_timer as timer
start_time = timer()
print("Script Started")
import bs4, selenium, smtplib, time
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Chrome(r'D:\PortableApps\Python Peripherals\chromedriver.exe')
url = 'https://www.prusaprinters.org/prints'
driver.get(url)
# foo = driver.find_elements_by_name('name')
# foo = driver.find_elements_by_xpath('name')
# foo = driver.find_elements_by_class_name('name')
# foo = driver.find_elements_by_tag_name('name')
# foo = [i.get_attribute('href') for i in driver.find_elements_by_css_selector('[id*=name]')]
# foo = [i.get_attribute('href') for i in driver.find_elements_by_css_selector('[class*=name]')]
# foo = [i.get_attribute('href') for i in driver.find_elements_by_css_selector('[id*=clamp-two-lines]')]
# foo = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.XPATH, '//*[@id="printListOuter"]//ul[@class="clamp-two-lines"]/li')))
print(foo)
driver.quit()
print("Time to run: " + str(round(timer() - start_time,4)) + "s")
```
My research:
1. [Selenium only returns an empty list](https://stackoverflow.com/questions/58260935/selenium-only-returns-an-empty-list)
2. [Selenium find\_elements\_by\_css\_selector returns an empty list](https://stackoverflow.com/questions/34469504/selenium-find-elements-by-css-selector-returns-an-empty-list)
3. [Web Scraping Python (BeautifulSoup,Requests)](https://stackoverflow.com/questions/46860838/web-scraping-python-beautifulsoup-requests)
4. [Get HTML Source of WebElement in Selenium WebDriver using Python](https://stackoverflow.com/questions/7263824/get-html-source-of-webelement-in-selenium-webdriver-using-python)
5. [How to get Inspect Element code in Selenium WebDriver](https://stackoverflow.com/questions/26306566/how-to-get-inspect-element-code-in-selenium-webdriver)
6. [Web Scraping Python (BeautifulSoup,Requests)](https://stackoverflow.com/questions/46860838/web-scraping-python-beautifulsoup-requests)
7. <https://chrisalbon.com/python/web_scraping/monitor_a_website/>
8. <https://www.codementor.io/@gergelykovcs/how-and-why-i-built-a-simple-web-scrapig-script-to-notify-us-about-our-favourite-food-fcrhuhn45>
9. <https://www.tutorialspoint.com/python_web_scraping/python_web_scraping_dynamic_websites.htm>
|
2020/01/22
|
[
"https://Stackoverflow.com/questions/59868524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9452116/"
] |
To get the text, wait for visibility of the elements. The CSS selector for the titles is `#printListOuter h3`:
```
titles = WebDriverWait(driver, 10).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, '#printListOuter h3')))
for title in titles:
print(title.text)
```
Shorter version:
```
wait = WebDriverWait(driver, 10)
titles = [title.text for title in wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, '#printListOuter h3')))]
```
|
This is the XPath of the names of the items:
```
.//div[@class='print-list-item']/div/a/h3/span
```
| 14,518
|
14,549,433
|
Here is what my data frame looks like:
```
Zaman Operasyon Paket
1 2013-01-18 21:39:00 installed linux-api-headers
2 2013-01-18 21:39:00 installed tzdata
3 2013-01-18 21:39:00 installed glibc
4 2013-01-18 21:39:00 installed ncurses
5 2013-01-18 21:39:00 installed readline
6 2013-01-18 21:39:00 installed bash
```
Zaman is a POSIXct, the other 2 columns are factors. I want to count the number of rows grouped by each day. I have tried,
```
> aggregate(data,by=list(data$Zaman),FUN=sum)
Error occured: Summary.POSIXct(c(1358537940, 1358537940, 1358537940, 1358537940, :
'sum' not defined for "POSIXt" objects
> aggregate(data,by=list(data$Zaman),FUN=count)
Error occured: match.fun(FUN) : Couldn't find 'count' object
```
How can I do this?
Output of `dput(data)`
```
dput(data)
structure(list(Zaman = structure(c(1358537940, 1358537940, 1358537940,
1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940,
1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940,
1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940,
1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940,
1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940,
1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940,
1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940,
1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940,
1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940,
1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940,
1358537940, 1358537940, 1358537940, 1358538000, 1358538000, 1358538000,
1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000,
1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000,
1358538000, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060,
1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060,
1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060,
1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060,
1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060,
1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060,
1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060,
1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060,
1358538720, 1358549220, 1358549580, 1358549580, 1358550240, 1358550240,
1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240,
1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240,
1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240,
1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240,
1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240,
1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240,
1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240,
1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240,
1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240,
1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240,
1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240,
1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300,
1358550300, 1358550300, 1358550300, 1358550360, 1358550360, 1358550360,
1358550360, 1358550480, 1358550480, 1358550480, 1358550660, 1358550660,
1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660,
1358550660, 1358550840, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140,
1358551140, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200,
1358551200, 1358551200, 1358551200, 1358551200, 1358551320, 1358551320,
1358551320, 1358551320, 1358551320, 1358551320, 1358551800, 1358552580,
1358552580, 1358553300, 1358553300, 1358553300, 1358553300, 1358553300,
1358553720, 1358553720, 1358554320, 1358554320, 1358554320, 1358554320,
1358554320, 1358554380, 1358554380, 1358555100, 1358555100, 1358555100,
1358555280, 1358555280, 1358555280, 1358555460, 1358555460, 1358555460,
1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460,
1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520,
1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520,
1358555520, 1358555520, 1358555640, 1358556240, 1358556240, 1358556240,
1358556240, 1358556300, 1358556300, 1358556300, 1358557020, 1358557860,
1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280,
1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280,
1358558280, 1358558280, 1358559660, 1358559660, 1358559900, 1358559900,
1358559900, 1358559900, 1358560140, 1358613600, 1358614140, 1358614140,
1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140,
1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140,
1358614140, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200,
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200,
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200,
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200,
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200,
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200,
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200,
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200,
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200,
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200,
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200,
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200,
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200,
1358614200, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260,
1358614260, 1358614260, 1358614320, 1358614320, 1358614320, 1358614320,
1358614320, 1358614500, 1358614500, 1358614500, 1358614500, 1358616120,
1358616120, 1358616120, 1358616120, 1358618280, 1358618280, 1358618280,
1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280,
1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280,
1358618280, 1358618280, 1358618280, 1358620500, 1358620500, 1358620500,
1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358689500,
1358689500, 1358689500, 1358689560, 1358691060, 1358708700, 1358708700,
1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700,
1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708760,
1358709600, 1358726940, 1358726940, 1358726940, 1358726940, 1358727000,
1358727000, 1358768340, 1358768340, 1358856660, 1358856660, 1358856660,
1358856660, 1358856660, 1358856660, 1358856720, 1358856720, 1358856720,
1358857080, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260,
1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260,
1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260,
1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260,
1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260,
1358857260, 1358857260, 1358857260, 1358857260, 1358857380, 1358857380,
1358857500, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160,
1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160,
1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160,
1358858460, 1358858520, 1358859780, 1358859840, 1358859840, 1358862600,
1358862660, 1358863440, 1358863500, 1358896140, 1358896140, 1358896200,
1358896200, 1358943900, 1358943900, 1358943900, 1358944020, 1358944020,
1358944080, 1358949840, 1358949840, 1358949840, 1358949840, 1358949840,
1358949900, 1358949900, 1358949900, 1358949900, 1358949960, 1358949960,
1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960,
1358949960, 1358949960, 1358949960, 1358949960, 1358950020, 1358950020,
1358950020, 1358950020, 1358966040, 1358966040, 1358966040, 1358966040,
1358966040, 1358966100, 1358966520, 1358966520, 1358966640, 1358966640,
1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000,
1358967000, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720,
1358970720, 1358970720, 1358970720, 1359120240, 1359120240, 1359120240,
1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240,
1359120420, 1359194820, 1359194820, 1359194820, 1359194820, 1359223080,
1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080,
1359224100), class = c("POSIXct", "POSIXt"), tzone = ""), Operasyon = structure(c(1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L,
2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L,
1L, 1L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
3L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L,
3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 3L, 3L, 3L, 3L,
3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L,
3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("installed",
"removed", "upgraded"), class = "factor"), Paket = structure(c(381L,
563L, 126L, 406L, 506L, 23L, 31L, 108L, 56L, 654L, 50L, 257L,
330L, 429L, 428L, 17L, 3L, 130L, 227L, 49L, 517L, 51L, 256L,
250L, 530L, 573L, 77L, 179L, 86L, 57L, 437L, 247L, 125L, 204L,
209L, 177L, 652L, 547L, 61L, 466L, 52L, 62L, 64L, 70L, 84L, 87L,
104L, 116L, 143L, 170L, 111L, 438L, 420L, 315L, 271L, 207L, 210L,
172L, 185L, 190L, 189L, 545L, 191L, 198L, 217L, 379L, 382L, 400L,
218L, 399L, 380L, 383L, 388L, 554L, 144L, 291L, 391L, 392L, 394L,
405L, 409L, 528L, 32L, 324L, 53L, 472L, 270L, 220L, 455L, 65L,
135L, 139L, 427L, 8L, 426L, 435L, 436L, 285L, 336L, 288L, 469L,
470L, 471L, 511L, 549L, 548L, 551L, 572L, 575L, 586L, 612L, 389L,
20L, 21L, 24L, 25L, 80L, 91L, 402L, 282L, 193L, 40L, 468L, 106L,
273L, 331L, 390L, 433L, 457L, 546L, 192L, 588L, 587L, 649L, 354L,
97L, 248L, 94L, 359L, 289L, 237L, 456L, 348L, 592L, 350L, 205L,
347L, 363L, 642L, 616L, 632L, 622L, 621L, 618L, 627L, 626L, 619L,
620L, 93L, 623L, 629L, 404L, 599L, 628L, 598L, 355L, 261L, 320L,
373L, 366L, 634L, 640L, 624L, 631L, 637L, 593L, 635L, 602L, 377L,
638L, 639L, 512L, 369L, 503L, 368L, 644L, 187L, 361L, 615L, 362L,
641L, 643L, 393L, 645L, 646L, 647L, 88L, 358L, 352L, 648L, 630L,
55L, 353L, 253L, 251L, 188L, 576L, 375L, 376L, 601L, 397L, 508L,
374L, 600L, 578L, 140L, 577L, 633L, 367L, 349L, 360L, 636L, 625L,
591L, 337L, 650L, 561L, 14L, 293L, 34L, 234L, 326L, 181L, 142L,
171L, 430L, 48L, 351L, 365L, 533L, 325L, 410L, 424L, 136L, 268L,
328L, 233L, 22L, 232L, 112L, 159L, 160L, 173L, 357L, 58L, 610L,
542L, 356L, 76L, 103L, 161L, 214L, 260L, 412L, 201L, 460L, 531L,
541L, 47L, 13L, 59L, 12L, 162L, 163L, 60L, 246L, 286L, 555L,
556L, 562L, 603L, 370L, 343L, 605L, 155L, 423L, 156L, 4L, 36L,
340L, 287L, 341L, 327L, 157L, 333L, 604L, 458L, 459L, 482L, 292L,
571L, 266L, 570L, 529L, 432L, 222L, 385L, 564L, 606L, 607L, 194L,
364L, 450L, 451L, 180L, 133L, 132L, 158L, 608L, 582L, 581L, 609L,
611L, 613L, 614L, 413L, 597L, 245L, 520L, 372L, 39L, 544L, 443L,
120L, 63L, 89L, 321L, 314L, 6L, 401L, 90L, 577L, 378L, 516L,
386L, 169L, 552L, 557L, 166L, 145L, 122L, 122L, 145L, 166L, 278L,
500L, 501L, 534L, 496L, 480L, 485L, 478L, 495L, 371L, 67L, 68L,
10L, 513L, 452L, 448L, 617L, 486L, 492L, 490L, 494L, 487L, 497L,
121L, 203L, 85L, 568L, 653L, 425L, 9L, 566L, 408L, 590L, 102L,
228L, 229L, 230L, 294L, 145L, 124L, 322L, 255L, 323L, 565L, 316L,
168L, 544L, 479L, 481L, 243L, 66L, 215L, 419L, 596L, 95L, 311L,
176L, 407L, 504L, 505L, 509L, 178L, 30L, 41L, 263L, 422L, 302L,
175L, 300L, 384L, 301L, 303L, 304L, 305L, 75L, 543L, 16L, 567L,
510L, 267L, 579L, 536L, 141L, 532L, 499L, 235L, 462L, 11L, 174L,
72L, 119L, 96L, 128L, 196L, 182L, 416L, 1L, 239L, 236L, 521L,
523L, 241L, 240L, 269L, 329L, 387L, 242L, 275L, 656L, 550L, 283L,
221L, 202L, 295L, 147L, 211L, 249L, 208L, 244L, 338L, 342L, 415L,
417L, 515L, 519L, 540L, 574L, 589L, 651L, 82L, 79L, 334L, 317L,
274L, 284L, 279L, 507L, 71L, 98L, 219L, 594L, 580L, 454L, 298L,
453L, 395L, 206L, 306L, 307L, 467L, 308L, 33L, 200L, 199L, 197L,
309L, 310L, 345L, 346L, 312L, 262L, 184L, 118L, 81L, 280L, 259L,
19L, 18L, 476L, 489L, 252L, 484L, 477L, 493L, 28L, 335L, 418L,
272L, 183L, 414L, 461L, 553L, 537L, 225L, 411L, 27L, 166L, 483L,
491L, 138L, 290L, 475L, 488L, 113L, 115L, 114L, 332L, 29L, 498L,
105L, 146L, 319L, 127L, 35L, 431L, 15L, 164L, 167L, 213L, 464L,
463L, 465L, 186L, 117L, 344L, 258L, 165L, 231L, 313L, 134L, 137L,
516L, 131L, 74L, 73L, 34L, 204L, 380L, 501L, 601L, 595L, 110L,
154L, 151L, 150L, 195L, 583L, 2L, 238L, 224L, 299L, 223L, 265L,
152L, 129L, 297L, 296L, 559L, 46L, 42L, 43L, 44L, 398L, 38L,
277L, 403L, 281L, 78L, 37L, 538L, 539L, 254L, 421L, 149L, 558L,
318L, 153L, 148L, 441L, 444L, 440L, 439L, 449L, 525L, 527L, 535L,
276L, 524L, 522L, 526L, 447L, 446L, 442L, 445L, 101L, 92L, 69L,
123L, 100L, 99L, 396L, 92L, 655L, 54L, 26L, 212L, 502L, 107L,
45L, 7L, 569L, 264L, 585L, 32L, 560L, 181L, 30L, 29L, 34L, 39L,
171L, 183L, 186L, 302L, 300L, 301L, 303L, 304L, 305L, 306L, 307L,
308L, 309L, 310L, 312L, 380L, 414L, 500L, 504L, 514L, 83L, 584L,
518L, 473L, 109L, 5L, 474L, 226L, 434L, 474L, 473L, 5L, 83L,
514L, 518L, 584L, 216L, 253L, 251L, 188L, 208L, 249L, 244L, 397L,
32L, 135L, 139L, 332L, 443L, 452L, 629L, 628L, 648L, 502L, 43L,
44L, 47L, 544L, 514L, 83L, 584L, 518L, 473L, 5L, 474L, 339L), .Label = c("a52dec",
"aalib", "acl", "alsa-lib", "alsa-plugins", "alsa-utils", "apache-ant",
"archlinux-keyring", "arj", "asciidoc", "aspell", "at-spi2-atk",
"at-spi2-core", "atk", "atkmm", "attica", "attr", "audacious",
"audacious-plugins", "autoconf", "automake", "avahi", "bash",
"binutils", "bison", "blas", "blueman", "bluez", "boost", "boost-libs",
"bzip2", "ca-certificates", "ca-certificates-java", "cairo",
"cairomm", "cdparanoia", "celt", "chromaprint", "chromium", "cloog",
"clucene", "clutter", "clutter-gst", "clutter-gtk", "cmake",
"cogl", "colord", "compositeproto", "coreutils", "cracklib",
"cronie", "cryptsetup", "curl", "curlpaste", "damageproto", "db",
"dbus", "dbus-glib", "dconf", "desktop-file-utils", "device-mapper",
"dhcpcd", "dialog", "diffutils", "dirmngr", "dnssec-anchors",
"docbook-xml", "docbook-xsl", "doukutsu", "e2fsprogs", "enca",
"enchant", "eog", "exempi", "exiv2", "exo", "expat", "faac",
"faad2", "fakeroot", "feh", "ffmpeg", "fftw", "file", "file-roller",
"filesystem", "findutils", "fixesproto", "flac", "flashplugin",
"flex", "fltk", "fontconfig", "fontsproto", "foxitreader", "freeglut",
"freetype2", "fribidi", "frogatto", "frogatto-data", "frozen-bubble",
"fuse", "garcon", "gawk", "gc", "gcc", "gcc-fortran", "gcc-libs",
"gconf", "gdb", "gdbm", "gdk-pixbuf2", "gedit", "geoip", "geoip-database",
"gettext", "ghex", "giblib", "giflib", "git", "git-cola", "gitg",
"glew", "glib-networking", "glib2", "glibc", "glibmm", "glu",
"gmime", "gmp", "gnome-desktop", "gnome-icon-theme", "gnome-icon-theme-symbolic",
"gnome-system-monitor", "gnupg", "gnutls", "go", "gobject-introspection",
"gpgme", "gpm", "grantlee", "graphite", "grep", "groff", "gsettings-desktop-schemas",
"gsl", "gsm", "gst-libav", "gst-plugins-bad", "gst-plugins-base",
"gst-plugins-base-libs", "gst-plugins-good", "gst-plugins-ugly",
"gstreamer", "gstreamer0.10", "gstreamer0.10-base", "gstreamer0.10-base-plugins",
"gtk-engines", "gtk-update-icon-cache", "gtk2", "gtk2-xfce-engine",
"gtk3", "gtk3-xfce-engine", "gtkmm", "gtkmm3", "gtksourceview3",
"gtkspell", "gvfs", "gvim", "gzip", "harfbuzz", "heirloom-mailx",
"hicolor-icon-theme", "hspell", "hsqldb-java", "hunspell", "hwids",
"hyphen", "iana-etc", "icon-naming-utils", "icu", "ilmbase",
"imagemagick", "imlib2", "inetutils", "inkscape", "inputproto",
"intel-dri", "iproute2", "iptables", "iputils", "ipw2200-fw",
"isl", "iso-codes", "jack", "jasper", "jdk7-openjdk", "jfsutils",
"jre7-openjdk", "jre7-openjdk-headless", "js", "json-c", "json-glib",
"kbd", "kbproto", "kdelibs", "keyutils", "khrplatform-devel",
"kmod", "krb5", "lame", "lapack", "lcms", "lcms2", "ldns", "leafpad",
"less", "libarchive", "libass", "libassuan", "libasyncns", "libatasmart",
"libavc1394", "libcaca", "libcanberra", "libcanberra-pulse",
"libcap", "libcddb", "libcdio", "libcdio-paranoia", "libcroco",
"libcups", "libdaemon", "libdatrie", "libdbusmenu-qt", "libdca",
"libdrm", "libdv", "libdvbpsi", "libdvdnav", "libdvdread", "libebml",
"libedit", "libegl", "libevent", "libexif", "libffi", "libfontenc",
"libgbm", "libgcrypt", "libgl", "libglade", "libglapi", "libgme",
"libgnome-keyring", "libgpg-error", "libgssglue", "libgtop",
"libguess", "libgusb", "libice", "libid3tag", "libidl2", "libidn",
"libiec61883", "libimobiledevice", "libiodbc", "libjpeg-turbo",
"libkate", "libksba", "libldap", "liblqr", "libltdl", "libmad",
"libmatroska", "libmikmod", "libmms", "libmng", "libmodplug",
"libmowgli", "libmp4v2", "libmpc", "libmpcdec", "libmpeg2", "libnl",
"libnotify", "libogg", "libpcap", "libpciaccess", "libpeas",
"libpipeline", "libplist", "libpng", "libproxy", "libpulse",
"libquvi", "libquvi-scripts", "libqzeitgeist", "libraw1394",
"libreoffice-base", "libreoffice-calc", "libreoffice-common",
"libreoffice-draw", "libreoffice-gnome", "libreoffice-impress",
"libreoffice-kde4", "libreoffice-math", "libreoffice-postgresql-connector",
"libreoffice-sdk", "libreoffice-sdk-doc", "libreoffice-tr", "libreoffice-writer",
"librsvg", "libsamplerate", "libsasl", "libsecret", "libshout",
"libsidplay", "libsigc++", "libsm", "libsndfile", "libsoup",
"libsoup-gnome", "libssh2", "libtasn1", "libthai", "libtheora",
"libtiff", "libtiger", "libtirpc", "libtool", "libtorrent-rasterbar",
"libunique", "libupnp", "libusb-compat", "libusbx", "libutempter",
"libva", "libva-driver-intel-g45-h264", "libvisual", "libvorbis",
"libvpx", "libwnck", "libwnck3", "libwpd", "libwps", "libx11",
"libxau", "libxaw", "libxcb", "libxcomposite", "libxcursor",
"libxdamage", "libxdmcp", "libxext", "libxfce4ui", "libxfce4util",
"libxfixes", "libxfont", "libxft", "libxi", "libxinerama", "libxkbfile",
"libxklavier", "libxml2", "libxmu", "libxpm", "libxrandr", "libxrender",
"libxres", "libxslt", "libxss", "libxt", "libxtst", "libxv",
"libxvmc", "libxxf86vm", "libyaml", "licenses", "linux", "linux-api-headers",
"linux-firmware", "logrotate", "lpsolve", "lsof", "lua", "lua51",
"lvm2", "m4", "make", "man-db", "man-pages", "mcpp", "mdadm",
"media-player-info", "mercurial", "mesa", "mjpegtools", "mkinitcpio",
"mkinitcpio-busybox", "mozilla-common", "mpfr", "mpg123", "mtdev",
"nano", "ncurses", "neon", "net-tools", "netcfg", "nettle", "notification-daemon",
"nspr", "nss", "obex-data-server", "opencore-amr", "openexr",
"openjpeg", "openobex", "openssh", "openssl", "opus", "orbit2",
"orc", "p11-kit", "p7zip", "pacman", "pacman-mirrorlist", "pam",
"pambase", "pango", "pangomm", "parted", "patch", "pavucontrol",
"pciutils", "pcmciautils", "pcre", "perl", "perl-alien-sdl",
"perl-capture-tiny", "perl-class-inspector", "perl-compress-bzip2",
"perl-error", "perl-file-sharedir", "perl-file-which", "perl-ipc-system-simple",
"perl-sdl", "perl-test-pod", "perl-tie-simple", "perl-xml-parser",
"perl-xml-simple", "perl-yaml-syck", "phonon", "phonon-vlc",
"pinentry", "pixman", "pkg-config", "pm-quirks", "pm-utils",
"polkit", "polkit-gnome", "polkit-qt", "poppler", "poppler-data",
"poppler-glib", "popt", "postgresql-libs", "ppl", "ppp", "procps-ng",
"psmisc", "pth", "pulseaudio", "pulseaudio-alsa", "pygobject-devel",
"pygobject2-devel", "pygtk", "pyqt-common", "python", "python-dbus-common",
"python-pyinotify", "python2", "python2-beaker", "python2-cairo",
"python2-dbus", "python2-distribute", "python2-docutils", "python2-gobject",
"python2-gobject2", "python2-jinja", "python2-mako", "python2-markupsafe",
"python2-notify", "python2-pygments", "python2-pyqt", "python2-sip",
"python2-sphinx", "qbittorrent", "qca", "qt", "qtwebkit", "r",
"randrproto", "raptor", "rasqal", "readline", "recode", "recordproto",
"redland", "redland-storage-virtuoso", "reiserfsprogs", "renderproto",
"rsync", "rtkit", "rtmpdump", "ruby", "run-parts", "sbc", "schroedinger",
"scrnsaverproto", "sdl", "sdl_gfx", "sdl_image", "sdl_mixer",
"sdl_net", "sdl_pango", "sdl_ttf", "sed", "sg3_utils", "shadow",
"shared-color-profiles", "shared-desktop-ontologies", "shared-mime-info",
"sip", "smpeg", "soprano", "sound-theme-freedesktop", "soundtouch",
"spandsp", "speex", "sqlite", "startup-notification", "strigi",
"sudo", "sysfsutils", "syslinux", "systemd", "systemd-sysvcompat",
"sysvinit-tools", "taglib", "tar", "tcl", "tdb", "texinfo", "thunar",
"thunar-volman", "tk", "totem", "totem-plparser", "traceroute",
"ttf-dejavu", "tumbler", "tzdata", "udisks", "udisks2", "unace",
"unixodbc", "unrar", "unzip", "upower", "usbmuxd", "usbutils",
"util-linux", "v4l-utils", "vi", "videoproto", "vim", "vim-runtime",
"virtuoso-base", "vlc", "vte", "vte-common", "wavpack", "webrtc-audio-processing",
"wget", "which", "wpa_actiond", "wpa_supplicant", "x264", "xampp",
"xbitmaps", "xcb-proto", "xcb-util", "xcb-util-keysyms", "xchat",
"xclip", "xdg-utils", "xextproto", "xf86-input-evdev", "xf86-input-synaptics",
"xf86-video-intel", "xf86vidmodeproto", "xfce4-appfinder", "xfce4-mixer",
"xfce4-panel", "xfce4-power-manager", "xfce4-session", "xfce4-settings",
"xfce4-terminal", "xfconf", "xfdesktop", "xfsprogs", "xfwm4",
"xfwm4-themes", "xineramaproto", "xkeyboard-config", "xmlto",
"xorg-bdftopcf", "xorg-font-util", "xorg-font-utils", "xorg-fonts-alias",
"xorg-fonts-encodings", "xorg-fonts-misc", "xorg-iceauth", "xorg-luit",
"xorg-mkfontdir", "xorg-mkfontscale", "xorg-server", "xorg-server-common",
"xorg-server-utils", "xorg-sessreg", "xorg-setxkbmap", "xorg-twm",
"xorg-xauth", "xorg-xbacklight", "xorg-xclock", "xorg-xcmsdb",
"xorg-xgamma", "xorg-xhost", "xorg-xinit", "xorg-xinput", "xorg-xkbcomp",
"xorg-xmodmap", "xorg-xrandr", "xorg-xrdb", "xorg-xrefresh",
"xorg-xset", "xorg-xsetroot", "xproto", "xterm", "xvidcore",
"xz", "zip", "zlib", "zsh", "zvbi"), class = "factor")), .Names = c("Zaman",
"Operasyon", "Paket"), row.names = c(NA, -730L), class = "data.frame")
```
|
2013/01/27
|
[
"https://Stackoverflow.com/questions/14549433",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/886669/"
] |
You must express that you want *days*; this should do the trick:
```
table(factor(format(data$Zaman,"%D")))
```
|
You get an error with `count` because base R has no function named `count`.
But you can use `count` from the `plyr` package:
```
count(dat, vars = 'Zaman')
Zaman freq
1 2013-01-18 20:39:00 66
2 2013-01-18 20:40:00 16
3 2013-01-18 20:41:00 47
4 2013-01-18 20:52:00 1
5 2013-01-18 23:47:00 1
6 2013-01-18 23:53:00 2
7 2013-01-19 00:04:00 68
8 2013-01-19 00:05:00 9
9 2013-01-19 00:06:00 4
10 2013-01-19 00:08:00 3
11 2013-01-19 00:11:00 9
12 2013-01-19 00:14:00 1
13 2013-01-19 00:19:00 89
14 2013-01-19 00:20:00 9
```
| 14,519
|
22,325,245
|
I have an example script where I am searching for `[Data]` in a line. The odd thing is that it always matches when reading the file with `csv.reader`. See code below. Any ideas?
```
#!/opt/Python-2.7.3/bin/python
import csv
import re
import os
content = '''# foo
[Header],,
foo bar,blah,
[Settings]
Yadda,yadda
[Data],,
Alfa,Beta,Gamma,Delta,Epsilon
One,Two,Tree,Four,Five
'''
f1 = open("/tmp/file", "w")
print >>f1, content
f1.close()
f = open("/tmp/file", "r")
reader = csv.reader(f)
found_csv = 0
mycsv = []
for l in reader:
line = str(l)
if line == '[]': continue
if re.search("[Data]", line):
print line
found_csv = 1
if found_csv:
mycsv.append(l)
```
Prints:
```
['[Header]', '', '']
['foo bar', 'blah', '']
['[Settings]']
['Yadda', 'yadda']
['[Data]', '', '']
['Alfa', 'Beta', 'Gamma', 'Delta', 'Epsilon']
```
|
2014/03/11
|
[
"https://Stackoverflow.com/questions/22325245",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/719016/"
] |
If you want to find a line containing the string `[Data]` (with the brackets), you should add `\` before the brackets in the pattern:
```
#if re.search("[Data]", line):
if re.search("\[Data\]", line):
```
The pattern `[Data]` without backslashes is a character class: it matches any single character from the set (`D`, `a`, or `t`) anywhere in the line, which is why every line matched.
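To make the difference concrete, here is a minimal standalone sketch (written for Python 3, whereas the question uses Python 2; the sample strings are taken from the printed output above):
```
import re

line = "['foo bar', 'blah', '']"

# Character class: matches any single 'D', 'a' or 't', so this line "matches"
print(re.search("[Data]", line))        # match object (matched the 'a' in 'bar')

# Escaped brackets: only a literal [Data] matches
print(re.search(r"\[Data\]", line))                      # None
print(re.search(r"\[Data\]", "['[Data]', '', '']"))      # match object
```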
|
Edit: The error in the original code is that `found_csv` is set the first time data is found and then never reset to 0. Alternatively, you can simply remove the need for it entirely:
```
mycsv = []
for l in reader:
line = str(l)
if line == '[]':
continue
elif "[Data]" in line:
print line
mycsv.append(l)
```
-- Original answer --
`re` is treating `"[Data]"` as a regular expression. You actually don't need `re` at all in this case and can simply change it to:
```
if "[Data]" in line:
```
However, if you are going to do more advanced regular-expression-based searching, just make sure to [format the patterns properly](https://pythex.org/):
```
if re.search(r'\[Data\]', line):
```
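As a small aside (not part of the original answer), the standard library's `re.escape` can build the escaped pattern for you when the literal text comes from a variable; a hedged sketch:
```
import re

literal = "[Data]"
pattern = re.escape(literal)   # escapes the brackets so they match literally

print(bool(re.search(pattern, "['[Data]', '', '']")))       # True
print(bool(re.search(pattern, "['foo bar', 'blah', '']")))  # False
```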
| 14,526
|